simonw
I have yet to find a code assistant that's more generally useful than copying and pasting code back and forth from https://claude.ai and https://chatgpt.com

I use Copilot in VS Code as a typing assistant, but for the most part I find copy and paste into the chat interfaces is the most valuable way to use LLMs for coding.

Even more so with the Claude Artifacts feature, which lets me see an interactive prototype of frontend code instantly - and ChatGPT Code Interpreter, which can run, test, debug and rewrite Python snippets for me.

padolsey
Cursor is incredible. You can use whatever model you like. Give it your OpenAI key if you want o1-preview, though using it with Claude Sonnet is usually enough. I use it every day; it makes it possible to work across files and dependencies, and it can create and edit files on the fly, pending your approval of the git-style diffs it presents to you. I've been programming for 19 years and it's honestly a life-changer.
ChicagoDave
I’ve been working on a complex game engine in C# 8 using Claude 3.5. I’m using Visual Studio so copying artifacts from Claude to VS, then replacing uploads in Claude is a bit of a chore, but does provide good results.

There are definitely instances where Claude can’t solve a problem and I have to hand write the code and explain it.

It definitely gets confused with designing multiple modules together.

But there are times when it’s simply brilliant. I needed a specific inheritance pattern and Claude introduced the curiously recurring template pattern (sketched below), which had escaped me my whole career. It’s not something I’d use in business code, but in this use case it was perfect.
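
For anyone who hasn't run into it: CRTP has a subclass pass itself as a type parameter to its own base class, so base-class methods can return or operate on the concrete subtype. A minimal sketch, shown in TypeScript for illustration (my project is C#, where the shape is class Derived : Base<Derived>; all the names here are invented):

    // F-bounded generic: TSelf is "whichever class is extending Entity".
    abstract class Entity<TSelf extends Entity<TSelf>> {
      abstract clone(): TSelf;

      // Base-class method whose return type is the concrete subtype,
      // so callers don't lose type information.
      cloned(): TSelf {
        return this.clone();
      }
    }

    class Room extends Entity<Room> {
      constructor(public name: string) { super(); }
      clone(): Room { return new Room(this.name); }
    }

    const copy: Room = new Room("kitchen").cloned(); // typed as Room, not Entity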

Claude also helped me build a bi-directional graph to host my game world.

And Claude is phenomenal at unit testing whole projects, allowing me to find design flaws very fast.

My overall experience is that if you know what you’re building, GenAI can be extremely powerful; if you don’t, I could see the results going either way.

A less experienced developer won’t know when to steer the process in a particular direction; more senior developers will.

joshstrange
Copilot for autocomplete++

ChatGPT/Claude for larger chunks

Aider for multi-file edits

mistercow
Copilot’s main thing is to be autocomplete on steroids, so I’m not sure what exactly you’re looking for. Cursor’s Copilot++ can make more complex edits, which is good in boilerplate-heavy situations, but it’s not as good at inline completions. I use a combination, flipping Copilot++ on only when I’m doing rote migrations and refactors.

If you’re looking for “type in some English text and get fifty lines of code written”, Cursor’s chat is the best I’ve tried. But I’m not a fan of that workflow, so take my opinion with a grain of salt on that.

tetha
Hm, are other AI Assistants much more than just a bit fancier autocomplete?

We're using Copilot at work. When we were evaluating it, the question we asked our test group was: how much time does it save you per week? Most people estimated around 1-4 hours saved a week, especially when banging out new code in a code base with established patterns and architecture. That was a good enough tradeoff to buy it.

For example, I recently got a Terraform provider going for one of our systems. Copilot was useful for generating the boilerplate code for resources, so I just had to fill in the blanks of the actual logic. Or you can hand it sample JSON and it creates Go structs for an API client (roughly the sketch below), and generates the bulk of the methods that access the APIs with them. It's also decent at generating test cases for edge cases.
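
A hypothetical version of that sample-JSON-to-client flow, shown in TypeScript rather than Go to match the other snippets in this thread (the endpoint and payload are invented):

    // Sample response from GET /api/v1/servers/{id} (invented for illustration):
    //   { "id": "srv-42", "name": "build-01", "tags": ["ci", "linux"] }
    // The assistant derives the type from the pasted sample...
    interface Server {
      id: string;
      name: string;
      tags: string[];
    }

    // ...and fills in the accessor boilerplate; only the endpoint and the
    // error handling really need human attention.
    async function getServer(baseUrl: string, id: string): Promise<Server> {
      const res = await fetch(`${baseUrl}/api/v1/servers/${id}`);
      if (!res.ok) throw new Error(`GET server ${id} failed: ${res.status}`);
      return (await res.json()) as Server;
    }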

It doesn't enable me to do things I could not do before, but it saves me a lot of typing.

Well, maybe I wouldn't have tested my functions against all of these edge cases otherwise, because that's a lot of stuff to do, heh.

ctoth
I've had lots of luck with Aider, and it continues to get better. Another cool thing about Aider is that it's mostly written with itself [0], so it's an example of the flywheel effect, just as predicted.

[0]: https://aider.chat/HISTORY.html

thenipper
I really like Sourcegraph’s Cody. It’s got diffing like Cursor, you can choose models, chat, etc. Plus it’s only 8 bucks a month.
wkirby
I’ve yet to find any LLM that adds value instead of noise. Even the boilerplate you’d hope it could knock out easily is often subtly wrong or outdated.
georg-stone
Supermaven seems to be a good option. It's extremely fast and has a decent free tier. I personally use the free tier, and it's okay.
smukherjee19
I personally don't use any AI code assistant, but I did find the video below to be a level-headed analysis of the subject; the author uses four different AIs to build an HTTP server in Python.

Kinda different from your specific use case, but should give some hints on which one would serve you best, and is an interesting watch:

https://www.youtube.com/watch?v=-ONQvxqLXqE

makk
I’m using the private beta of Solver by Laredo Labs. It’s very early. That said, having tried Cody, Cursor, and copy/paste ChatGPT… Solver is far and away the best.

It’s a straight-up pair programmer. Point it at your GitHub repository and then just converse with it. It drives, you look over its shoulder. Imagine OpenAI’s o1 connected to your GitHub repo, producing diffs or PRs on command, with a ChatGPT-style view with tabs for switching between the conversation and the diff.

Also, for front-end work there is v0.dev, which is great for whipping stuff together.

yoduhvegas
i like zed, which uses claude 3.5 sonnet.

i had cursor then ran out of free credit. i had github copilot then found it too expensive.

given i'm a software engineer i'm basically looking for free. a.i. is like the t-shirt of the digital world, as in, i don't care where it comes from, just give me a free one when i use your product.

maille
I find Codeium on PyCharm very powerful. Too bad their plugin for MSVC is so limited in comparison.
bearjaws
https://aider.chat/

It's CLI-only, so you have two contexts you work in. One is you writing code manually, just like you're used to. The other is a specific context of files that you tell the LLM you're working on.

By creating a separate context, you get much better results when you keep the tasks small.

Specifically, use it with Claude 3.5 Sonnet.
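
A session looks roughly like this (a sketch: the file names are invented, and it assumes a recent aider with an Anthropic API key configured):

    $ aider --model sonnet api.py tests/test_api.py
    # Only the named files are in the LLM's editing context.
    # Inside the chat, /add and /drop adjust that context as the task changes:
    #   /add models.py
    #   /drop tests/test_api.py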

djeastm
Are the authors of the code assistants using those assistants to write the code assistant's code?
muratsu
Auto-complete on steroids sounds like the correct assessment.

Aider & Cursor is worth the try if you're interested in trying out multi-file edits. I think both are interesting (ergonomics-wise) but the current SOTA models do not perform well.

For single file edits or chat, I would recommend Cody (generous free tier) or some self-hosted alternative.

hnrodey
Three options I've tried:

- Copilot using Visual Studio and VS Code

- ChatGPT Plus / Claude, copy/pasting back and forth

- Cursor, free trial and with a Claude API key

Copilot was like 30/70 good-to-bad. The autocomplete hijacks my mind, creating a mental block on my ability to write the next token of code. The suggestions were occasionally amazing, but multiple times it introduced a subtle bug that I missed and later spent hours debugging. Not really a time saver. I quit Copilot just as they were introducing the embedded chat feature, so maybe it's gotten better.

In Visual Studio, I thought Copilot was garbage. The results (compared to using in VS Code) were just awful. The VS extension felt unrefined and lacking.

ChatGPT / Claude - this is a decent way to get AI programming. Multiple times it fixed bugs for me in ways that simply blew me away with its ability to understand the code. I love its ability to scaffold large chunks of working code so I can then get busy enhancing it for the real stuff. It will often suggest code using an older version of a framework or API, so it's necessary to prompt it with something like "For Next.js, use code from v14 and the app router". Some thought has to go into the prompt to increase the chances of getting it right the first time.

Cursor - ah, Cursor. Thus far, my favorite. I went through my free trial and opted into the free plan. The embedded sidebar is nice for AI chat - all of the benefits of using ChatGPT/Claude while keeping me directly in the "IDE". The cost is relatively cheap when hooked up to my Claude API key. I like the ability to ask questions about specific lines of code (embedded in the current window), or to add multiple files to the chat window to give it more context.

Cursor does a great job at keeping you in the code the entire time so there's less jumping from Cursor to browser and back.

Winner: Cursor

As a C#/Java backend developer, you might not like leaving IntelliJ or Visual Studio to use Cursor or VS Code. Very understandable. In that case, I'd probably stick to ChatGPT Plus or paid Claude. I suggest the premium versions for better uptime and higher limits on their flagship models.

The free versions might get you by, but expect to be kicked out of them from time to time based on system demand.

jamil7
I use aider and Claude for some work, but I’ve found that the quality varies a huge amount, and the fact that I need to closely check its output negates some of the productivity gains. Claude in general seems to have gone downhill over the last few months, likely as they’re scaling. Like a lot of us, I guess, I’m still figuring out which tasks LLMs are actually good for.
devinsvysh
Found Qodo (an IntelliJ plugin) helpful for Java. Haven’t tried it for complex requests. Used it on a personal repo; I can see its recommendations being useful for avoiding PR review churn if you’re new to a code base.

(Not affiliated with the company, it was called CodiumAI earlier)

pftg
Copilot delivers the most effective performance for its cost.

After trying other options like Continue + deepseek-v2, I found that the expense of hosting a larger local LLM is too high to match Copilot's performance.

I played with Continue + Yi-Coder too; it takes a lot of back-and-forth clarifying requests to get valid code.

I decided to stick with Copilot.

mlboss
Cursor. It is so smart. It feels like it understands what I am thinking. The killer feature is that it can directly modify the source files and present a diff you can accept or reject. They are doing cool work on top of their VS Code fork.
yapyap
GitHub Copilot for me is fine.

It is meant to be an autocomplete-on-steroids-ish feature where you have to read through all the code it generates, because at the end of the day it’s a black box you can’t trust.

But for low-intelligence, easy tasks it’s generally a fine product.

I feel like most AI coding assistants are though.

tikkun
I polled my friends and there were two major themes:

1) Claude/ChatGPT (copy and pasting back and forth)

2) Cursor

shahzaibmushtaq
For some, the best AI code assistant is Copilot; for others, it's something else.

It depends entirely on subjective experience, because everyone experiences these tools differently.

redox99
I really enjoy Cursor. A bit expensive at 20 USD/month, but still a good ROI.
agadius
ChatGPT has a new canvas mode that allows for editing code. I found it very good - less copy-pasting than before.
Havoc
Cursor seems to be the flavour of the month.
amelius
Any open-source (or on premises) assistants that are worth trying?
0points
> So far, I've tried Copilot, but to say that I'm disappointed is an understatement.

Don't expect any other offerings to change your mind. We are years away from AGI or anything generally useful in this area. It's only a matter of time until the rest of the world realizes this and stops the hype.

emmanueloga_
TL;DR: Writing custom scripts may be more effective for communicating with LLMs than using a fancy GUI. Details follow below :-).

----

Like many here, I’ve been using both GitHub Copilot on VS Code and copy-pasting from a ChatGPT window.

Copilot is super hit or miss. Honestly, it rarely spits out anything useful unless I’m writing really repetitive code, where it might save me a few keystrokes. But even then, I could often just use some "Vim tricks" (like recording a macro or something) and get the same result. The built-in chat is a total waste of time... sigh.

ChatGPT has been way more helpful. But even with that, I often feel like it’s just a really fancy rubber duck or a glorified search engine. Still, it's way better than a Google/Bing search sometimes. I’ve been using a prompt someone here shared (maybe this one verbatim? [0] I need to shop for prompts again :-p) and that could be making a difference... I did not A/B test prompts but at least ChatGPT stopped apologizing so much lol.

I do want to try Cursor and Zed AI since I’ve heard good things. I also saw a recent post here about layouts.dev [1], and it looks really impressive. I’ve been asking ChatGPT for nice Tailwind CSS patterns, and the workflow in that tool seems really streamlined and nice for web design (only caveat is... I'm not really interested in NextJS right now #shrug). BTW, nobody ever seems to talk about Gemini? I personally almost never reach for it, for whatever reason...

----

Now for the part about scripting your LLM interaction yourself... I’ve been working on a passion project lately: a programming languages database. I stumbled across this cool pattern [2] where I write code that generates data, and that data can then be used to generate more code. (Code is data is code, right?) I used OpenAI's Structured Outputs [3], and after massaging TypeScript types and JSON Schemas for a while, it generated pretty easy-to-digest output.

The interesting part is that you can use this setup to feed prompts into ChatGPT in a much easier way. Imagine something like this:

    const code = selectThingFromCodebase(); // Not necessarily SQL! Perhaps just concatenating your files, as people mention here.

    const answer = sendChatGPT(promptFrom(code));

    const newCode = generateCodeFrom(answer);

    profitFrom(newCode); // :-p
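
Concretely, the sendChatGPT step can use Structured Outputs so the answer comes back as validated JSON instead of prose. A minimal sketch, assuming the official openai Node SDK and zod (the Language schema here is invented for illustration, not the one from my repo):

    import OpenAI from "openai";
    import { z } from "zod";
    import { zodResponseFormat } from "openai/helpers/zod";

    // Invented schema: the shape we want the model's answer to conform to.
    const Language = z.object({
      name: z.string(),
      paradigms: z.array(z.string()),
      firstAppeared: z.number(),
    });

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    async function sendChatGPT(prompt: string) {
      const completion = await client.beta.chat.completions.parse({
        model: "gpt-4o-2024-08-06",
        messages: [{ role: "user", content: prompt }],
        response_format: zodResponseFormat(Language, "language"),
      });
      // .parsed is already validated against the zod schema above.
      return completion.choices[0].message.parsed;
    }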

I think this pattern has a lot of potential, but I need to play around with it more. For now, I’ve got a super crude but working example of how I pulled this off for my little programming languages database (coming soon, hopefully :-p). I did this so that I or a contributor can run a script to generate the code for a pull request adding more data to the project.

NOTE: my example isn’t very... "meta" since the data<->code thing doesn't really describe the project itself. To expand on this idea, we might need to dust off some of the old declarative tools like UML or 4GLs or come up with something inspired by those things. If this sounds vague, it’s because it is—but maybe it makes some sense to someone here :-p.

----

0: https://www.reddit.com/r/ChatGPTPro/comments/15ffpx3/comment...

1: https://news.ycombinator.com/item?id=41785751

2: https://github.com/EmmanuelOga/plangs2/blob/main/packages/ai...

3: https://platform.openai.com/docs/guides/structured-outputs