Urgh. I can't wait to retire.
I can’t say whether the bug count is higher or not. Maybe it is higher in terms of the total number of bugs I write during a coding session. But if my bug count goes up 10% while the speed with which I fix those bugs and reach a final edit of my code improves 30% or 40%, then bug count is not the right metric.
Maybe the differentiator is that I am a solo dev for all this work, so the negative effects of the copilot are only experienced by me. If I were on a 10-person team, the bugs and the weird out-of-context code snippets would be magnified across the 9 other people, and the negative effects would be stronger. But I don’t know.
Additionally, I find the copilot code suggestions during code reviews / pull requests sometimes useful. At times it can offer insightful bits about a code segment, such as potential exception-handling fixes.
I'd like to explore having copilot write unit tests, including representative test data, that exercise edge-case code paths. I haven't done this yet, but it seems like exactly the type of thing a "copilot" should do for me (not unlike pair programming, maybe).
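To make that concrete, here's a sketch of the kind of edge-path tests I'd hope a copilot could generate. The `clamp` helper and every test case are hypothetical, invented purely for illustration; I've used plain `assert` statements (run with `java -ea`) rather than a test framework to keep it self-contained:

```java
// Sketch: the sort of edge-path unit tests I'd want a copilot to write.
// The clamp helper and all test data are hypothetical, for illustration only.
public class ClampEdgeTests {
    // Restrict value to the inclusive range [lo, hi].
    static int clamp(int value, int lo, int hi) {
        if (lo > hi) throw new IllegalArgumentException("lo > hi");
        return Math.max(lo, Math.min(hi, value));
    }

    public static void main(String[] args) {
        // Typical case
        assert clamp(5, 0, 10) == 5;
        // Edge: exactly at the boundaries
        assert clamp(0, 0, 10) == 0;
        assert clamp(10, 0, 10) == 10;
        // Edge: just outside the range on either side
        assert clamp(-1, 0, 10) == 0;
        assert clamp(11, 0, 10) == 10;
        // Edge: degenerate single-point range
        assert clamp(42, 7, 7) == 7;
        // Edge: extreme inputs must not misbehave
        assert clamp(Integer.MAX_VALUE, 0, 10) == 10;
        assert clamp(Integer.MIN_VALUE, 0, 10) == 0;
        System.out.println("all edge paths passed");
    }
}
```

The representative test data (boundary values, degenerate ranges, integer extremes) is exactly the tedious-but-mechanical part I'd like to delegate.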
Having a copilot write my entire code base, that's another thing entirely. There would be too much going back and verifying that it got it right. I've also seen it conjure up completely bogus solutions. For example, I've had copilot offer a configuration change that was completely fabricated; it looked legitimate enough that a senior systems engineer attempted to install/deliver the "fix" it offered, when the suggestion was entirely made up.
Overall, I guess my experience with copilot is not much different than working with any human. Trust but verify.
No, it does not. Does an assistant have to be as qualified as their boss?
> “The LLM does not possess critical thinking, self-awareness, or the ability to think.”
This is completely irrelevant. The LLM can understand your instructions and it can type 30,000 times faster than you.
Much like how a nail gun won't magically build you a house; it'll just let you build one quicker.
I get great benefit out of LLMs for coding. I'm not a good coder, but I am decent at planning and understanding what I want, and LLMs get me there 100x quicker than not using them. I don't need 4 years of CS to learn all the tedious algos for sorting or searching. I just ask an AI for a bunch of examples, assess them for what they are, and get on with it. It can tell me the common pros and cons of it all, and much like any other decision in business I make my best judgement and go with it.

Need to sort a heap of data into x, y, z, or convert it from x to y? An LLM will show me the way; now I don't need to hire someone to do it for me.
But alas, so many seem to think a language-interpretation tool is actually a do-it-all, one-stop shop of production. PEBKAC: you're using the tool wrong.
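For what it's worth, the "convert it from x to y" snippets I mean are along these lines. This is a made-up sketch: the `Order` record, the field names, and the grouping are all hypothetical, just the shape of code an LLM typically hands back for this kind of ask:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of an LLM-suggested data conversion:
// turn a flat list of records into per-key totals.
public class ConvertExample {
    record Order(String customer, int amount) {}

    // Group orders by customer and sum the amounts.
    static Map<String, Integer> totalsByCustomer(List<Order> orders) {
        return orders.stream()
                .collect(Collectors.groupingBy(
                        Order::customer,
                        Collectors.summingInt(Order::amount)));
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(
                new Order("alice", 30),
                new Order("bob", 20),
                new Order("alice", 12));
        // alice -> 42, bob -> 20 (map iteration order is unspecified)
        System.out.println(totalsByCustomer(orders));
    }
}
```

Exactly the kind of thing I'd rather review for five minutes than write from memory.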
The best I can say is that the implementation in JetBrains IntelliJ IDEA is pretty good. It's basically only useful for some repetitive Java boilerplate, but actually that's perfect: it's mindless enough yet easy to validate. It makes me dislike programming in Java a little bit less.
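The boilerplate in question is the kind of thing below: a hypothetical value class (invented for this example) where the completion types out the constructor, getters, and `equals`/`hashCode`/`toString` almost verbatim, and a glance is enough to validate it:

```java
import java.util.Objects;

// Hypothetical value class: the repetitive parts (constructor, getters,
// equals/hashCode/toString) are what code completion fills in for me.
public class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y);
    }

    @Override
    public String toString() {
        return "Point(" + x + ", " + y + ")";
    }
}
```

Mindless to produce, trivial to check — which is precisely where this kind of completion shines.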