I’ve tried Copilot and to be honest, most of the time it’s a coin toss, even for short snippets. In one scenario it might try to autocomplete a unit test I’m writing and get it pretty much spot on, but it’s just as likely to spit out complete garbage that won’t even compile, never mind being semantically correct.
To have any chance of producing decent output, even for quite simple tasks, you need to give an LLM an extremely specific prompt, detailing the precise behaviour you want and what the code should do in each scenario, including failure cases (hmm…there used to be a term for this…).
Even then, there are no guarantees it won’t just spit out hallucinated nonsense. And for larger, enterprise-scale applications? Forget it.
Not exactly crazy, just mysterious…this was at a software company I worked at many years ago. It was one of the developers in the team adjacent to ours, someone I worked with occasionally - nice enough person, really friendly and helpful, everyone seemed to get on with them really well, and they generally seemed like a pretty competent developer. Nothing to suggest any kind of gross misconduct was happening.
Anyway, we all went off to get lunch one day and came back to an email saying this person no longer worked at the company, effective immediately. Never saw them again.
No idea what went down - but the culture at that place actually became pretty toxic after a while, which led to a few people (including me) quitting - so maybe they dodged a bullet.