New Junior Developers Can’t Actually Code (nmn.gl)
Something’s been bugging me about how new devs code, and I need to talk about it. We’re at this weird inflection point in software development. Every junior dev I talk to has Copilot or Claude or GPT running 24/7. They’re shipping code faster than ever. But when I dig deeper into their understanding of what they’re shipping? That’s where things get concerning. Sure, the code works, but ask why it works that way instead of another way? Crickets. Ask about edge cases? Blank stares. The foundational knowledge that used to come from struggling through problems is just… missing. We’re trading deep understanding for quick fixes, and while it feels great in the moment, we’re going to pay for this later.
Agreed. I wanted to test a new config on my router yesterday, which is configured using scripts. So I thought I’d have ChatGPT figure it out for me instead of spending three hours reading documentation and trying tutorials. It was a test scenario, so I figured it might do well.
It did not do well at all. The scripts were mostly correct but often in the wrong order (referencing a thing before actually defining it). Sometimes the syntax would be totally wrong, and it kept mixing version 6 syntax with version 7 syntax (I’m on 7). It would also make mistakes, and when I pointed one out it would say, “Oh, you are totally right, I made a mistake,” then explain what mistake it had made and output new code. More often than not, though, the new code contained the exact same mistake. This is probably down to a lack of training data: it’s referencing only one example, and that example just had the mistake in it.
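To give a flavor of the ordering problem, here’s the kind of thing it would produce (a made-up sketch in a RouterOS-style scripting syntax; the details are invented, not my actual config):

    # What the AI kept generating: the route references $wanGateway
    # before the variable is declared, so it runs with an empty value.
    /ip route add dst-address=0.0.0.0/0 gateway=$wanGateway
    :local wanGateway "192.168.88.1"

    # Correct order: declare the variable first, then reference it.
    :local wanGateway "192.168.88.1"
    /ip route add dst-address=0.0.0.0/0 gateway=$wanGateway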
In the end I gave up on ChatGPT, searched for my test scenario, and it turned out a friendly dude on a forum had put together a tutorial. So I followed that and it almost worked right away. A couple of minutes of tweaking and testing and I had it working.
I’m afraid of a future where forums and such don’t exist and sources like Reddit get fucked and nuked. In an AI-driven world, the incentive for creating new original content is way lower. So when the AI doesn’t know the answer, you’re just hooped and have to reinvent the wheel yourself. In the long run this will destroy productivity rather than deliver the gains people are hoping for right now.
The one example could be flawless, but the output of an LLM is influenced by all of its training data. 99.999% of that data is irrelevant to your situation, so of course it’s going to degrade the output.

What you (and everyone else) need is a good search engine to find the needle in the haystack of human knowledge; you don’t need that haystack ground down to dust to give you a needle-shaped piece of crap with slightly more iron than average.