Anthropic accidentally released their Claude source code. There are humans checking out this code and getting upset about how terrible it is. I’m 27 years into working in IT. I’ve seen open source code and proprietary code. Sometimes it’s thoughtful, well documented, and clear about what’s intended. Most of the time, I’m amazed the junk actually works.
Given all the different things I’ve worked on in my career, the one thing that’s remained consistent is finding ever-worse source code running something important: code designed to be used in one very specific way, without any sort of error handling (meaning it’s just waiting to be abused).
- Does it work? Yes, for the specified use case.
- Does it break? Yes, when it’s not used correctly.
- Does it break the service? Sometimes, but usually not in a way anyone notices.
Given these factors, it’s going to go unnoticed.
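A minimal sketch of the pattern described above (hypothetical code of my own, not from any leaked source): it works perfectly for the one input shape it was written for, and falls over the moment anything else arrives.

```python
# Hypothetical example of fragile-but-working code: one expected input
# shape, zero validation, no error handling.

def parse_record(line):
    # Assumes every line looks exactly like "name,age,city".
    name, age, city = line.split(",")
    return {"name": name, "age": int(age), "city": city}

# Works for the use case it was written for:
parse_record("Ada,36,London")
# -> {'name': 'Ada', 'age': 36, 'city': 'London'}

# Breaks when not used correctly:
# parse_record("Ada,London")         -> ValueError (wrong field count)
# parse_record("Ada,thirty,London")  -> ValueError (non-numeric age)
```

As long as every caller happens to pass well-formed lines, nobody notices the missing error handling, which is exactly why code like this survives in production.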
This is the theory for why software should go through code reviews. People should check each other’s work. This is why it should go through quality assurance testing to ensure it works as expected and doesn’t break unexpectedly. This is why there are bug bounties to find problems so they can get addressed before they break things.
But as long as things kind of sort of just work enough, the garbage will remain out there. AI won’t fix it. AI will probably make more of it. Human code reviewers, QA testers, and bug bounties won’t keep up and will miss things, so AI will take over those roles too. Which means it will also miss things, but at least it will keep up with the code generation.
The questions are:
1. Will things crash and burn because the AI misses something?
2. Will the humans tasked with resolving it be able to figure out what the AI created?
My Magic 8-Ball says: Signs point to yes about #1 and no about #2.
