…this produces what I will call the AI Developer’s Descent Into Madness:
- Whoa, I produced this prototype so fast! I have super powers!
- This prototype is getting buggy. I’ll tell the AI to fix the bugs.
- Hmm, every change now causes as many new bugs as it fixes.
- Aha! But if I have an AI agent also review the code, it can find its own bugs!
- Wait, why am I personally passing data back and forth between agents?
- I need an agent framework.
- I can have my agent write an agent framework!
- Return to step 1.
It’s actually alarming how many friends and respected peers I’ve lost to this cycle already.
@snarfed.org I hadn’t thought about it this way until reading your post, but I think I’m realizing why so many fall into it:
Y’know how developers often want to streamline setups and “optimize this more, for later”? This is the worst version of that: it gives instant dopamine for that impulse, but it’s an endless puzzle, because the system cannot, in fact, debug itself, and neither can anyone else, because puzzles aren’t made of dice like this.
this cycle hits every team eventually lol. the agent projects that escape it are the ones where someone did the boring work upfront – clean task scoping, explicit failure modes, no circular deps. basically, good process before automation, not after