The Evolutionary Leap of AI
I've been thinking — why is it that when we build artificial intelligence, we expect it to immediately replace highly intellectual labor?
We expect it to write well, translate between languages, code, design, and solve hard scientific problems. And yet we claim to recognize "AI slop" on sight, even though, honestly, we usually can't tell whether a given piece of text was written by an LLM or a human. Either way, once we believe a text is LLM-written, we don't like it.
Remember when everyone laughed at AI-generated photos of people with eight fingers? Sure, we've fixed that by now, but the flood of AI-generated video slop irritates everyone.
Agents have learned to code. Their code even runs, and on the surface it looks like it works — until the news breaks that an agent wiped a whole codebase with no way to recover it. And if you actually open the code, it's a horror show.
So here's what I started thinking: the way we're rolling out AI looks like taking a caveman, handing him an entire library to read, and asking him to build, say, a CRM or a Photoshop clone. Is that really the right approach?
Why don't we first train it on less intellectual work, the kind that's easier to formalize and follows simpler processes, so that our caveman can evolve gradually?
What do you think?