AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.
Yes! This post hits the nail on the head.
There’s a certain conversation that I’m sure you’ve had if you knew how to code between 2008 and 2015, right in the middle of the smartphone app “gold rush.” It’s when a person would approach you and say, “Oh, you can code? Sick! I’ve got this million dollar idea for an app. Bro, if you write the code for it, I’ll split the profits with you, 50/50.* You in?”
This person was wrong about a couple of things:
This type of person tends to think in incredibly abstract terms. They’re always talking about things at a very high level, likely because they don’t actually know how to do things at the implementation level. So they don’t understand (or are willfully ignorant of) the fact that the real value of an idea is in its implementation. The costs, and the payoff, come when you have people actually building something.
But it’s not just the monetary value that lies there; that’s also where real discoveries happen. The growth in that field of knowledge, and your growth as a person, happen when you actually do the damn thing. The benefit of actually building something yourself is not just in the thing you built, but that you learned and grew while doing it.
And now, with AI, that mentally undercooked “idea man” will get to go through life believing that his abstract, vague ideas really are a gift to the world by themselves; no other skills necessary. Even if some of them do end up being valuable in a dollar amount, the personal growth just will not happen.
* Also they would usually try to offer you more like 10%.
100%. At my work we use AI for a lot of stuff, but I just finished a task where the AI couldn’t complete it with a simple prompt. The code was an absolute horror show, and asking the AI to fix it is beyond most people, since it’s terse and needlessly complex.
To finish the task, I had to use my experience and rewrite the shitshow: I wrote the function signatures myself and had the AI do the implementation based on simple requirements, such as using a set data structure.
The AI just tried and failed 10 times before I gave up on it. AI absolutely sucks at a lot of stuff but it’s still been a massive time saver for me personally.
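To make the “signature-first” workflow above concrete, here’s a minimal sketch of what handing the AI a human-written signature plus a requirement might look like. The function name, the deduplication task, and the docstring requirements are all hypothetical, invented for illustration; the commenter’s actual task isn’t described in detail.

```python
# Hypothetical example of the signature-first workflow: the human writes
# the signature and the requirements (in the docstring), and the AI is
# asked to fill in only the body.

def unique_in_order(items: list[str]) -> list[str]:
    """Return items with duplicates removed, preserving first-seen order.

    Requirement given to the AI: use a set for O(1) membership checks
    instead of rescanning a list on every iteration.
    """
    seen: set[str] = set()
    result: list[str] = []
    for item in items:
        if item not in seen:  # set lookup keeps the whole loop O(n)
            seen.add(item)
            result.append(item)
    return result
```

The point is that the constraint (“use a set”) comes from the engineer’s experience; the AI just types out the body that satisfies it.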
Exactly. Why do we always miss this point?
Beautifully said.