I've been preparing for a podcast about AI, with probing questions about its impact and my own thoughts on the subject. I push back against the "AI is dangerous, it will take things over" framing. I've used AI as a powerful tool with measured success, so my "fears" about it are not the same as others'. Here is a bit of the outcome of my preparation for the podcast.
----
I think we are slightly misnaming the divide that AI is creating. Most people call it technical literacy, as if the key question is who knows how to use the tools. That matters, but I do not think it is the deepest split. The deeper divide is what I would call managerial literacy of thought.
By that I mean: some people know how to direct cognition well. They know how to define the actual problem, brief clearly, set constraints, inspect output, challenge weak reasoning, revise the approach, and decide what to do next. Others do not. And AI exposes that difference very quickly.
Before these tools, a lot of weak thinking could hide inside institutions. It could hide behind meetings, credentials, process, jargon, even status. You could still be seen as the knowledgeable person without necessarily being very good at governing how thought gets turned into decisions. But AI produces something immediately. So now the question becomes obvious: can you tell whether the output is good, weak, shallow, misleading, or incomplete? Can you improve it? Can you use it responsibly? Can you own the consequence?
That is why I keep coming back to the idea that the shift is not just from knowledge to judgment. It is from possessing information to directing cognition. In the old model, value came from being the person with the answers. In the new model, answers are increasingly abundant. What becomes scarce is the ability to manage the answer pipeline well: framing, steering, validating, and acting.
So when people say AI is replacing human intelligence, I think that is too crude. What it is really doing is repricing competence. It is exposing who can think operationally and who was mainly benefiting from information scarcity. And that is uncomfortable, because a lot of institutional authority was built on being the gatekeeper of knowledge, not necessarily on having the best judgment.
To me, that is the real story. The premium is moving from knowing to directing. And the people who will do well are not just the smartest people in the room. They are the ones who can govern thinking—human or machine—toward a real-world outcome.
I think one of the deepest mistakes in the AI conversation is that we keep treating intelligence and wisdom as if they are basically the same thing. They are not.
AI can approximate a lot of what looks like wisdom. It can detect patterns, model consequences, compare tradeoffs, surface options, and present them in a calm, coherent way. And because it does that so well, people are starting to confuse the appearance of wisdom with the thing itself.
But human wisdom is forged somewhere else. It is forged in consequence. It is forged in the fact that a human being has to choose, act, absorb the cost, and live with what follows. That is what I mean when I say wisdom can be approximated by pattern and consequence, but human wisdom is forged by mortality. Mortality is not just death in the abstract. It is finitude. It is irreversibility. It is accountability. It is the fact that time runs out, mistakes matter, trust can be lost, and some decisions cannot be walked back.
So the real question with AI is not whether it can generate intelligent outputs. Clearly it can. The question is what happens when humans begin to outsource not just analysis, but judgment. Not just thinking, but consequence-bearing responsibility. That is where cognitive fragility begins.
The machine can advise. It can simulate perspective. It can even sound wise. But it does not stand in the blast radius of the decision. We do. And that means the human role does not disappear as intelligence becomes abundant. It becomes more important, because someone still has to own what gets chosen.
So to me, the future of AI is not mainly about smarter machines. It is about whether humans retain responsibility at the decision boundary. Because intelligence can propose. Wisdom still has to answer.
Host-to-Host Outline
Me: I think we are asking the wrong primary question about AI. The question is not whether machines can think. The question is who bears the consequence of thought.
Co-host: What do you mean by that?
Me: AI can approximate the form of wisdom. It can model patterns, consequences, tradeoffs. But human wisdom is forged by mortality, meaning consequence, finitude, and accountability.
Co-host: So wisdom is not just good analysis?
Me: No. Good analysis is part of it. Wisdom begins when someone still has to choose and live with what follows.
Me: That is why this matters now. AI makes intelligence abundant. But it does not make responsibility disappear. If anything, it makes responsibility more important, because plausible answers get easier to generate.
Co-host: So the danger is not only bad answers?
Me: Correct. The deeper danger is outsourcing consequence-bearing judgment.
Link to cognitive fragility:
Me: Cognitive fragility starts when people stop treating decisions as something they must own and start treating them like menu selections from a machine.
Co-host: That is a very different critique than “AI is bad.”
Me: Yes. It is not anti-AI. It is pro-responsibility. The machine can recommend. It cannot repent. The human still stands in the blast radius. AI is a force multiplier, not the force itself. A multiplier attached to zero still yields zero. A multiplier attached to a capable operator can look almost magical.