A friend of mine is an AI skeptic and raised some profound points against the tech. (He is a really smart guy.) I am posting my responses to his questions and doubts; you can derive his questions from the answers I gave. Hope it gives you another angle on the tool. (Sorry for the length.)
---------------
Thanks for a very thoughtful reply. This is exactly the kind of thinking we need around AI, not fanboy cheerleading.
*Five areas:*
-Concrete, non-theoretical “real world” use I’ve put through the fire
-AI psychosis / atrophy risk (using it as a crutch)
-Hallucinations, bad training data, and “how do you ever trust it?”
-Surveillance, profiling, and “building a map of me”
-The human element: letters, art, connection, and why I still write my own stuff
*What I’ve actually used it for:
You’re absolutely right: one real test beats a thousand hot takes.
Here are a few categories where I do use ChatGPT as a co-pilot. Not theory, but actual work that hit the real world:
*Deal / document drafting (then lawyer- and counterparty-reviewed)
*What I asked it to do: take my rough bullet points and turn them into:
-Non-disclosure agreements
-Letters of intent
-Term sheets
-Board briefs and explanation memos
*How I implemented it:
-I give it the parties, the structure, and the economics in plain language.
-I give it the non-negotiables, like “partner equity cannot be diluted below X%.”
It drafts a first pass; I rewrite, tighten, and throw away anything that smells off.
Then it goes to my lawyer, the other side’s lawyer, and the actual counterparty for negotiation.
*Real-world result:
-Documents were accepted as starting points.
-Lawyers focused on genuine edge cases instead of billing ten hours to produce boilerplate.
-We got to “paper everyone can sign” in fewer iterations.
*Gap vs. expectation:
AI wasn’t “magic.” It didn’t see political landmines or hidden subtext. It did accelerate the grunt work and give me more cycles for strategy and relationships.
So: not “AI closed a nine-figure deal.” It’s “AI drafted the scaffolding so humans could focus on the stuff that actually blows up deals.”
*Complex planning and scenario runs:
*What I asked it to do: build schedules, risk registers, and scenario trees for:
-Multi-megawatt data center rollouts
-Multi-party partnership structures
-Funding paths with several sources (equity, debt, etc.)
*How I implemented it:
I feed it constraints: timelines, budget ranges, regulatory gates.
It proposes:
-Phased plans,
-Critical paths,
-“What if X slips by 90 days?” scenarios.
*Real-world result:
-Helped me see choke points earlier (e.g., “permitting is the real risk, not hardware lead times”).
-Let me walk into meetings already armed with Plan A/B/C.
*Limit:
It’s still a simulation. Like your 10–30% engineering rule: I treat its plans as strong hypotheses, not gospel. Reality still punches everyone in the mouth; AI just helps me see where the punch is likely to come from.
*Personal thinking / leadership work:
This is squishier but important.
*What I asked it to do:
-Act as an “External Second Me”:
-Challenge my assumptions.
-Mirror back patterns in my own writing over time.
-Help me articulate principles, roles, and operating systems.
*How I implemented it:
I dump messy thoughts in, then ask it to extract:
-Themes,
-Contradictions,
-Assumptions.
Then I push back: “What am I not seeing?”
*Real-world:
It has absolutely helped me see blind spots faster than journaling alone.
Some of the hardest punches to my ego came from AI reflecting me back at myself.
*Limit:
It cannot tell me what matters.
That’s on me, my conscience, my community, and God.
It’s great at “cleaning the lens”; it is not the light.
*AI psychosis & atrophy – the real mental risk:
Here we agree more than we disagree.
There are at least three nasty failure modes:
*Outsourcing critical thinking:
People stop thinking. “The model said it, so it must be right.”
Same way GPS made people forget how to read a map.
*Identity drift:
You unconsciously start shaping yourself to what the model mirrors back.
If you’re lonely, it becomes a synthetic friend — very dangerous if you don’t have strong real-world anchors.
*Reality blurring:
Deepfakes, synthetic text, generated personas — all erode “what’s real?”. That can absolutely feed anxiety, alienation, and the “AI psychosis” you’re talking about.
*How I try to push against that in my own use:
I force myself to make a first-pass guess before asking AI:
“Here’s what I think. Now critique it.” That keeps my own critical machinery turned on.
*I ban myself from:
-Outsourcing spiritual discernment.
-Outsourcing major life decisions.
-Outsourcing “hard conversations” to pure AI text.
I treat it as a sparring partner, not an oracle. If it gives me an answer I like too quickly, that’s a red flag, not comfort.
You’re right: most people will not do this. That’s where I think the real societal damage comes from: not the tool, but a culture that’s already addicted to outsourcing thinking.
*Hallucinations, garbage data, and “we don’t know how it works”
You’re dead on about:
-Hallucinations — confident nonsense, fake citations, invented cases.
-Garbage in / garbage out — especially when the “garbage” has now been fed back into the training pool via the web.
*A few direct points:
*“We don’t know how it works”:
I’d nuance that. We do know how we build and train these models at the engineering level: gradient descent, transformers, loss functions, etc.
What we don’t fully understand is the internal representation: why a specific neuron cluster activates for a concept, or why a specific chain of tokens emerges.
In other words:
We know the recipe and oven. We don’t fully understand all the chemistry happening in the dough during baking.
That’s not comfort; I’m with you that it demands humility.
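To make “we know the recipe and oven” concrete: the recipe really is writable in a few lines. Here is a toy sketch (PyTorch, with made-up sizes, nothing like a real LLM configuration) of the loop: a forward pass, a loss, backpropagation, gradient descent. What we cannot easily write down is what the trained weights end up representing.

```python
# Toy illustration of "the recipe": gradient descent on a loss function.
# Sizes and model shape are made up for illustration; this is not how any
# production LLM is actually configured.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Embedding(1000, 64),        # token IDs -> vectors
    nn.Flatten(),                  # squash the 8-token context into one vector
    nn.Linear(64 * 8, 1000),       # predict a score for each possible next token
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 1000, (32, 8))    # a fake batch of 8-token contexts
next_token = torch.randint(0, 1000, (32,))  # the "correct" next token for each

for step in range(100):
    logits = model(tokens)                  # forward pass: predict the next token
    loss = loss_fn(logits, next_token)      # how wrong were we?
    optimizer.zero_grad()
    loss.backward()                         # backpropagation: compute gradients
    optimizer.step()                        # gradient descent: nudge the weights

# Every line above is knowable and inspectable. What the trained weights inside
# `model` have come to "mean" is the part that stays opaque.
```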
*How I handle hallucinations personally:
I assume, by default, that the model is: “A very fast, very articulate bullshitter that is capable of being accurate if properly constrained.”
*So in my own operating system:
-I enforce confidence levels: “Answer with High/Medium/Low confidence and say why.”
-I force it to show its work: “List the assumptions you made.”
-I don’t let it fabricate citations: if I need sources, I go get them separately and compare.
In practice, that kills maybe 95% of the hallucination risk for my use. For the average user who just pastes a question and copy-pastes the answer? Totally agree — the risk is high.
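For what it’s worth, here is roughly what those guardrails look like when I bake them into a script instead of retyping them every time. This is a minimal sketch against the OpenAI Python SDK; the model name, the prompt wording, and the example question are just my own stand-ins, not anything official.

```python
# Minimal sketch: baking the "confidence + assumptions, no invented sources"
# rules into a reusable system prompt. Assumes the official `openai` package;
# the model name and prompt wording are placeholders of my own choosing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUARDRAIL_PROMPT = (
    "For every answer: (1) state High/Medium/Low confidence and say why, "
    "(2) list the assumptions you made, "
    "(3) never invent citations; if you are not sure a source exists, say so."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": GUARDRAIL_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("What are the typical permitting gates for a multi-megawatt data center?"))
```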
Your core question: “How do they know if they have a good prompt?”
They don’t.
Most people:
-Don’t test systematically.
-Don’t do before/after real-world validation.
-Don’t track error rates.
That is a societal problem, not just an individual one. I don’t have a pretty answer there, other than that we need AI literacy the same way we built basic science literacy in school. “How to argue with your model” should be a required skill, not a niche one.
*Data, profiling, and “the map of me”:
On this, your paranoia is not paranoia. It’s realism.
*You’re right that:
Modern cars, phones, browsers, and social media already build frighteningly detailed profiles. It is entirely plausible — and in some regimes, confirmed — that AI systems are feeding surveillance and control architectures (China’s social credit system being the obvious poster child).
But a few distinctions that matter in practice:
*Local models (running on your own hardware):
-No data leaves your box unless you send it.
-Harder to set up, less convenient, but much more private.
*Cloud models (ChatGPT, etc.):
You are inherently trading some privacy for power and ease. I personally treat anything I type into a cloud system as: “Potentially logged, audited, and used in aggregate.”
So:
I strip identifiers where I can. I don’t paste proprietary or confidential partner data that could burn someone else.
For truly sensitive work, I would use either:
-Local models (which I am currently building), or
-Enterprise setups with contractual data protections (and even then, with caution).
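And to give one concrete flavor of “strip identifiers where I can”: a few regex passes before anything goes to a cloud model. A bare-bones sketch; the patterns and the name list below are illustrative only, and this is nowhere near a real anonymization pipeline.

```python
# Minimal sketch of stripping obvious identifiers before text goes to a cloud
# model. The regex patterns and the name list are illustrative; this only
# catches the easy stuff and is not a substitute for real anonymization.
import re

KNOWN_NAMES = ["Acme Holdings", "Jane Doe"]  # hypothetical parties to mask

def scrub(text: str) -> str:
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)   # email addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)     # phone-like numbers
    for name in KNOWN_NAMES:
        text = text.replace(name, "[PARTY]")                     # named parties
    return text

notes = "Call Jane Doe at +1 (555) 123-4567 or jane@acme.example about the Acme Holdings LOI."
print(scrub(notes))
# -> "Call [PARTY] at [PHONE] or [EMAIL] about the [PARTY] LOI."
```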
*The “map of me” concern:
You put your finger on the exact tradeoff: the very pattern-building that makes the tool more useful to you is the same pattern-building that could be used against you.
Where I personally draw a line:
I accept some mapping in exchange for utility, but:
-Not around my location.
-Not around my finances in raw transactional form.
-Not around my family or people who didn’t consent to this.
Is that a perfect line? No. Is there risk that even my more disciplined use still contributes to the vast profiling apparatus? Yes.
But I’ll be blunt: if someone is carrying a smartphone, using Gmail, and living in a modern city, AI is not the thing that begins surveillance — it’s just the next layer of exploitation on top of the pile.
That doesn’t make it okay. It just means the fight is bigger than “this one tool”.
*The human element – letters, art, and “who you become”:
Here I’m with you 95%.
If I sent you a long, heartfelt message and you knew it was 95% machine-generated with my name slapped on, you’d feel lied to. Same with music, videos, anything pretending to be deeply personal.
Let me separate two modes:
*Crutch / counterfeit:
-“Write my love letter for me.”
-“Make me sound deep.”
-“Do my art so I can feel like an artist.”
This absolutely short-circuits growth.
You don’t struggle for the words. You don’t confront your own shallowness. You don’t fail publicly and learn.
If I operated like that, I’d fully expect to become less human, not more.
*Coach / amplifier:
-“Here is my ugly first draft. Help me sharpen it without changing my voice.”
-“Give me three alternate phrasings so I can choose what actually matches my heart.”
-“Explain why this chorus in my song doesn’t land emotionally.”
Here:
The human is still doing the core act: deciding what’s true, what feels honest, what fits their story. The tool is a mirror and a suggestion engine, not the author.
*What I personally do for any relational comms:
I either:
-Write it myself, raw, then maybe ask the AI, “What am I not seeing in how this might land?”, or
-Draft with it, but always rewrite it in my own words before hitting send.
If someone can “spot AI” in my message, that means I got lazy and let the machine speak where I should have. That is a failure on my part.
*On your music / video point:
You’re right: if you delegate the whole creative process to AI, you don’t grow much.
But if you use it like:
-“Suggest three harmonic variations on this riff and explain what emotion each evokes,” then you’re actually learning faster.
The dull knife vs sharp knife analogy was about access to high-end technique and knowledge, not skipping the gym of real practice.
I’ll refine my own analogy in light of what you said:
AI is less like a hammer and more like an exoskeleton. Strap it on, and you can lift far more — but if you never train your own muscles, you become dependent and weak underneath the suit.
Where I agree with you:
You’re right to be wary. You’re right that most people will not test, will not validate, will not think deeply about psyche and formation. You’re right that surveillance and profiling are not sci-fi; they’re here.
Where I disagree slightly is on inevitability:
I don’t think using AI seriously automatically means atrophied reasoning and fake relationships. I do think it amplifies whatever direction you are already headed: If you’re lazy, you become lazier. If you’re hungry to learn and willing to be wrong, you can grow faster.
*For me, the reason I am an AI advocate is:
I’d rather walk into this with eyes open, weaponize it as a force multiplier for the “small guy”, and build guardrails around my own use, than pretend it isn’t coming while others quietly build the machinery around us.
But that’s also why I’m glad you’re pushing. We need both:
-People like me, trying to wring value out of it for real work, and
-People like you, constantly asking, “At what cost, and who pays it?”
Happy to get even more specific if you want to zoom in on any one of those areas (psych effects, data, or real-world tests). But I’m not outsourcing my friendship or my conscience to a silicon parrot. That part stays human.