Building Credibility in an AI-Swamped World
Why more AI doesn’t automatically mean more trust

We are living through the peak of the AI hype cycle. Trillions of dollars are being poured into infrastructure, tools, and promises of productivity. But beneath the optimism sits a quieter problem: credibility erosion. More AI-generated information doesn’t automatically lead to better outcomes. In many cases, it does the opposite.

This article is a curated and abridged reflection on a powerful talk by Eva Digital Trust, exploring how generative AI, when used carelessly, can quietly undermine trust, expertise, and brand credibility, and what to do instead.

👉 Watch Eva’s original presentation here: https://www.evadigitaltrust.com/speaking?utm_source=substack&utm_medium=email#h.sj3r0zwe6ytz

1. AI hype doesn’t equal value

AI investment numbers are staggering, but hype alone doesn’t deliver ROI. When use cases are vague and productivity gains don’t materialise, pressure builds, especially on leaders, to prove AI is “working.” The problem isn’t AI itself. It’s deploying it without clarity, strategy, or accountability.

2. Hallucinations are a feature, not a bug

Large language models don’t “know” facts; they predict patterns. That means hallucinations are inherent to how they work. The danger is subtle: outputs often sound confident, structured, and professional, while quietly being wrong, irrelevant, or misaligned with context, regulation, or real constraints. This leads to what’s now called “workslop”: polished-looking content that creates more rework, more risk, and more cost. You can’t slop your way to a credible strategy, product, or point of view.

3. Visible AI use can trigger bias

Research shows that openly disclosing AI use can lower perceptions of competence, particularly for women, older workers, neurodivergent professionals, and people writing in a second language. AI may aim for neutrality. Humans do not. This means credibility isn’t just about whether you use AI, but how visibly and how thoughtfully you use it.