The Honeymoon's Over: AI's Trust Reckoning Has Arrived
Four stories. One uncomfortable question: What can you actually trust?
This Week in 30 Seconds
The companies getting ROI from AI are asking harder questions: Does this actually work? Can I measure it? Should I trust it?
Four stories this week point to the same shift. For each one: the news (what happened), the noise (what everyone’s saying), and the signal (what actually matters). The hype cycle is giving way to a trust reckoning. And the operators who see it clearly will make better decisions than the ones still chasing magic.
Story 1: The ROI Gap Nobody Wants to Admit
The News: PwC surveyed 4,454 CEOs across 95 countries. 81% are prioritizing AI investment (up from 60% last year). But only 21% report actual revenue growth from AI. At Davos, Writer CEO May Habib dropped a bomb:
“The fundamental physics of this is the executives are saying, ‘Change stuff with AI,’ and then they’re giving people AI assistants and productivity tools, and you’re not going to get the wholesale reinvention that actually drives impact.”
The Noise: “AI adoption is accelerating!” “Companies are going all-in!” “This is the year of transformation!”
The Signal: The gap comes down to one thing: the difference between adding tools and changing workflows.
Habib nailed it: “Silos are getting flattened. It makes no sense for sales and marketing to be separate teams for most companies. You’ve got to really break those silos to change workflows end-to-end.”
That 21%? They asked a different question: What work should stop existing entirely?
But there’s more hiding in the data: Most companies skipped the baseline. They have no idea if AI is helping because they never measured what “before” looked like. You can’t calculate ROI if you don’t know your starting point.
Your Move: Before your next AI initiative, answer three questions: (1) What does this workflow cost us today in time, money, and errors? (2) What would “this workflow doesn’t exist anymore” look like? (3) Who owns measuring the before and after? I’ve watched teams skip these questions and spend six months wondering why nothing improved. Don’t be that team.
Story 2: The Skill You Just Learned Is Already Obsolete
The News: Forbes declared prompt engineering is no longer the most valuable AI skill. As AI evolves from chatbots that wait for instructions to systems that act on their own, the capability that matters is knowing when to trust AI, how much oversight is needed, and where human judgment remains essential. The quote that stuck with me: “AI skills are no longer technical skills; they’re leadership skills.”
The Noise: “Master these 50 prompting techniques!” “Prompt engineers are the new developers!” “Here’s how to write the perfect prompt...”
The Signal: The article buries the real insight in a banking example. In an agentic workflow, AI handles document gathering, compliance checks, and back-and-forth communication. But at key moments (borderline risk scores, unusual customer profiles) human judgment kicks in.
The skill isn’t prompting. It’s pattern recognition for when NOT to automate.
Think about what this means. The most valuable people will be the ones who know which things AI shouldn’t do. That’s closer to management judgment than technical skill.
Your Move: Pick one workflow your team uses AI for. Map it out and identify three things: where AI runs unsupervised, where humans currently intervene, and where humans SHOULD intervene but don’t. That third category is where your risk is hiding.
Try This Prompt:
For ChatGPT/Claude:
I want to audit one of my AI-assisted workflows for oversight gaps.
The workflow: [Describe your workflow — e.g., "We use AI to draft customer emails, then a team member reviews before sending"]
Help me map three things:
1. WHERE AI RUNS UNSUPERVISED
- Which steps happen without human review?
- What decisions is AI making autonomously?
2. WHERE HUMANS CURRENTLY INTERVENE
- What checkpoints exist today?
- What triggers human review?
3. WHERE HUMANS SHOULD INTERVENE BUT DON'T
- What's slipping through?
- Where could errors cause real damage?
For each gap in category 3, give me:
- What could go wrong (specific scenario)
- How we'd catch it (detection method)
- What it costs if we don't (business impact)
Be direct. I want actionable gaps, not generic warnings.
For Perplexity:
What are the most common oversight gaps in AI-assisted business workflows? Include specific failure modes, detection methods, and case studies of AI errors that human review would have caught. Focus on practical business applications 2024-2026.
Story 3: Your Team Can’t Tell Real from Fake Anymore
The News: The World Economic Forum’s Global Cybersecurity Outlook 2026 warns that AI-driven fraud has overtaken ransomware as the top cyber risk. 73% of CEOs surveyed said they (or someone in their professional or personal network) had been affected by cyber-enabled fraud in 2025. The FTC reported $12.5B in consumer fraud losses in 2024, up 25% year-over-year. And that’s just what got reported.
The Noise: “Deepfakes are scary!” “AI is being used for evil!” “We need more regulation!”
The Signal: Forget nation-state attacks. The real story is that everyone can now create convincing fakes.
The same tools that help your marketing team personalize outreach help scammers personalize their cons. Your old playbook (look for typos, suspicious links, urgent requests) is obsolete. AI-generated scams don’t have typos. They’re written in perfect, contextually appropriate language. They reference real details about your company, your vendors, your recent transactions.
Your team’s biggest vulnerability is the assumption that they can tell real from fake.
SMBs are particularly exposed. Smaller teams mean fewer verification layers. Relationship-based business makes “I trust that voice” dangerous. Less security infrastructure means more reliance on human judgment, and that judgment just got a lot harder.
Your Move: This week, implement one verification protocol. Any payment change request gets confirmed through a different channel (request arrived by email? Pick up the phone to verify). Any “urgent” request from leadership gets a 15-minute delay and direct confirmation. Any new vendor contact gets verified against your existing records, not the contact info they provide. Yes, it adds friction. That’s the point.
Story 4: Your AI Assistant Now Has a Side Hustle
The News: OpenAI is rolling out advertising in ChatGPT, starting with a beta group of brands committing $1M each. Ads will appear “at the bottom of answers when there’s a relevant sponsored product or service based on your current conversation.” Sam Altman once called advertising a “last resort” and “unsettling.” Here we are. At Davos, Google DeepMind’s CEO took a shot:
“It’s interesting they’ve gone for that so early. Maybe they feel they need to make more revenue.”
The Noise: “ChatGPT is selling out!” “This is the end of AI trust!” “Advertising ruins everything!”
The Signal: Forget the hypocrisy angle. The real story is the incentive shift.
When ChatGPT’s business model was subscriptions, the incentive was: give you the best answer so you keep paying. Now ads enter the picture. New incentive: give you an answer that creates ad-serving opportunities.
Do these conflict? Not always. But they’re not perfectly aligned either.
When you ask “what’s the best CRM for a 20-person sales team,” are you getting the best answer or the sponsored answer? The answer is probably “both,” but “both” is different from “just the best answer.”
Three implications: (1) Platform dependency risk is real. If your workflows depend heavily on one AI tool, you’re now dependent on that tool’s business model decisions. Diversification isn’t paranoia. (2) Free tiers get complicated. The ad-supported experience will differ from paid. Factor this into tool decisions. (3) Adjust your default trust setting. “Skeptical by default” is healthier than “trusting by default,” especially for purchase decisions.
Your Move: For high-stakes decisions (vendors, purchases, strategy), treat AI recommendations like you’d treat a recommendation from a salesperson: useful input, but verify independently.
The Pattern
Four stories. One theme. Trust is the new bottleneck.
Can we trust the investment thesis? (Only 21% are seeing returns.) Can we trust what we’ve learned? (The skills are already shifting.) Can we trust what we see and hear? (AI makes deception trivially easy.) Can we trust the tools themselves? (They have their own incentives now.)
Call it what it is: maturity. Technology adoption looks like this when the hype fades. The honeymoon phase (where AI felt like magic and every implementation felt like progress) is giving way to harder questions about reliability, measurement, and sustainable adoption.
See this clearly, and you’ll make better decisions than the ones still chasing magic.
The Contrarian Corner
Everyone’s framing the ROI gap as “companies aren’t using AI right.” That’s backwards.
The real problem: they skipped the boring pre-work. They didn’t measure their baselines. They didn’t question whether workflows should exist at all. They bought tools instead of asking questions.
That 21% getting returns? They didn’t have better AI. They had better discipline.
Your One Move This Week
Run a trust audit on your most-used AI tool.
Pick one — the tool you rely on most — and answer these questions:
What is this tool’s business model, and how might that affect its recommendations?
What happens if this tool disappears or changes tomorrow? Do you have a backup?
Are you measuring what this tool actually delivers, or just assuming value?
Companies winning with AI in 2026 know exactly what each tool is good for (and what it’s not).
That’s the week. The honeymoon’s over. The real work starts now.
Good Luck - Dan


