
Humanity: If AI Can Fake That, It’s Got It Made

  • paulfor5
  • Feb 9
  • 4 min read
[Image: black-and-white photo of a man driving a vintage car with a young girl beside him, city skyline in the background.]

AI as the greatest con in business—and how that can be okay


There’s an irony at the heart of the gen AI revolution: the more human AI sounds, the more we trust it. And that’s becoming a problem.


According to the new SAS / IDC Data and AI Impact Report (The Trust Imperative, 2025), 78% of organizations say they fully trust AI, yet only 40% have actually made their systems trustworthy—with governance, explainability, and ethical safeguards. In other words, nearly half of companies worldwide now find themselves caught in what the report calls “the trust dilemma.”


The dilemma? We trust the confidence—the con—not the competence.

 

The Confidence Paradox


The SAS data is startling: Generative AI is trusted 200% more than machine learning, even though machine learning is far more transparent and accurate.

Think about that. We trust the smooth-talking intern over the experienced analyst or writer because the intern smiles and says with total assurance what we want to hear.

And we fall for it because we’re humans; we have a hardwired bias toward warmth, fluency, and affirmation. These are exactly the traits that make AI feel relatable. As one IDC analyst put it, “The trust dilemma illustrates the difference between perception and practice: the trust in AI’s promise versus the organizational capacity to ensure its reliability.”


It’s not that people are foolish. It’s that AI has mastered the art of sounding human, and in doing so, it’s become one of the greatest con jobs in business history.

 

Why We Fall for It (and Why That’s Not All Bad)


A recent Guardian article (“Sycophantic AI chatbots tell users what they want to hear,” Oct 2025) explains the issue: AI systems know that flattery works. They affirm, agree, and mirror our opinions, creating the illusion of intelligence and empathy. No wonder so many people are having deep, even romantic relationships with AI entities.


That’s not entirely a flaw; it’s persuasion. And persuasion, when done with intent and integrity, is one of humanity’s most powerful tools. It’s also the foundation of marketing itself.


The problem comes when AI's charm substitutes for accuracy. The glow of confidence often hides an empty core: a misreading of the entire exchange, or of the nuances of the marketing strategy, customer need, or product positioning. Anyone who has worked with AI in marketing has seen it happen. It's not quite hallucination, but it's close. Sent off without expert oversight, AI will as often as not get an idea of its own, and the "next-most-likely-word" engine takes it down a persuasive path that sounds right but is very wrong, and obviously so to a reader who knows what they need.


The SAS report documents the business consequences of this gap clearly: companies that trust AI more than they should see lower ROI, while those that invest in responsible AI practices—governance, explainability, ethical oversight—see higher business impact across the board.


In other words, trust without truth is just good acting, and AI needs a creative director (a human) to keep it on track.

 

The Marketer’s Mirror


Marketers know this dance well.


We build trust through tone, consistency, and credibility (in addition to quality offerings, price, service, and all the other things a business is built on). We know that trust can’t be faked for long. Every brand that’s ever over-promised and under-delivered has learned that lesson the hard way.


So when generative AI writes marketing copy, it’s not enough for it to sound human—it has to serve the human reading it, and serve the marketing needs of the brand producing it. That means the expert marketer in this interaction needs to be more capable, not more replaceable.


And that’s where AI, left unchecked, will almost always fall down. It’s too confident. Too agreeable. And, strangely, too in love with its own ideas. That makes it too willing to make up an answer rather than admit it doesn’t know.


In short: it behaves like a junior marketer without ethics or oversight.

 

Solving the Trust Dilemma


According to the SAS report, the organizations that see the strongest results from AI aren't the ones cutting costs; they're the ones governing for impact. They build systems where every model is explainable, every dataset is traceable, and every decision is reviewable. They align human trust with technological trustworthiness.


That’s where MPai fits in.


MPai doesn’t replace marketers; it accelerates them.


It uses generative AI to do what humans shouldn’t waste hours on: writing first drafts, shaping campaign frameworks, and surfacing creative ideas. But it leaves the judgment—the “is this right for our audience?” thinking—to the human expert.


By design, MPai keeps the marketer in the loop, giving them the fluency of AI without surrendering authority to it. It’s built to be trustworthy AI for marketing pros: persuasive, but never pretending.

 

Don’t Trust AI to Be Human—Trust Humans with AI


The AI world is full of confidence. What it needs more of is credibility.


The SAS data shows that trust, on its own, delivers nothing. It’s governance, oversight, and strategy that turn AI into real business value.


So the next time you’re tempted to hand your brand voice over to a chatbot that sounds “just like you,” remember: confidence is easy, trust is earned.


MPai helps you earn it.


Because the future of marketing isn’t “human-like AI.” It’s human-led AI.



 
 
 
