Mar 2, 2026

Smarter than you, dumber than you: why AI isn’t what you were sold

AI is powerful, but it’s also flaky, biased, and weirdly overconfident. This piece cuts through the hype and shows why “smart” tools still need smarter humans in charge.

Michael Ruocco

Lead Product Designer


AI can boost your output, but it also hallucinates, flattens ideas, and sneaks in bias. Here’s how to use it as leverage without letting it quietly lower the quality.

AI is sold like a new religion:
Smarter than your team. Faster than your tools. Cheaper than your salary.

And then you actually use it.

You get answers that sound brilliant and turn out to be wrong.
You get designs that look polished and fall apart when they meet a real user.
You get strategy decks that feel profound until you realise they could apply to any company on earth.

AI is doing exactly what it was built to do. Cheerlead.

AI is not thinking: it’s guessing, very confidently

Most people still talk about AI like it “understands” things. It doesn’t.
It predicts the next word, the next pixel, the next token based on patterns in its training data.

That means:

  • It can imitate expertise frighteningly well.

  • It can also produce total fiction with the same confident tone.

Hallucinations aren’t a bug we’re about to fix; they’re baked into how these systems work.
When you ask an LLM for facts, it’s not checking a database. It’s performing a magic trick with probabilities.

So you get:

  • Citations to papers that don’t exist.

  • “User research” summaries based on nothing.

  • Design rationales invented after the fact.

If you treat AI as a flawless oracle, you get burned.
If you treat it as a very fast bullshitter, you’ll use it more safely and more creatively.
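A minimal sketch of that “magic trick”. The toy corpus and code below are invented for illustration and are nothing like a real model, but the principle is the same: count what tends to follow what, then sample the next word from those frequencies. The output is always fluent and always confident, and at no point does anything check whether it’s true.

```python
import random
from collections import Counter, defaultdict

# Toy "training data" -- invented purely for this sketch.
corpus = ("the model predicts the next word the model sounds "
          "confident the next word is wrong").split()

# Count bigrams: which word tends to follow which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n, seed=0):
    """Continue a prompt by sampling each next word from bigram
    frequencies -- no facts checked, just probabilities."""
    rng = random.Random(seed)
    text = [start]
    for _ in range(n):
        options = follows[text[-1]]
        if not options:  # nothing ever followed this word
            break
        words, counts = zip(*options.items())
        text.append(rng.choices(words, weights=counts, k=1)[0])
    return " ".join(text)

print(generate("the", 6))  # fluent, confident, answerable to nothing
```

Scale that idea up by a few hundred billion parameters and you get something that imitates expertise superbly, with exactly the same relationship to truth.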

More output, same problems

The promise was productivity: do more with less.
In reality, most teams are doing more of the same, just a lot faster.

You see it everywhere:

  • Ten versions of the same landing page instead of one.

  • Fifty “explorations” that all collapse back to the safest pattern.

  • Strategy documents that grew from 10 pages to 40, with no extra clarity.

  • “This workflow just replaced a whole design team. Comment ‘dumpster fire’ and I’ll send it to you.” Yada yada. Snore.

Studies and early reports are blunt: the productivity gains are real, but uneven and fragile. Give AI to someone who already knows their craft and they become faster and more ambitious. Give it to someone who’s lost, and they just generate more lost.

AI amplifies whatever is there:

  • Clear thinking becomes sharper.

  • Fuzzy thinking becomes an infinite slide deck.

The tool didn’t fail. We failed by assuming “more output” equals “better outcomes”.

The creativity flattening effect

People worried AI would kill creativity.
What it actually does, most of the time, is flatten it.

When everyone:

  • Uses the same models,

  • Prompts with the same phrases,

  • Asks for the same “clean, modern, minimal” look,

You get a world of products that put the world to sleep. Difference stands out. It’s what kept us alive 100,000 years ago, when an early-morning scan for danger revealed a pair of eyes that hadn’t been there the previous evening. We’re programmed to notice unusual things; it’s a survival mechanism. Now we’re serving up more of the same and calling it progress.

LLMs are trained on the average of what already exists.
So unless you fight it, they pull you toward the centre of the bell curve: the safest, most statistically likely answer.

AI is incredible at getting you to “pretty good” quickly. But “pretty good” is a creative dead zone.

Automation without understanding

The scariest thing about AI isn’t that it replaces us. It’s that it lets us stop understanding our own work.

  • Developers paste AI‑generated code they barely read.

  • Designers ship flows the model suggested, without really knowing why they work.

  • PMs ask for “insights” and accept whatever chart or summary comes back.

Over time, that does two things:

  • Skill atrophy: you stop practising the muscle of deep thinking.

  • Fragile systems: you’re surrounded by code, decisions, and content that no one fully owns and no one actually built.

We wanted autopilot. What we built is more like cruise control: helpful, but deadly if you stop steering.

The bias doesn’t go away, it just gets a nicer UI

AI inherits bias from the data it’s trained on, then wraps it in a friendly interface.

That means:

  • It can reinforce stereotypes while sounding “neutral”.

  • It can skew hiring, lending, or moderation decisions in ways that are hard to trace.

  • It can make biased outcomes feel more legitimate because “the AI suggested it.”

The danger isn’t just that models are biased.
It’s that we treat their outputs as objective so their bias gets less scrutiny than a human colleague would.

The more we hide AI behind clean dashboards and confident language, the easier that bias is to forget.

The hype bubble and the awkward middle

If you read headlines, AI is either going to save the world, or it’s going to end it.

Reality lives in a very awkward middle.

Yes:

  • AI can automate boring work.

  • It can unlock new interfaces, new products, new experiences.

  • It can give solo operators leverage they never had before.

Also yes:

  • It hallucinates.

  • It flattens creativity.

  • It can quietly embed bias and total nonsense at lightning scale.

Both things are true.

The problem is, “awkward middle” doesn’t sell well. So we get an overpromised future and an underwhelming present.

How to use AI without letting it use you

If AI isn’t a genius oracle, and it isn’t pure snake oil, what is it?

  • A reckless but brilliant junior who works at light speed.

  • A tool that’s as dangerous as it is helpful.

  • A mirror that reflects your thinking, sharp or sloppy, right back at you.

AI is not here to replace you. It’s here to pressure‑test whether you actually know what you’re doing.

If your work is just arranging words and pixels in familiar patterns, AI will absolutely eat your lunch. If your work is choosing which problems matter, which trade‑offs you’ll live with, and what “good” really means for real people, then AI is just another instrument in your hands.

Powerful.
Unreliable.
Not all it’s cracked up to be.

And exactly as useful as the person holding it.