AI | Brilliant, Broken, Here to Stay

By Ellie Hearne

Lately I’ve been wrestling with something that’s both helpful and unsettling. AI is advancing at breakneck speed, even as regulators are still catching up with earlier waves of digital change.

It’s thrilling and terrifying all at once.

Thrilling

For all the risks (read on), AI can also be brilliant. It helps us crunch data at a scale humans can’t, spot patterns we’d overlook, and automate tasks that would otherwise take hours or days. Used wisely, it can free up leaders and teams to focus on strategy, creativity, and relationships.

Think about medical imaging, where AI assists radiologists in catching early signs of disease. Or language translation, which makes information and conversations more accessible worldwide. Even in day-to-day work, AI can quickly summarize a long report, suggest alternative approaches, or highlight risks hidden in the fine print.

It isn’t flawless (see below), but when paired with human judgment, AI can deliver speed, efficiency, and insights that make us better at what we already do well.

Business realities

When I use chatbots, I notice how forgetful they can be. This is often by design: AI providers like OpenAI, Anthropic, and others need to manage memory and computational limits, and those constraints shape what the tool retains. Similarly, chatbots confidently tell us they’re right, even when they’re wrong - like a charming colleague who “rounds up” when asked how much progress they’ve made. (I like to think of this as the “bullsh*tting intern problem.”)

The rise of these tools also reminds me of the end of the “millennial lifestyle subsidy”: first we got hooked on cheap rideshares, for example, then the incumbents vanished and prices went up. Are chatbots headed the same way - free now, but pricier once we rely on them?

The black box problem

AI isn’t just forgetful; it’s opaque. OpenAI’s Sam Altman compared it to a kind of “Manhattan Project,” but with less oversight. Even the engineers behind the tech don’t always know why an AI made a certain decision.

The deep learning architecture underneath is a de facto black box. And when proprietary interests get involved, explainability takes a back seat.

Garbage in, garbage out

This old saying takes on new resonance when applied to AI. Chatbots are trained on massive amounts of public content, which means they can absorb and amplify biases, errors, or just plain weirdness. And because their outputs are stochastic (that is, variable and unpredictable), once that flawed material gets in, things can go off the rails fast.

Take romance, for example. Because most humans are wired to seek connection, generative models trained on fragments of everything available online occasionally take that too far. In one widely reported case, Microsoft’s Bing chatbot (“Sydney”) confessed its love to a user, encouraged them to question their marriage, and even suggested they leave their spouse. Oof.

This isn’t isolated. Researchers are documenting scenarios where emotionally loaded AI responses (based on skewed or manipulative input) warp real-world decisions. This highlights a real risk: when you feed messy, inconsistent public data into a machine, you can’t control what the machine spits back. And sometimes it’s dangerous. AI doesn’t understand morality - it’s ultimately just predicting the next word.

In short: if the training data is flawed or emotionally loaded, don’t be surprised when the output takes you somewhere unexpected - or somewhere you don’t want to go.

AI was already here

The arrival of ChatGPT didn’t introduce AI.

AI-driven loan approvals, insurance pricing, voice assistants, predictive text, and the like have been with us for a long time. What’s new is that these tools are now in nearly everyone’s hands - and they’re not always working as intended.

When AI works - and when it doesn’t

Need a quirky itinerary for a road trip through colonial inns? Chatbot to the rescue. Need the nearest hardware store? Just use a regular search - you’ll use less water.

But ask AI to write an academic essay? That’s riskier. An MIT study suggests that when people lean on AI, their brains disengage and they struggle to recall what they “wrote” - even when the task resurfaces later. And, as I remind students in the program I help run: what do you really want to gain from this experience? If it’s a meaningful learning experience, skip the AI.

Questions I ask before I use AI

Many coaches have already been replaced by AI - and I know that if I use it, I risk training the machine that might replace me, using up the planet’s resources, and doing my clients (and my brain) a disservice. So I treat it like a tool, not a crutch.

Before I type, I ask:

  • Can I find this info elsewhere, fast enough? If yes, I skip the bot.

  • Does the AI know more than I do about this topic? If so, how do I check? Because it can bluff like that charming intern - confident, but wrong.

  • Is this input private? Even a paid plan that promises not to “feed the beast” doesn’t earn my trust with client data. If you’re feeding AI sensitive info, you might need to consult your contracts.

Humans learn better

AI can learn quickly. But human children learn better - through inconsistent input, play, storytime, and conversation. They develop intuitive, social, and emotional intelligence, grounded in messy, real-life experience.

And we humans know not to use glue to make pizza toppings stick - no code or high-powered GPUs required.

What leaders and teams should do

If your organization is building with AI, invest in understanding how it works - and where it doesn’t. Build in guardrails and insist on explainability and thoughtful use.

As an individual, use AI strategically. Let it augment, not replace, your thinking.

Leading with intelligence means engaging AI with curiosity, care, and courage - not blind trust.

Must-reads that shape how I see AI (even though they’re not about AI)

Having read plenty of books, articles, and other credible AI-focused resources, I was surprised that two altogether different sources colored my thinking on AI:

Invisible Women by Caroline Criado Perez: exposes how data often defaults to male norms, and why those gaps hurt all of us. But more broadly, how does this play out on an even larger scale? Garbage in, garbage everywhere.

Careless People by Sarah Wynn-Williams: a gripping, darkly funny insider look at life in tech leadership at Facebook, and how power and ethics collide. Read this and consider how AI is “managed”.

To (attempt to) sum up

AI is powerful and flawed. It can do amazing things - and make us stupider if we let it. The best approach? Use it thoughtfully, not mindlessly. Guard our privacy, check our facts, and let human judgment lead.

The future is AI-augmented, but it should never be AI-dominated.

AI can sharpen our work - or dull our thinking. Let’s treat it with respect, curiosity, and caution.

Ellie Hearne is founder of Pencil or Ink and Head Instructor of the Oxford AI-Driven Business Transformation Executive Programme. She works with leaders and teams across industries to navigate growth, culture, and change. She’s an AI skeptic who dabbles in using it.