Thinkers360

The Real Question Leaders Are Asking Me About AI Isn’t Technical. It’s Human.

Jan


There’s a moment that keeps repeating itself in boardrooms, leadership offsites, and keynote briefings.

It usually comes after the slides.
After the AI roadmap.
After the future-of-work diagrams.

Someone leans forward and asks, quietly:

“Okay… but who actually decides?”

Not which system.
Not which vendor.
Not which model.

Who decides when humans and machines disagree.

That question sits underneath almost every conversation I’m having right now, across industries, sectors, and geographies. It’s the reason AI ethics keeps surfacing as a leadership issue rather than a technology one.

And it’s why so many organisations feel uneasy, even when their AI strategy looks solid on paper.

Because the tension isn’t about capability.
It’s about authority, trust, and responsibility.

AI didn’t create this problem. It exposed it.

Long before AI entered the room, organisations were already struggling with:

  • Diffused accountability
  • Decision-by-committee paralysis
  • Overreliance on dashboards instead of judgement

AI didn’t invent these cracks. It widened them.

When leaders ask me about AI ethics, what they’re really asking is:

  • Where does human judgement stop and automation begin?
  • Which decisions can be accelerated without eroding trust?
  • What happens when speed outpaces wisdom?

These aren’t abstract philosophical questions. They show up in very practical ways:

  • In hiring systems filtering people out
  • In credit, insurance, and risk decisions
  • In healthcare diagnostics and triage
  • In performance management and surveillance

Every one of these decisions affects real humans. And once trust is lost, it’s almost impossible to rebuild.

The future of work isn’t about jobs. It’s about decisions.

For years we’ve framed the future of work around roles, skills, and workforce models.

That framing is now incomplete.

Work is dissolving into decisions and tasks, and those decisions are increasingly shared between humans, machines, and AI.

This is the core of my HUMAND™ framework:

  • What should remain human
  • What machines do best
  • Where AI genuinely adds value

The organisations getting this right aren’t asking, “How much can we automate?”

They’re asking, “Where must humans stay in the loop?”

That distinction matters.

Because the fastest way to destroy morale, trust, and culture is to remove human agency without noticing you’ve done it.

Leadership has entered a new phase

We’ve moved beyond digital transformation.

What leaders are navigating now is decision transformation.

That means:

  • Designing who decides what
  • Making decision rights visible
  • Defining escalation when systems fail
  • Building ethical muscle memory before a crisis hits

This is where leadership either matures or fractures.

The leaders who thrive are not the ones who predict the future best. They’re the ones who prepare their organisations to respond with clarity when the future arrives unevenly, unpredictably, and all at once.

That’s why foresight matters.

Not as prediction.
As preparation.

Why this keeps showing up across industries

Finance, healthcare, education, logistics, mobility, government, retail. On the surface these sectors look different.

Underneath, they’re wrestling with the same thing:

Who carries responsibility when systems scale beyond human speed?

That’s the connective tissue leaders often miss when they treat AI, future of work, and strategy as separate conversations.

They aren’t.

They’re one conversation viewed from different angles.

What leaders don’t need right now

They don’t need:

  • Another trend list
  • Another hype cycle
  • Another AI demo divorced from consequence

What they need is:

  • Language for difficult trade-offs
  • Frameworks that travel from stage to boardroom
  • Permission to slow some decisions down while accelerating others

This is the work I do. Not because it’s fashionable, but because it’s necessary.

A closing thought

Every organisation is already choosing how human it wants its future to be.

Some are choosing deliberately.
Most are choosing by default.

The difference won’t show up immediately.

But it will show up in trust, culture, resilience, and reputation.

You can’t predict tomorrow.
But you can prepare for it.

And preparation starts with deciding who decides.

Choose Forward

By Morris Misel

Keywords: AI Ethics, Future of Work, Leadership
