Jan 04
There’s a moment that keeps repeating itself in boardrooms, leadership offsites, and keynote briefings.
It usually comes after the slides.
After the AI roadmap.
After the future-of-work diagrams.
Someone leans forward and asks, quietly:
“Okay… but who actually decides?”
Not which system.
Not which vendor.
Not which model.
Who decides when humans and machines disagree.
That question sits underneath almost every conversation I’m having right now, across industries, sectors, and geographies. It’s the reason AI Ethics keeps surfacing as a leadership issue rather than a technology one.
And it’s why so many organisations feel uneasy, even when their AI strategy looks solid on paper.
Because the tension isn’t about capability.
It’s about authority, trust, and responsibility.
AI didn’t create this problem. It exposed it.
Long before AI entered the room, organisations were already struggling with questions of authority, trust, and responsibility.
AI didn’t invent these cracks. It widened them.
When leaders ask me about AI ethics, what they’re really asking is who decides, and who answers for it, when humans and machines disagree.
These aren’t abstract philosophical questions. They show up in very practical, everyday decisions.
Every one of those decisions affects real humans. And once trust is lost, it’s almost impossible to rebuild.
The future of work isn’t about jobs. It’s about decisions.
For years we’ve framed the future of work around roles, skills, and workforce models.
That framing is now incomplete.
Work is dissolving into decisions and tasks, and those decisions are increasingly shared between humans, machines, and AI.
This is the core of my HUMAND™ framework.
The organisations getting this right aren’t asking, “How much can we automate?”
They’re asking, “Where must humans stay in the loop?”
That distinction matters.
Because the fastest way to destroy morale, trust, and culture is to remove human agency without noticing you’ve done it.
Leadership has entered a new phase
We’ve moved beyond digital transformation.
What leaders are navigating now is decision transformation.
That means rethinking which decisions stay with humans, which are shared, and which are delegated to machines.
This is where leadership either matures or fractures.
The leaders who thrive are not the ones who predict the future best. They’re the ones who prepare their organisations to respond with clarity when the future arrives unevenly, unpredictably, and all at once.
That’s why foresight matters.
Not as prediction.
As preparation.
Why this keeps showing up across industries
Finance, healthcare, education, logistics, mobility, government, retail. On the surface, these sectors look different.
Underneath, they’re wrestling with the same thing:
Who carries responsibility when systems scale beyond human speed?
That’s the connective tissue leaders often miss when they treat AI, future of work, and strategy as separate conversations.
They aren’t.
They’re one conversation viewed from different angles.
What leaders don’t need right now
They don’t need more predictions.
What they need is preparation: clarity about who decides, and how, before the moment arrives.
This is the work I do. Not because it’s fashionable, but because it’s necessary.
A closing thought
Every organisation is already choosing how human it wants its future to be.
Some are choosing deliberately.
Most are choosing by default.
The difference won’t show up immediately.
But it will show up in trust, culture, resilience, and reputation.
You can’t predict tomorrow.
But you can prepare for it.
And preparation starts with deciding who decides.
Choose Forward
—
Morris Misel
Keywords: AI Ethics, Future of Work, Leadership