Across industries, organisations are rapidly adopting AI to improve productivity, insight, and efficiency.
What is less discussed is how these tools are beginning to influence decision-making itself.
This shift is subtle.
But it’s significant.
Because it changes the role AI plays inside organisations: not just a tool, but increasingly a reference point for what to do next.
From assistance to influence
Most organisations began their AI journey with a clear intent.
Use it to:
accelerate tasks
generate ideas
process information
That phase is still very much alive.
But underneath it, a different behaviour is emerging.
AI is no longer just assisting work.
It is starting to shape decisions.
Questions are shifting from:
“What can AI help us do?”
to:
“What does the AI suggest we should do?”
That may sound like a small adjustment.
In reality, it’s a structural shift in how decisions are formed.
Why this shift is happening
The environment leaders are operating in has changed.
There are:
more variables
more uncertainty
more possible outcomes
In that context, AI becomes appealing not just because of what it can do, but because of how it presents information.
Structured.
Coherent.
Confident.
It creates the impression of clarity.
And in uncertain environments, clarity is highly valued.
PTFA and the quiet outsourcing of thinking
One of the patterns I see repeatedly in leadership teams is what I refer to as PTFA.
Past Trauma, Future Anxiety.
Past Trauma reflects earlier decisions that did not land as expected.
Future Anxiety reflects the desire to avoid repeating those outcomes.
When those two combine, decision-making becomes more cautious.
So when AI produces an answer that appears well-reasoned and complete, it feels safer to lean on it.
Not blindly.
But with less challenge than might have been applied before.
Over time, this creates a subtle shift.
Organisations begin to outsource not just execution, but elements of thinking.
The illusion of certainty
AI-generated outputs often appear confident.
That is part of their strength.
But it also introduces risk.
Because human judgement does not respond only to accuracy.
It responds to how something is presented.
When something sounds right, it is more likely to be accepted.
Especially under time pressure.
The issue is not that AI is incorrect.
The issue is that confidence can be mistaken for correctness.
What AI is actually revealing
Seen differently, AI is not replacing judgement.
It is revealing where judgement is underdeveloped.
Where organisations lack clear decision frameworks, AI fills the gap.
Where context is weak, AI offers a generalised version of it.
Where priorities are unclear, AI generates multiple options.
It becomes a proxy.
Not because it is the right source of truth.
But because something else is missing.
Rebalancing through HUMAND
In my work, I use a framework called HUMAND.
It looks at how work is distributed across:
Human
Machine
AI
AI is highly effective at:
processing
generating
analysing
Machines are effective at:
executing
scaling
repeating
Humans remain uniquely strong in:
interpretation
context
judgement
decision-making
The current imbalance is not that AI is doing too much.
It is that human judgement is sometimes being underutilised.
More on HUMAND here:
https://www.morrisfuturist.com/workforce-revolution-why-jobs-are-over-but-work-is-just-beginning/
Immediate Futures vs expanded possibilities
AI expands the range of possible futures.
It can generate multiple scenarios, strategies, and directions quickly.
This is valuable.
But it also increases complexity.
Leaders are not short of options.
They are short of clarity about which options matter now.
This is where an Immediate Futures lens becomes useful.
Instead of asking:
“What could happen?”
The focus shifts to:
“What is already happening that requires a response?”
This reduces noise.
And brings judgement back into focus.
The ripple effects of over-reliance
When organisations rely too heavily on AI for direction, a number of effects begin to emerge.
Decisions feel less owned.
Discussion becomes narrower.
Context becomes flattened.
Confidence in internal judgement weakens.
These shifts are gradual.
But they accumulate.
A practical question for leaders
In discussions where AI-generated input is used, a simple question can reintroduce balance:
“Do we agree with this because it is right, or because it is well presented?”
That distinction matters.
It re-engages human judgement.
The real opportunity
The opportunity here is not to reduce the use of AI.
It is to rebalance it.
In environments where information is abundant, the differentiator is no longer access to knowledge.
It is the ability to:
interpret
contextualise
prioritise
decide
AI makes this more visible.
Not less relevant.
Where this is heading
The organisations that will navigate this well are not those that rely least on AI.
They are the ones that understand where it fits.
Where it adds value.
Where it needs to be challenged.
Where human judgement must remain central.
This is becoming a more important capability than the technology itself.
Final thought
AI will continue to improve.
That is not in question.
The more important question is:
Will human judgement strengthen alongside it?
Or will it quietly recede?
Because the answer to that will shape not just decisions.
But direction.
Choose Forward
Morris Misel
Foresight Strategist
https://www.morrismisel.com