Thinkers360

We Imagined AI Long Before We Built It

May

This written content was disclosed by the author as AI-augmented.

Why old science fiction fantasies are becoming engineering problems, and what leaders should do about it

A lot of the future arrives twice.

The first time, it arrives as fantasy.

A cartoon.
A comic book.
A film.
A story that sounds ridiculous until it doesn’t.

The second time, it arrives as engineering.

Not as magic.
Not as a perfect version of what we imagined.
As something more practical, more awkward, more expensive, and far more consequential.

That’s the shift I keep coming back to.

Because many of the things now being discussed seriously in boardrooms, labs, factories, and strategy rooms were once dismissed as science fiction. Humanoid robots. Machines that can work alongside us. Energy captured in one place and redirected to another in entirely new ways. Systems that can think, adapt, respond, and increasingly act.

The details are often different from what fiction promised.

But the deeper human cravings are not.

We wanted help.
We wanted ease.
We wanted speed.
We wanted less friction.
We wanted something beyond us doing some of the thinking.

That is why today’s AI conversation can feel strangely familiar.

Old science fiction was not a blueprint. It was a wish list.

One of the mistakes people make is treating science fiction as failed forecasting.

That misses the point.

Science fiction was never mainly about accuracy. It was about projection. It gave shape to what people hoped, feared, or longed for. It turned human desires into characters, machines, and imagined worlds.

That matters, because those desires still drive innovation now.

Not the exact shape of the flying car.
Not the exact design of the robot.
Not the exact mechanics of the machine mind.

The desire underneath it.

That is what persists.

So when we look at current developments in AI, robotics, and energy systems, the better question is not whether old fiction “got it right”.

The better question is:

What old human wish is this technology finally making practical?

That is a far more useful strategic lens.

When fantasy becomes engineering, money starts moving

This is the moment leaders need to get better at recognising.

An idea can sit in the realm of fantasy for decades. It entertains people, inspires inventors, and quietly shapes cultural expectation. Then one day, it shifts category.

It becomes an engineering problem.

That is the turning point.

Because once that happens, the idea can be:

  • funded

  • patented

  • prototyped

  • tested

  • refined

  • commercialised

  • scaled

That is when the future begins to move from symbolic to operational.

And that is also when many leaders miss it.

Why?

Because at first it still looks faintly absurd.

It still sounds theatrical.
Still carries the scent of fantasy.
Still feels easier to laugh at than to interpret.

But by the time it looks normal, practical, and commercially viable, the real shift is already underway.

Humanoid robots are a good example

Take the current excitement around humanoid robots.

For years, people have imagined human-shaped machines as companions, assistants, servants, or threats. We have seen them in stories, cartoons, novels, films, and toy aisles. They are not new in the cultural imagination.

What is changing is not the fantasy.

It is the engineering.

Now we are seeing more serious attempts to build machines that can move through human environments, carry things, grip, balance, recover, crouch, and perform repetitive or awkward tasks with increasing reliability.

That does not mean the old fantasy has arrived intact.

In fact, the use case is changing.

Sci-fi gave us robot companions.

Business is far more interested in robot productivity.

That is the part many people miss.

The future often keeps the craving, but changes the commercial application.

And that matters for leadership because it shifts the conversation away from novelty and into operations.

Not “wouldn’t this be amazing?”

But:

  • where would this reduce friction?

  • what does this change about labour design?

  • what becomes cheaper?

  • what becomes scalable?

  • what new risks appear?

  • what trust thresholds need to be crossed?

Those are strategy questions, not science fiction questions.

The hand and knee problem is really a business problem

This is where things get interesting.

A humanoid robot is not commercially useful just because it exists.

It becomes useful when it can reliably function in the human-built world.

That means being able to:

  • grip with precision

  • apply light or heavy pressure appropriately

  • balance

  • crouch

  • lift

  • recover from instability

  • move through unpredictable environments

  • repeat those behaviours safely and consistently

That may sound technical, but it has direct strategic implications.

Because it tells us where the actual bottlenecks are.

The future rarely hinges on the glamorous headline.

It usually hinges on whether the thing can work reliably enough, cheaply enough, and safely enough to matter at scale.

That is the same question leaders should ask of every emerging technology.

Not “is this impressive?”

But “what still has to be solved before this becomes operationally real?”

AI is not magic. It is an accelerator

A lot of people talk about AI as if it is summoning the future out of nowhere.

That is not what is happening.

AI is doing something both simpler and more powerful.

It is accelerating the engineering process.

It helps humans model more quickly, compare more options, optimise systems faster, detect patterns earlier, and reduce the time between idea and test. That matters enormously.

Because many technologies do not fail for lack of imagination.

They fail because the hard bits take too long, cost too much, or remain too clumsy.

AI shortens parts of that journey.

So when people ask whether AI is making science fiction real, my answer is this:

Not directly.

But it is helping turn parts of fantasy into workable engineering challenges faster than before.

That is enough to change the pace of commercial reality.

Energy is part of this story too

One of the less romantic truths about the AI era is that intelligence is physical.

It needs power.

AI may feel weightless when we interact with it through a screen, but behind that smooth response sit data centres, compute loads, cooling demands, storage needs, and enormous electricity requirements.

That is why some old, strange ideas are returning to the conversation.

When energy demand grows large enough, even ideas that once sounded fanciful start to become commercially interesting again.

This is where leaders need to pay attention.

Because emerging technology does not just create new products.

It puts pressure on infrastructure.

And infrastructure pressure is often where the next wave of opportunity and disruption begins.

The leaders who see that early tend to ask better questions than everyone else.

Not just:

“What does this technology do?”

But:

“What does this technology need?”

That is where many second-order effects begin.

The deeper signal is human

This is the part I think matters most.

These are not just technology stories.

They are human stories.

Why do we keep wanting machines to look like us?
Why do we project intelligence into non-human things?
Why do we respond differently to a screen, a voice, a dog-shaped robot, and a humanoid one?
Why do some technologies feel intuitive before they are even useful?

Because technology does not arrive into a neutral landscape.

It arrives into human desire.

That means every emerging technology is shaped not just by capability, but by:

  • trust

  • familiarity

  • symbolism

  • comfort

  • fear

  • aspiration

  • cultural memory

This is why leadership teams need more than technical literacy.

They need interpretive literacy.

They need to understand not just what a thing can do, but what it means to people, what it represents, and what response it is likely to trigger.

That is where foresight becomes commercially valuable.

The Ripple Effects sit beyond the novelty

One of the reasons I keep using the language of Ripple Effects is that it stops us fixating on the surface event.

A robot is never just a robot.

It is also a signal about:

  • labour design

  • workplace trust

  • insurance

  • regulation

  • safety

  • capability expectations

  • skill redesign

  • what gets automated next

  • what still needs to remain deeply human

An energy breakthrough is never just an energy breakthrough.

It is also about:

  • cost

  • access

  • inequality

  • infrastructure

  • competition

  • resilience

  • who benefits first

  • what becomes viable because power becomes more available

This is the practical value of foresight.

It stops leaders asking only whether the thing itself matters.

It encourages them to ask what the thing changes next.

That is nearly always the more strategic question.

HUMAND becomes more useful here, not less

This is exactly where my HUMAND thinking helps.

For me, HUMAND is not a slogan. It is a practical design lens.

It asks:

What should be done by humans?
What should be done by machines?
What should be done by AI?
What should be done by some combination of all three?

That is the real work now.

Not the tired debate about humans versus machines.

But the more useful conversation about allocation, design, judgement, and fit.

As technologies become more capable, this question gets sharper.

Because capability alone is not enough.

We still need to ask:

  • what should remain human because it requires trust, care, context, or judgement?

  • what should be automated because repetition adds no value?

  • what should be augmented because the combination is stronger than either alone?

That is where leaders will increasingly win or lose.

Not in whether they adopt the latest thing first.

But in whether they redesign work, service, and value intelligently.

So what should leaders do now?

This is the part that matters most.

If you are leading a business, a team, or a strategy function, here are the practical next steps I would suggest.

1. Stop dismissing strange signals too early

Some of the most important future shifts look laughable before they look inevitable. Build the habit of watching odd ideas for commercial traction rather than dismissing them on first contact.

2. Ask what desire sits underneath the technology

Do not just analyse the product. Analyse the craving. Is it about convenience, speed, status, safety, productivity, companionship, trust, or something else? That tells you more about its likely trajectory.

3. Watch for the fantasy-to-engineering crossover

Once an idea starts attracting serious engineering effort, meaningful capital, and operational testing, it has moved into a different category. That is when leadership attention should increase.

4. Think in Ripple Effects, not isolated headlines

The novelty itself is rarely the whole story. Ask what becomes easier, cheaper, faster, more scalable, or suddenly outdated if this thing matures.

5. Redesign work through HUMAND

Do not ask only whether a machine can do something. Ask whether it should. Ask what remains human, what becomes automated, and what works best in combination.

6. Prepare culturally, not just technically

Adoption depends on trust and fit. The most elegant technology in the world will still fail if people do not accept it, understand it, or see where it belongs.

Final thought

We imagined a lot of this long before we could build it.

That is not trivial.

It means that many technologies arrive into a world that has already rehearsed them emotionally, culturally, and symbolically. People have already dreamed them, feared them, named them, and placed them in stories.

That is why the present can feel so familiar.

But the more useful question for leaders is not whether science fiction is coming true.

It is whether fantasy has crossed the line into engineering.

Because once that happens, the rest of the system tends to move.

Funding follows.
Patents follow.
Trials follow.
Products follow.
Competition follows.
And eventually, ordinariness follows.

That is when most people finally notice.

By then, the early signal is already gone.

If your organisation is trying to make sense of the signals reshaping technology, work, leadership, and decision-making, that’s the work I do through keynotes, strategy sessions, and advisory.

Choose Forward.

By Morris Misel

Keywords: AI, Future of Work, Robotics
