
As We See the Corporate World Adapt to AI, Are We Ignoring the Biggest Risk of All?

I have been advising an online tutoring company for the last 14 months or so, and I have seen their AI strategy pivot from 'something we are trying to build for the future to support revenue' to 'abandoning human tutoring entirely in favour of a full AI model'.

For them, if they do not successfully integrate AI in this way in the very near future, their ability to keep trading will be limited. It's a fast-paced and commercially competitive issue.

No one wants to be left behind – which is why the pace of adoption risks prioritising speed over safety. Artificial intelligence and its possibilities are impressive and unnerving in equal measure – and it's also treated like a silver bullet for:

  • Cutting costs
  • Boosting productivity
  • Redesigning teams around automation

I’ve worked in restructuring long enough to recognise transformational trends – but you don’t need to be an expert in your respective field to see how transformational AI is becoming, and the furious pace at which it is reshaping our world.

But what’s happening with AI isn’t just a trend — it’s a civilisational pivot point. Yet, amidst the excitement, we’re ignoring a much larger conversation:

What if the long-term risk from AI isn’t just economic disruption — but existential?

That might sound far-fetched. But consider this:

  • Over 1,000 AI researchers and technologists, including pioneers of the field, have signed public statements warning that AI poses extinction-level risks if left unchecked.
  • In a 2022 survey of 738 experts, the median estimated risk of failing to use Artificial General Intelligence (an AI that would match or surpass human intelligence) safely was between 5% and 20%.
  • Some of the people building this technology believe there’s a meaningful chance we’ll lose control of it — and there’s no playbook for recovering from that.

So how does this feed into our day jobs through the restructuring lens?

Companies are moving fast to integrate AI. That’s understandable – but it’s also potentially risky if we don’t pause to consider who is building this technology, why, and with what guardrails. While I'm not an AI specialist, my experience in restructuring businesses through disruption tells me this is a conversation that deserves attention.

The same systems we’re trusting to optimise supply chains, financial controls, or M&A analytics could eventually evolve beyond our ability to steer.

We help restructure organisations to preserve long-term value and solve complex problems with commercially viable solutions – so it makes perfect sense for companies to use AI to assist in this process.

But what good is long-term value if the structure of human civilization itself becomes unstable?

This isn’t a call to stop innovation — it’s a call to approach it more consciously.

If you’re in leadership, restructuring, or tech strategy, you already know the importance of planning for downside scenarios. This should be one of them.

Let’s engage in the real conversation — not just about AI as a tool, but as a force. Because this one might rewrite the rules completely.


Tags

AI, perspective