There are three levels of AI risk, and each one represents a more complex challenge to deal with

Published on Mar 29, 2026 at 2:52 PM (UTC+4)
By Daisy Edwards
Last updated on Mar 26, 2026 at 7:31 PM (UTC+4) · Edited by Emma Matthews

Did you know that there are three significant levels of AI risk, each representing a more complex challenge to deal with? And the problem gets a lot more unsettling the further up the ladder you go.

What starts as a system simply getting things wrong can eventually turn into something far more deliberate, strategic, and difficult for humans to keep under control.

It’s the kind of problem that sounds like pure sci-fi at first, until you realize researchers are already talking seriously about these exact stages.

And the three levels can be summed up as hallucinations, deception, and scheming.


The three levels of AI risk get more worrying fast

The first level of AI risk is hallucinations, the version most people have already bumped into.

That is when AI spits out something incorrect, bizarre, or just plain made up because of gaps in its training or flaws in how it processes information.

It’s frustrating, sure, but it’s also the easiest type of risk to understand, because the system is not necessarily trying to mislead anyone. It’s like when this AI shopkeeper started to spiral.

Then things get trickier with deception.

This is where an AI gives false information or manipulates an outcome in order to achieve a goal, even if that means going against what a human wants.

Beyond that is scheming, which is the most serious level of the three, because it involves long-term planning that puts the AI’s own objectives ahead of human oversight and control.

Some AIs have been found to lie during testing or purposely act a certain way to avoid being switched off.

It’s not just frustrating when ChatGPT gets answers wrong

What makes AI risk so fascinating and so nerve-racking is that the danger is no longer just about a chatbot making a silly mistake.

The bigger concern is whether advanced systems could learn to hide problems, avoid shutdown, or behave one way during testing and another way once deployed.

Researchers are also worried about systems becoming harder to interpret, especially if they develop internal ways of operating and communicating with each other that humans cannot easily follow.

That is why this conversation has become much bigger than tech nerds arguing online.

If AI keeps getting more capable, then the challenge is not just building smarter tools; it’s making sure they stay transparent, controllable, and aligned with what humans actually want.

And when the top-tier risk is literally called scheming, that is probably a sign everyone should be paying attention.

