Europe hands out AI timeouts - Understanding the EU AI Act

The way the AI Act works is pretty clever. It doesn't treat every AI system the same. Instead, it uses a risk-based approach, similar to how other industries are regulated.


You've probably seen headlines warning that AI is coming for your job, spying on your face, or turning your fridge into a therapist. But beyond the sci-fi drama, a more grounded and important story is unfolding in Brussels: the EU AI Act. It's Europe's big move to ensure AI works for people — not against them. Whether you're a parent, a student, or just someone curious about how AI is creeping into everyday life, this new law is more relevant than it might seem. And don't worry — this article avoids the technical jargon. Think of it as the straightforward version for anyone wanting to understand what's going on.

At its core, the AI Act is about power — who has it, how it's used, and how to keep it in check. AI systems aren't just novelty apps or clever toys anymore. They're helping to decide who gets a mortgage, which CVs get shortlisted, and even which streets the police patrol more heavily. With that kind of influence, it's no wonder the EU is stepping in. The AI Act sets clear rules to prevent systems from manipulating people, reinforcing unfair biases, or secretly making harmful decisions. It's not about blocking innovation but about protecting the public while allowing tech to grow responsibly.

The way the AI Act works is pretty clever. It doesn't treat every AI system the same. Instead, it uses a risk-based approach, similar to how other industries are regulated. A chatbot giving customer service advice carries less risk than a facial recognition system used in public places. So AI systems are grouped into four categories based on their risk. Unacceptable-risk systems are at the top of the list, and these are banned outright. Next are high-risk systems, like AI used in hiring or law enforcement, which must follow strict rules. Then come systems with transparency risk, which must tell users they're dealing with AI — like chatbots or image generators. Lastly, there are minimal-risk systems — things like photo filters or basic spam blockers that don't need to meet any extra requirements.

But how does the Act decide what counts as AI in the first place? It focuses on systems that use a process called "inference" — in other words, tools that take data and draw conclusions or make predictions based on it. If it's just following a script or repeating instructions, like a calculator or thermostat, it's not considered AI under this law. But if the system is analysing patterns and making decisions — even something simple like recommending a product or scanning a CV — then it's likely covered.

Now, let's look at the types of AI that are completely banned under the Act. The EU considers these practices so risky or unethical that they're not allowed at all. First, there's manipulative AI: systems designed to influence people in ways they don't fully realise — for example, using subtle psychological tricks to nudge them towards buying something or adopting a particular view. Then there's AI that targets vulnerable people, such as children or older people, with the intention of causing harm or taking advantage of them — like a system that pushes gambling ads to teenagers or manipulates lonely pensioners.

Social scoring is another red line. This is when an AI system ranks or evaluates people based on their behaviour, beliefs, or background — like what they post on social media or who they associate with — and then uses that to decide whether they should get access to certain services or opportunities. It's been widely criticised as discriminatory and invasive. Similarly, predictive policing systems that try to guess who might commit a crime based purely on personal traits or past behaviour are banned. These systems raise significant concerns about profiling and unfair targeting.

Another banned practice is untargeted scraping — pulling facial images in bulk from the internet or public CCTV footage without people's permission. This kind of data is often used to build facial recognition databases, but the privacy risks are huge, so the Act bans creating or expanding facial recognition databases this way outright.

When it comes to workplaces and schools, the Act bans emotion recognition systems — AI that tries to analyse facial expressions or voice tones to guess how someone is feeling. These systems are prohibited in professional and educational settings unless they're used for specific purposes, like medical diagnosis or safety monitoring. The same goes for biometric categorisation, which involves labelling people based on sensitive traits like race, sexuality, or political beliefs. The EU considers this a serious risk to personal dignity and equality.

Perhaps the most headline-grabbing ban is on real-time remote biometric identification in public spaces — essentially facial recognition used by police as events unfold. The Act says this technology may only be used in exceptional circumstances, like searching for missing children or preventing a terrorist attack. Even then, it requires special approval.

Not everything falls under the Act, though. If you're using AI for personal reasons — like a smart speaker at home or a health app on your phone — the law doesn't apply. The same goes for systems still in research and development, or those used purely for scientific purposes. This shows the EU isn't trying to stop innovation — it's trying to guide it.

So, who's making sure the rules are followed? Each EU country will have its own AI authority to keep an eye on things. They'll be working with an EU-level AI Office that coordinates efforts and gives advice on tricky cases. There's also an AI Board, which brings together experts from across Europe to keep the rules clear and up to date. And while the AI Act is a law, it's designed to work alongside others, like GDPR, so that data protection and AI safety go hand in hand.

Ultimately, the AI Act is about drawing a line between what's acceptable and what's not — not just to stop abuse but to build public trust in a fast-changing world. Most people don't want AI to take over — they want it to be fair, safe, and respectful of human rights. By setting clear rules now, the EU is trying to shape a future where technology serves people, not the other way around.
