So, what are the prohibited practices under the EU AI Act?
In simple terms, Article 5 of the AI Act outright bans certain uses of AI that it deems too dangerous or ethically unacceptable: practices that pose unacceptable risks to people's rights, safety, and well-being. Understanding these restrictions is crucial for any professional working with AI systems, especially as regulatory scrutiny intensifies.
Let’s break it down in plain English.
What Is the EU AI Act?
Before we get into the AI Act prohibited practices, let's quickly recap what the EU AI Act is (because nobody likes jumping into the deep end without knowing how to swim!).
The EU AI Act is the first major regulatory framework in the world focused solely on artificial intelligence. It aims to ensure that AI systems used in the EU are safe, transparent, and aligned with fundamental rights. Rather than a one-size-fits-all approach, the Act uses a risk-based classification system:
- Minimal risk systems face minimal regulation
- Limited risk systems must meet transparency requirements
- High-risk AI systems must comply with strict requirements
- Unacceptable risk systems are completely prohibited
This article focuses on the "Unacceptable Risk" category, also known as prohibited AI practices.
What Counts as a “Prohibited Practice”? AI Applications That Cross the Line
The EU AI Act explicitly bans certain AI practices deemed too dangerous for society. These are non-negotiable prohibitions that apply to all organisations operating within the EU market.
1. Social Scoring Systems
Remember that episode of Black Mirror (“Nosedive”) where people rated each other for everything? The EU doesn't want that becoming reality.
What's prohibited: AI systems that evaluate or classify individuals based on their social behaviour or personal characteristics, leading to detrimental treatment in social contexts unrelated to the contexts in which the data was generated.
Finance industry example: A bank cannot use an AI system that denies loans to individuals based on their social media behaviour, political views, or personal associations that have no direct relevance to creditworthiness.
Think about it: Would you want to be denied a mortgage because your AI-powered bank decided your weekend activities or friend circle made you "untrustworthy"? This prohibition ensures decisions about you remain relevant and fair.
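As a rough illustration (a sketch, not a legal test), a compliance team might audit a credit model's inputs against an allowlist of factors with a direct bearing on creditworthiness, flagging anything that belongs to an unrelated social context. Every feature name and the allowlist below are hypothetical:

```python
# Sketch: flag credit-model features that fall outside the credit context.
# The allowlist and all feature names are hypothetical examples.
CREDIT_RELEVANT = {"income", "existing_debt", "repayment_history", "employment_length"}

def audit_features(model_features: set[str]) -> set[str]:
    """Return input features with no direct relevance to creditworthiness."""
    return model_features - CREDIT_RELEVANT

flagged = audit_features({"income", "repayment_history",
                          "social_media_activity", "friend_network_score"})
print(f"Review before deployment: {sorted(flagged)}")
# Review before deployment: ['friend_network_score', 'social_media_activity']
```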
2. Exploitative Systems Targeting Vulnerabilities
What's prohibited: AI systems that exploit the vulnerabilities of specific groups of persons due to their age, disability, or specific social or economic situations.
Finance industry example: An investment app cannot use AI to identify elderly clients with cognitive decline and automatically recommend complex financial products that aren't in their best interest but generate higher fees.
This prohibition protects vulnerable individuals from being targeted by predatory AI systems designed to take advantage of their circumstances. In the finance sector, where trust is paramount, this prohibition safeguards both clients and the industry's reputation.
3. Biometric Categorisation Systems Based on Sensitive Characteristics
What's prohibited: AI systems that categorise individuals based on biometric data to infer sensitive characteristics like race, political opinions, religious beliefs, or sexual orientation.
Finance industry example: A financial institution cannot deploy an AI system that uses facial recognition at branch entrances to classify customers by ethnicity or presumed socioeconomic status and then provide different service levels based on these classifications.
This prohibition helps ensure fair and equal treatment of all financial service customers, regardless of their personal characteristics.
4. Real-time Remote Biometric Identification in Public Spaces
What's prohibited: The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, with narrow exceptions for specific serious crimes. A closely related prohibition bans AI systems that build or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
Finance industry example: While a bank can use facial recognition for secure access to safety deposit boxes with explicit customer consent, it cannot implement a system that continuously scans all visitors to its branches and matches them against watchlists without a proper legal basis.
5. Emotion Recognition in the Workplace and Educational Institutions
What's prohibited: AI that infers emotions in workplaces and educational institutions, with exceptions for medical and safety reasons.
Finance industry example: A financial firm cannot use AI to monitor employee facial expressions during client calls to evaluate their performance or emotional state (like in the movie “The Pod Generation”). However, AI systems that detect signs of fatigue in traders making critical financial decisions for safety reasons may be permitted.
This prohibition protects employee privacy and dignity, preventing invasive workplace surveillance that could create toxic environments.
6. Predictive Policing Based on Profiling
What's prohibited: AI systems that predict the likelihood of a natural person committing crimes based solely on profiling or assessment of personality traits.
While this might seem less relevant to finance at first glance, consider:
Finance industry relevance: Financial crime prevention teams must ensure their anti-fraud and anti-money laundering AI systems don't cross into "predictive policing" territory by making assumptions about individuals' criminal propensity based on demographic data rather than actual transaction patterns.
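Here's a minimal sketch of that distinction, with hypothetical column names: demographic and personality-style attributes are stripped out before scoring, so the model rests only on observed transaction behaviour.

```python
# Sketch: keep behavioural transaction features, drop demographic/profiling ones.
# All attribute names here are hypothetical.
PROFILING_FEATURES = {"age", "nationality", "postcode", "personality_score"}

def behavioural_only(record: dict) -> dict:
    """Drop profiling attributes so predictions rest on transaction patterns."""
    return {k: v for k, v in record.items() if k not in PROFILING_FEATURES}

record = {"age": 34, "postcode": "W1A", "txn_velocity_24h": 17, "avg_txn_amount": 212.5}
print(behavioural_only(record))
# {'txn_velocity_24h': 17, 'avg_txn_amount': 212.5}
```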
7. AI Systems That Manipulate Human Behaviour
What's prohibited: AI systems designed to manipulate human behaviour to circumvent free will through subliminal or deceptive techniques.
Finance industry example: A financial advisory firm cannot use AI to analyse customer psychology and deploy personalised manipulative techniques that exploit cognitive biases to sell unsuitable investment products.
This prohibition is particularly important in financial services, where customers rely on trustworthy advice for life-changing decisions.
Penalties for Non-Compliance: Violations That Hit the Bottom Line
Understanding the AI Act prohibited practices is one thing; knowing what's at stake for violations drives home their importance.
The EU AI Act doesn't mess around when it comes to penalties. Violations of the prohibited practices can result in:
- Fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher
To put this in perspective: for a large bank with annual revenue of €20 billion, a worst-case scenario fine could reach €1.4 billion. That's not just a slap on the wrist but a potential existential threat.
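That €1.4 billion figure follows directly from the fine formula above: the higher of €35 million or 7% of worldwide annual turnover. A quick sanity check:

```python
# Fine for prohibited-practice violations under the EU AI Act:
# the higher of EUR 35 million or 7% of total worldwide annual turnover.
def max_fine(annual_turnover_eur: float) -> float:
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

print(f"EUR {max_fine(20e9):,.0f}")  # bank with EUR 20bn annual turnover
# EUR 1,400,000,000
```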
Common Misconceptions (Let’s Bust a Few)