
EU AI Act Prohibited Practices: Compliance Tips for Professionals

13 June 2025
Let’s face it: regulations aren’t exactly light reading, especially when they govern artificial intelligence (AI). But if you work in a finance firm, an international law practice, or a corporate setting with regulatory exposure, the EU Artificial Intelligence (AI) Act isn’t just another piece of red tape.
So, what are the prohibited practices under the EU AI Act?

In simple terms, Article 5 of the AI Act outright bans certain uses of AI that it deems too dangerous or ethically unacceptable. These practices are completely banned because they pose unacceptable risks to people's rights, safety, and well-being. Understanding these restrictions is crucial for any professional working with AI systems, especially as regulatory scrutiny intensifies.

Let’s break it down in plain English.

What Is the EU AI Act?

Before we get into the AI Act prohibited practices, let's quickly recap what the EU AI Act is (because nobody likes jumping into the deep end without knowing how to swim!).

The EU AI Act is the first major regulatory framework in the world focused solely on artificial intelligence. It ensures that AI systems used in the EU are safe, transparent, and aligned with fundamental rights.

Rather than a one-size-fits-all approach, the Act uses a risk-based classification system:

  • Minimal risk systems face minimal regulation
  • Limited risk systems must meet transparency requirements
  • High-risk AI systems must comply with strict requirements
  • Unacceptable risk systems are completely prohibited
This article focuses on the "Unacceptable Risk" category, also known as prohibited AI practices.
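The four-tier structure can be sketched as a simple lookup. This is a hypothetical illustration for orientation only, not an official or exhaustive taxonomy; the example systems named in it are common illustrations, not classifications from the Act itself:

```python
# Illustrative sketch of the EU AI Act's risk-based tiers (hypothetical
# mapping for explanation only -- not an official or exhaustive taxonomy).
RISK_TIERS = {
    "minimal": "No specific obligations (e.g. spam filters, AI in video games)",
    "limited": "Transparency duties (e.g. chatbots must disclose they are AI)",
    "high": "Strict requirements: risk management, data governance, human oversight",
    "unacceptable": "Prohibited outright under Article 5",
}

def obligations_for(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    return RISK_TIERS[tier.lower()]

print(obligations_for("unacceptable"))
```

The rest of this article sits entirely in the last row of that table.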

What Counts as a “Prohibited Practice”? AI Applications That Cross the Line

The EU AI Act explicitly bans certain AI practices deemed too dangerous for society. These are non-negotiable prohibitions that apply to all organisations operating within the EU market.

1. Social Scoring Systems

Remember that episode of Black Mirror (“Nosedive”) where people rated each other for everything? The EU doesn't want that becoming reality.

What's prohibited: AI systems that evaluate or classify individuals based on their social behaviour or personal characteristics, leading to detrimental treatment in social contexts unrelated to the contexts in which the data was generated.

Finance industry example: A bank cannot use an AI system that denies loans to individuals based on their social media behaviour, political views, or personal associations that have no direct relevance to creditworthiness.

Think about it: Would you want to be denied a mortgage because your AI-powered bank decided your weekend activities or friend circle made you "untrustworthy"? This prohibition ensures decisions about you remain relevant and fair.

2. Exploitative Systems Targeting Vulnerabilities

What's prohibited: AI systems that exploit the vulnerabilities of specific groups of persons due to their age, disability, or specific social or economic situations.

Finance industry example: An investment app cannot use AI to identify elderly clients with cognitive decline and automatically recommend complex financial products that aren't in their best interest but generate higher fees.

This prohibition protects vulnerable individuals from being targeted by predatory AI systems designed to take advantage of their circumstances. In the finance sector, where trust is paramount, this prohibition safeguards both clients and the industry's reputation.

3. Biometric Categorisation Systems Based on Sensitive Characteristics

What's prohibited: AI systems that categorise individuals based on biometric data to infer sensitive characteristics like race, political opinions, religious beliefs, or sexual orientation.

Finance industry example: A financial institution cannot deploy an AI system that uses facial recognition at branch entrances to classify customers by ethnicity or presumed socioeconomic status and then provide different service levels based on these classifications.

This prohibition helps ensure fair and equal treatment of all financial service customers, regardless of their personal characteristics.

4. Real-time Remote Biometric Identification in Public Spaces

What's prohibited: The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, with narrow exceptions for specific serious crimes. A closely related ban covers the untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases.

Finance industry example: While a bank can use facial recognition for secure access to safety deposit boxes with explicit customer consent, it cannot implement a system that continuously scans all visitors to its branches and matches them against watchlists without a proper legal basis.

5. Emotion Recognition in the Workplace and Educational Institutions

What's prohibited: AI that infers emotions in workplaces and educational institutions, with exceptions for medical and safety reasons.

Finance industry example: A financial firm cannot use AI to monitor employee facial expressions during client calls to evaluate their performance or emotional state (like in the movie “The Pod Generation”). However, AI systems that detect signs of fatigue in traders making critical financial decisions for safety reasons may be permitted.

This prohibition protects employee privacy and dignity, preventing invasive workplace surveillance that could create toxic environments.

6. Predictive Policing Based on Profiling

What's prohibited: AI systems that predict the likelihood of a natural person committing crimes based solely on profiling or assessment of personality traits.

While this might seem less relevant to finance at first glance, consider:

Finance industry relevance: Financial crime prevention teams must ensure their anti-fraud and anti-money laundering AI systems don't cross into "predictive policing" territory by making assumptions about individuals' criminal propensity based on demographic data rather than actual transaction patterns.

7. AI Systems That Manipulate Human Behaviour

What's prohibited: AI systems designed to manipulate human behaviour to circumvent free will through subliminal or deceptive techniques.

Prohibited AI examples: A financial advisory firm cannot use AI to analyse customer psychology and deploy personalised manipulative techniques that exploit cognitive biases to sell unsuitable investment products.

This prohibition is particularly important in financial services, where customers rely on trustworthy advice for life-changing decisions.

Penalties for Non-Compliance: Violations That Hit the Bottom Line

Understanding the AI Act prohibited practices is one thing; knowing what's at stake for violations drives home their importance.

The EU AI Act doesn't mess around when it comes to penalties. Violations of the prohibited practices can result in:

  • Fines of up to €35 million, or
  • 7% of total worldwide annual turnover for the preceding financial year, whichever is higher
To put this in perspective: for a large bank with annual revenue of €20 billion, a worst-case scenario fine could reach €1.4 billion. That's not just a slap on the wrist but a potential existential threat.
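That €1.4 billion figure is simply 7% of €20 billion. The "higher of a fixed cap or a turnover percentage" rule is easy to get wrong in a spreadsheet, so here is the arithmetic spelled out (the function name and integer-euro convention are my own, for illustration):

```python
# Worst-case fine for prohibited-practice violations: the higher of
# EUR 35 million or 7% of total worldwide annual turnover.
# Integer euros are used to keep the arithmetic exact.
def max_fine(annual_turnover_eur: int) -> int:
    return max(35_000_000, annual_turnover_eur * 7 // 100)

# A bank with EUR 20 billion in annual revenue:
print(f"EUR {max_fine(20_000_000_000):,}")  # EUR 1,400,000,000
```

Note that the fixed €35 million floor bites for smaller firms: 7% of a €100 million turnover is only €7 million, so the fine would still be capped at €35 million.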

Common Misconceptions (Let’s Bust a Few)

Infographic: EU AI Act expectations vs reality

Why You Need to Pay Special Attention

You might be wondering, "Why should I care more than professionals in other industries?"

Good question!

The finance sector faces unique challenges regarding AI regulation for several reasons:

  1. Data sensitivity: You handle some of the most sensitive personal data imaginable—financial records that reveal intimate details about people's lives.
  2. Algorithmic decision-making: Financial services increasingly rely on AI for critical decisions like credit approvals, investment recommendations, and fraud detection.
  3. Existing regulatory complexity: The finance industry already navigates complex regulations like GDPR, MiFID II, and Basel frameworks. The AI Act adds another layer.
  4. High-stakes outcomes: AI decisions in finance can profoundly impact individuals' lives, affecting their ability to buy homes, start businesses, or achieve financial security.

Practical Steps for Preparing

So, how can you protect your career and help your organisation navigate these prohibited practices? Here are some practical steps:

  1. Conduct a prohibited practices audit: Review all existing and planned AI applications against the prohibited list. If anything seems to operate in a grey area, seek legal counsel.
  2. Implement governance structures: Establish clear accountability for AI systems, with designated compliance officers who understand both the technology and regulatory requirements.
  3. Document decision-making processes: Maintain comprehensive records of how AI systems are designed, trained, tested, and deployed, with particular attention to avoiding prohibited applications.
  4. Train your teams: AI regulations are evolving rapidly. Regular training is essential to stay current on prohibited practices and their interpretations.
  5. Adopt ethics by design: Rather than treating compliance as a checkbox exercise, integrate ethical considerations into AI development from the beginning.
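Step 1, the prohibited practices audit, can begin as a crude first-pass screen before anything reaches legal counsel. The sketch below is entirely hypothetical: the keyword list, the function names, and the simple substring matching are illustrative simplifications, and a real audit would assess each use case against the full text of Article 5, not a keyword list:

```python
# Hypothetical first-pass screen of AI use-case descriptions against a few
# Article 5 categories. Illustrative only -- a keyword match is not a legal
# assessment, and grey areas still need qualified counsel.
PROHIBITED_KEYWORDS = {
    "social scoring": "Article 5(1)(c)",
    "emotion recognition": "Article 5(1)(f)",
    "biometric categorisation": "Article 5(1)(g)",
}

def screen(use_case_description: str) -> list[str]:
    """Return the Article 5 grounds a use-case description may touch."""
    text = use_case_description.lower()
    return [ref for kw, ref in PROHIBITED_KEYWORDS.items() if kw in text]

print(screen("Emotion recognition on client calls to score advisers"))
```

Even a rough screen like this forces teams to write down what each AI system actually does, which is half the battle for the documentation step above.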

The Hidden Opportunity in Compliance

While compliance might seem like a burden, there's actually a hidden opportunity. Organisations that master AI implementation governance gain:

Enhanced consumer trust: Demonstrating responsible AI use builds confidence with increasingly privacy-conscious customers.

Competitive advantage: As regulations tighten, firms with robust compliance frameworks can move faster while competitors struggle to catch up.

Talent attraction: Top data scientists and AI specialists increasingly want to work for organisations committed to ethical AI.

Reduced legal exposure: Proactive compliance reduces the risk of costly enforcement actions and litigation.

In other words, understanding prohibited practices isn't just about avoiding penalties—it's about positioning yourself and your organisation for success in the AI-transformed financial landscape.

Bottom Line: AI Compliance Is Now Business-Critical

The EU AI Act represents just the beginning of a global trend toward stronger AI regulation. Other jurisdictions are already following Europe's lead; China, for example, has implemented its own AI governance rules.

This regulatory convergence means that understanding prohibited practices under the EU AI Act provides valuable knowledge applicable beyond Europe. For multinational financial institutions, the EU regulations often become the de facto global standard to avoid maintaining different systems for different markets.

The prohibited practices in the EU AI Act shouldn't be viewed simply as limitations but as a roadmap toward responsible AI use in finance. By clearly defining what's off-limits, they help chart a course for innovation that respects fundamental rights and ethical principles.

Your understanding of these prohibitions positions you as a valuable asset in your organisation's compliance efforts. The financial institutions that will thrive are those that view prohibited practices not as obstacles but as guardrails that enable confident innovation within defined boundaries. By mastering this knowledge, you help your organisation navigate toward a compliant, ethical AI future.

Ready to become an EU AI Act Compliance expert?

If you're looking to deepen your expertise and advance your career, our comprehensive course on EU Artificial Intelligence (AI) Act Compliance is specifically designed to explore all aspects of the regulation, including prohibited practices, high-risk systems, and practical implementation strategies.

Don’t wait until compliance is urgent. Learn it while it’s still a competitive edge.

FAQ

What is Article 4 of the AI Act?

Article 4 of the EU Artificial Intelligence (AI) Act deals with AI literacy. It requires providers and deployers of AI systems to take measures to ensure a sufficient level of AI literacy among their staff and anyone else operating AI systems on their behalf, taking into account their technical knowledge, experience, education and training, and the context in which the systems will be used. The aim is to promote informed decision-making and responsible use of AI.

What is Article 13 of the AI Act?

Article 13 applies to high-risk AI systems. It requires them to be designed and developed so that their operation is sufficiently transparent for deployers to interpret the system's output and use it appropriately. Providers must supply clear, concise instructions for use covering the system's capabilities, limitations, intended purpose, and the conditions under which it should be used safely. This helps users make informed decisions and supports accountability and oversight.
Ready to master compliance with AI? Click below to find out more about Redcliffe Training’s course on the EU AI Act:

EU AI Act Compliance
