AI Regulation in the EU: Why Most Businesses Won’t Have to Worry

I admit the EU AI Act 2024 spooked me on first reading last summer. I thought it would be quite a task to track the risk level and usage of every AI system across an organisation.

But having read the Commission’s guidance published this week, I realise that most of the Act’s measures won’t apply to 95% of organisations. (See: Commission publishes the Guidelines on prohibited artificial intelligence (AI) practices, as defined by the AI Act.)

You might need to tread carefully when navigating certain AI types, but most organisations will only need to adhere to the advisory components.

Here are the AI practices that are outlawed; these prohibitions came into effect in February 2025:

  • Harmful Manipulation & Deception – e.g. increasing prices based on someone’s emotional state
  • Exploitation of Vulnerabilities – e.g. taking advantage of protected characteristics (age, disability and so on)
  • Social Scoring – e.g. classifying people based on their social behaviour or personality in a way that leads to unfair treatment
  • Predicting Crime Based on Personal Traits – e.g. discriminatory arrests based on where someone has been or what they look like
  • Mass Scraping of Facial Images for Recognition Databases – e.g. scraping Facebook photos or CCTV footage without consent
  • Emotion Recognition in Workplaces & Schools – e.g. grading people differently based on their facial expressions
  • Biometric Categorisation for Sensitive Characteristics – e.g. sorting people into different queues depending on their race or religion
  • Real-Time Remote Biometric Identification – e.g. real-time CCTV monitoring and tracking without justification

Beyond the outlawed practices, the EU AI Act classifies remaining systems as “High Risk”, “Limited Risk” or “Minimal Risk”.

High-Risk AI Use Cases Under the EU AI Act

  • Critical Infrastructure & Safety – AI in transport, energy, or healthcare (e.g., robot-assisted surgery) where failures could endanger lives.
  • Education & Employment – AI in exam scoring, university admissions, hiring, and workplace management (e.g., CV sorting).
  • Essential Services & Finance – AI systems determining access to loans, public benefits, or key services (e.g., credit scoring).
  • Biometric Identification & Surveillance – AI for facial recognition, emotion detection, or biometric categorisation (e.g., tracking individuals).
  • Law Enforcement & Security – AI predicting crimes, assessing evidence reliability, or aiding police investigations.
  • Migration & Border Control – AI for visa processing, asylum applications, and automated border screenings.
  • Justice & Democracy – AI influencing court rulings or political decision-making.

Any organisation using AI that the EU deems high risk will need to put the following management measures in place to prevent bias and discrimination and to protect human rights:

  • Risk Management – Assess and mitigate AI risks.
  • Bias Prevention – Use high-quality, fair datasets.
  • Traceability – Log AI activity for audits (see the sketch after this list).
  • Documentation – Provide clear regulatory info.
  • Transparency – Inform users about AI use.
  • Human Oversight – Keep humans in control.
  • Security & Accuracy – Ensure robustness & reliability.

Why most businesses won’t need to worry

Under the EU AI Act, most organisations will find themselves dealing with limited-risk or minimal-risk AI, which means lighter regulatory obligations but still some key responsibilities. Understanding where your AI use case falls is crucial to ensuring compliance without unnecessary red tape.
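
As a rough illustration of that triage (my own sketch in Python, not an official tool and not a legal assessment), an organisation could start with a simple inventory that records each AI use case and the risk tier it appears to fall under:

    from dataclasses import dataclass

    # The tiers mirror the Act's categories; assigning a real system to a tier
    # is a legal judgement, which this sketch cannot make for you.
    TIERS = ("prohibited", "high", "limited", "minimal")

    @dataclass
    class AIUseCase:
        name: str
        purpose: str
        tier: str  # one of TIERS, assigned after reviewing the Act's categories

        def __post_init__(self):
            if self.tier not in TIERS:
                raise ValueError(f"Unknown risk tier: {self.tier}")

    inventory = [
        AIUseCase("CV screening", "rank job applicants", "high"),
        AIUseCase("Support chatbot", "answer customer questions", "limited"),
        AIUseCase("Spam filter", "sort inbound email", "minimal"),
    ]

    # Anything tagged high risk is where the management measures listed above apply.
    for case in inventory:
        print(f"{case.name}: {case.tier} risk – {case.purpose}")

In practice the tier recorded for each row would come from a compliance or legal review, not from the code itself.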

Vertical Sectors and Industries most likely to be impacted

The following vertical sectors and industries are most likely to be impacted by the EU AI Act 2024:

Public Sector

  • Critical Infrastructure & Social Welfare: Transportation (traffic control, public transit), energy (public utilities), healthcare (state hospitals), welfare systems (benefits and social services).
  • Education: Public universities, schools, and testing/admissions systems.
  • Law Enforcement & Border Control: Predictive policing, biometric identification, automated border checks, asylum/visa processing.
  • Justice & Democracy: Court decision support, sentencing recommendations, and AI in political or legislative processes.

Private Sector

  • Infrastructure & Healthcare (Privately Operated): Energy (private power grids), healthcare (private hospitals, diagnostic AI).
  • Employment & Workplace: CV sorting, hiring tools, performance analytics, HR automation.
  • Finance & Insurance: AI for credit scoring, loan approvals, fraud detection, claims processing.
  • Biometric Identification & Surveillance: Facial recognition in retail, commercial security systems, private legal/compliance tools (document review, risk analysis).
