On February 4, 2025, the European Commission published a set of draft guidelines[1] aimed at providing clarity on prohibited AI practices under the newly enacted EU Artificial Intelligence (AI) Act[2] (Draft Guidelines). The Draft Guidelines, which were released two days after the AI Act’s ban on prohibited AI practices became applicable,[3] are an important step toward ensuring the ethical deployment of AI technologies across the European Union. The guidelines address a range of AI systems considered to pose an unacceptable risk to human dignity, privacy, and public security, and they underscore the EU’s commitment to balancing technological innovation with safeguarding fundamental rights.
While these guidelines are non-binding and subject to formal adoption,[4] they offer critical insights into how AI systems should operate within the regulatory framework. An overview is provided below.
What’s at Stake: The Prohibited Practices
Article 5 of the EU AI Act[5] outlines specific AI practices that are banned outright due to their harmful impact on human rights and values. The Draft Guidelines cover a variety of prohibited AI practices under the AI Act. Some of the most critical concerns include manipulative AI practices, harmful exploitation of vulnerabilities, and biometric surveillance. Below, we will examine key examples highlighted in the Draft Guidelines, shedding light on their implications for businesses and individuals.
Subliminal manipulation (Art. 5(1)(a))
AI systems that deploy manipulative techniques to deceive individuals by distorting their decision-making ability, often without their conscious awareness, fall under this prohibition. Examples include deepfakes, AI chatbots impersonating loved ones for scams, or subliminal messages embedded in media content.[6]
Key insights from the Draft Guidelines:
- Scope of the prohibition: Practices not likely to cause significant harm are excluded from this prohibition.[7]
- No need for intent to deceive: The guidelines emphasize that the distortion of behavior need not be intentional; the mere use of manipulative or deceptive techniques is sufficient to trigger the prohibition.[8]
- Turning to consumer protection law: The Commission looks to EU consumer protection law to assess what constitutes a “material distortion of behavior.”[9]
- Risk mitigation through labeling: Visible labeling, such as marking deepfakes or chatbots as such, reduces the risk of deceptive impact.[10]
Harmful exploitation of vulnerabilities (Art. 5(1)(b))
This prohibition covers AI systems that target individuals with vulnerabilities due to age, disability, or socio-economic factors with the aim of exploiting these weaknesses to cause significant harm.[11] Examples include AI systems targeting older people with fraudulent medical schemes or addiction-inducing apps targeting younger users.[12]
Key insights from the Draft Guidelines:
- Vulnerabilities are broadly defined: Vulnerabilities encompass cognitive, emotional, and physical weaknesses affecting decision-making ability.[13]
- Exploitation examples provided: Exploitation refers to using these vulnerabilities to harm the individual or group. AI systems engaging in manipulative advertising or interactions that cause harm are covered by this prohibition.[14]
- Intersectionality of vulnerabilities: Socio-economic vulnerabilities intersecting with race or ethnicity may lead to systemic discrimination, which AI systems must avoid.[15] Migrants or refugees facing socio-economic instability are also protected.[16]
Social scoring (Art. 5(1)(c))
The evaluation or classification of individuals based on social behavior or personality traits that leads to harmful or unfair treatment is prohibited. This treatment must be either unrelated to the original context of the data or disproportionate to the severity of the behavior being assessed.[17]
Key insights from the Draft Guidelines:
- Scope of the prohibition: AI systems used to evaluate or classify individuals or groups based on their social behavior or personal characteristics (e.g., creditworthiness, social behavior) are prohibited if they result in harmful or unfair treatment. This restriction applies across both public and private sectors.[18]
- Data spanning over time: The prohibition only applies if the data used for evaluation spans a “certain period of time”; one-time evaluations may not be covered. However, continuous surveillance to assess behavior or characteristics falls within the scope of the prohibition.[19]
- Score usage: Social scores remain prohibited even when they are used by a different organization than the one that generated them.[20]
Predicting criminal behavior (Art. 5(1)(d))
AI systems predicting criminal behavior based solely on profiling or personality traits are prohibited. This applies to systems assessing or predicting the risk of a person committing a crime without objective, verifiable evidence. The prohibition is specifically aimed at preventing the use of AI to make these predictions based purely on personal characteristics such as nationality, debt level, or place of birth.[21]
Key insights from the Draft Guidelines:
- Scope of the prohibition: AI systems used solely for predicting criminal behavior based on profiling or personality traits are banned. This does not apply when the AI supports human assessments based on verifiable facts.[22]
- Private actors: The prohibition applies to private actors, particularly when they act on behalf of law enforcement,[23] or when they assess or predict the risk of criminal behavior for legal compliance, such as anti-money laundering.[24]
- Individuals in focus: The prohibition applies only to individual risk assessments and predictions of natural persons and does not cover systems that profile legal entities like companies or NGOs.[25]
Facial recognition scraping (Art. 5(1)(e))
The untargeted scraping of biometric data (e.g., images from the internet or CCTV) to create or expand facial recognition databases without consent is prohibited.[26]
Key insights from the Draft Guidelines:
- Facial recognition database: The database used for facial recognition may be temporary, centralized, or decentralized. Importantly, the database does not need to be solely used for facial recognition; it is sufficient that the database can be used for facial recognition.[27]
- Targeted scraping exclusion: The prohibition does not extend to targeted scraping of specific individuals or groups who have consented; scraping images of individuals who have explicitly given their consent, for instance, is outside its scope.[28]
- Other biometric data: The prohibition does not apply to the scraping of other types of biometric data (e.g., voice samples) for similar purposes.[29]
Emotion recognition in the workplace or education (Art. 5(1)(f))
Using AI to infer emotions or intentions in sensitive settings such as workplaces or schools is prohibited, with exceptions only for specific medical or safety reasons.[30] However, using AI for general monitoring of employee stress levels to predict productivity, for example, could fall under the prohibition rather than under the health or safety exceptions.[31]
Key insights from the Draft Guidelines:
- Broad definition of emotion recognition: The Act defines emotion recognition in a wide sense, covering any AI system that infers emotions or intentions from facial expressions, body movements, or other biometric signals.[32]
- Exceptions for medical and safety purposes: Emotion recognition is allowed under exceptions for medical and safety purposes. For example, AI systems used to detect anxiety levels in workers operating dangerous machinery, to assist employees or students with autism, or to improve accessibility for those who are blind or deaf are permitted under these exceptions.[33]
- Focus on employees and students, not customers: The prohibition is aimed at AI systems used on employees or students in workplace and educational settings. Systems targeting customers are not covered by this prohibition.[34]
Biometric categorization for sensitive data (Art. 5(1)(g))
AI systems that categorize individuals based on sensitive characteristics, such as race, political opinions, or union membership, are prohibited, except when used for law enforcement purposes.[35]
Key insights from the Draft Guidelines:
- Categorization vs. identification: The key distinction is that this prohibition concerns not identifying an individual or verifying their identity, but assigning an individual to a particular category. For example, an advertising display may show different adverts based on characteristics such as the age or gender of the viewer, inferred from biometric data like facial features, without being linked to identity verification.[36]
- Discrimination: Categorizing people into certain groups based on biometric data without their consent can lead to discriminatory practices and is considered harmful. This is a key reason for the prohibition.[37]
- Individual categorization: The prohibition only applies when people are “individually” categorized.[38]
Real-time remote biometric identification (Art. 5(1)(h))
The use of real-time remote biometric identification (RBI) systems, such as facial recognition in public spaces for general law enforcement purposes, is prohibited except in narrowly defined cases, such as the targeted search for individuals who have been abducted or trafficked or the prevention of imminent threats.[39]
Key insights from the Draft Guidelines:
- National legislation requirement: Use of real-time RBI systems for the exceptions listed above must be authorized by national legislation. Without such legislation in place, law enforcement authorities and entities acting on their behalf are prohibited from deploying these systems.[40]
- Exclusion of biometric verification systems: The prohibition does not apply to biometric verification or authentication systems, which are used to verify individuals’ identities (e.g., scanning a passport photo at an e-gate). Such systems fall outside the scope of this prohibition.[41]
- Exclusion of retrospective use: Retrospective (non-real-time) use of RBI systems for law enforcement purposes is not prohibited but is subject to the additional safeguards applicable to high-risk AI systems.[42]
A Critical Step in AI Regulation: What’s Next?
For businesses involved in AI system deployment, these guidelines make it clear: compliance with the AI Act goes beyond just avoiding prohibited practices. Companies must actively assess and mitigate the risks that their AI systems might present to privacy, fairness, and safety.
The AI Act’s timeline for implementation includes significant milestones, such as the designation of national supervisory authorities by August 2025 and the commencement of compliance obligations for high-risk AI systems by August 2027.[43] These dates are critical for companies to prepare their AI systems and processes in line with EU regulations.
While these guidelines are currently in draft form, they will shape the EU’s regulatory approach moving forward. The formal adoption process, expected in the coming months, will further clarify the EU’s stance on AI practices. As the regulatory environment evolves, companies will need to stay informed and adjust their strategies to ensure ongoing compliance with these new requirements.
[1] European Commission. Draft Guidelines on Prohibited Artificial Intelligence Practices Established by Regulation (EU) 2024/1689 (AI Act). 4 Feb. 2025, C(2025) 884 final, Brussels, https://ec.europa.eu/newsroom/dae/redirection/document/112367 (hereinafter: EU Draft Guidelines on Prohibited AI Practices).
[2] European Parliament and Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts (Artificial Intelligence Act). Official Journal of the European Union, 2024, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689 (hereinafter: AI Act).
[3] European Commission. First Rules of the Artificial Intelligence Act Are Now Applicable. Digital Strategy, 3 Feb. 2025, https://digital-strategy.ec.europa.eu/en/news/first-rules-artificial-intelligence-act-are-now-applicable.
[4] European Commission. Commission Publishes the Guidelines on Prohibited Artificial Intelligence (AI) Practices, as Defined by the AI Act. Digital Strategy, 4 Feb. 2025, https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act.
[5] AI Act.
[6] Tobey, Danny, et al. European Commission Publishes Guidelines on Prohibited AI Practices: Navigating the EU’s AI Act: Key Guidelines on Prohibited AI Practices. DLA Piper, 5 Feb. 2025, https://www.dlapiper.com/en/insights/publications/ai-outlook/2025/european-commission-publishes-guidelines-on-prohibited-ai-practices (hereinafter: Tobey et al., 2025).
[7] EU Draft Guidelines on Prohibited AI Practices, ¶134.
[8] Ibid., ¶¶ 69, 73, 76.
[9] Ibid., ¶80.
[10] Ibid., ¶71.
[11] Agostini, Aurora, et al. The Commission’s Guidelines on Prohibited Artificial Intelligence Practices: General Analysis and Privacy Aspects. Lexia, 5 Feb. 2025, https://www.lexia.it/en/2025/02/05/guidelines-ai-prohibited-ai/ (hereinafter: Agostini et al., 2025).
[12] Tobey et al., 2025.
[13] EU Draft Guidelines on Prohibited AI Practices, ¶102.
[14] Ibid., ¶103.
[15] Ibid., ¶111.
[16] Ibid., ¶112.
[17] Tobey et al., 2025.
[18] EU Draft Guidelines on Prohibited AI Practices, ¶151.
[19] Ibid., ¶155.
[20] Ibid., ¶162.
[21] Ibid., ¶¶184-185, 197-198.
[22] Ibid., ¶185.
[23] Ibid., ¶¶207-208.
[24] Ibid., ¶209.
[25] Ibid., ¶215.
[26] Agostini et al., 2025.
[27] EU Draft Guidelines on Prohibited AI Practices, ¶226.
[28] Ibid., ¶¶228-229.
[29] Ibid., ¶234.
[30] Agostini et al., 2025.
[31] EU Draft Guidelines on Prohibited AI Practices, ¶257.
[32] Ibid., ¶¶245, 247.
[33] Ibid., ¶263.
[34] Ibid., ¶254.
[35] Tobey et al., 2025.
[36] EU Draft Guidelines on Prohibited AI Practices, ¶276.
[37] Ibid., ¶276.
[38] Ibid., ¶¶281-282.
[39] Tobey et al., 2025.
[40] EU Draft Guidelines on Prohibited AI Practices, ¶290.
[41] Ibid., ¶303.
[42] Ibid., ¶427.
[43] Agostini et al., 2025.