Sep 2024 · €30.5M fine

By Karim El Labban · ZERO|TOLERANCE

EU GDPR · October 20, 2022 · 10 min read

# Clearview AI Fined EUR 30.5M for Illegal Facial Recognition

Clearview AI, the US-based facial recognition company, was fined EUR 30.5 million by the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) in September 2024 for illegally scraping billions of facial images from the internet and creating biometric identification templates without consent or any valid legal basis. The decision capped a series of European enforcement actions against the company.

The French CNIL imposed a EUR 20 million fine in October 2022, the Italian Garante per la protezione dei dati personali imposed a EUR 20 million fine in March 2022, the Greek Hellenic Data Protection Authority (HDPA) imposed a EUR 20 million fine in July 2022, and the UK Information Commissioner's Office issued a GBP 7.5 million fine in May 2022. Each authority independently found that Clearview AI had no legal basis for its mass biometric data processing and had violated fundamental GDPR principles.

## Key Facts

  • **What:** Clearview AI illegally scraped billions of facial images to build a biometric database.
  • **Who:** Billions of individuals worldwide whose photos appeared on public websites.
  • **Data Exposed:** 30 billion+ facial images, biometric templates, and associated metadata.
  • **Outcome:** Fined EUR 30.5M by the Dutch DPA in September 2024, after earlier fines of EUR 20M each from the French, Italian, and Greek regulators and GBP 7.5M from the UK ICO.

## What Was Exposed

  • Over 30 billion facial images scraped from publicly accessible websites, social media platforms, news sites, and public databases without the knowledge or consent of the individuals depicted
  • Biometric templates--mathematical representations of facial geometry--generated from each scraped image, constituting special category biometric data under Article 9
  • Associated metadata including source URLs, image context, and any publicly available identifying information linked to each facial image
  • Search result profiles generated when law enforcement or private clients uploaded a probe image, linking it to matched identities across the scraped database
  • Geolocation and temporal metadata associated with scraped images, enabling tracking of individuals' movements and activities over time
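The probe-image workflow described above can be illustrated with a minimal sketch. All names and values here are hypothetical: real systems derive templates with trained neural embedding models, whereas this treats a template as a plain vector of floats and uses an arbitrary similarity threshold.

```python
import math

# Illustrative only: a biometric "template" is modeled as a fixed-length
# numeric vector (an embedding) derived from a face image.

def cosine_similarity(a, b):
    """Similarity between two template vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(probe, database, threshold=0.9):
    """Return identities whose stored template matches the probe image's template."""
    return [identity for identity, template in database.items()
            if cosine_similarity(probe, template) >= threshold]

# Toy database: identity -> template (source metadata omitted).
db = {
    "person_a": [0.9, 0.1, 0.2],
    "person_b": [0.1, 0.8, 0.3],
}
matches = search([0.88, 0.12, 0.21], db)  # probe resembles person_a's template
```

The privacy harm is visible even in this toy: once a template exists in the database, any probe image can be linked back to an identity without the subject ever knowing the comparison happened.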

## Regulatory Analysis

The coordinated enforcement actions across multiple EU and EEA jurisdictions represent one of the most significant GDPR cases involving biometric data and the territorial scope of EU data protection law.

Each supervisory authority independently conducted investigations and reached substantially similar conclusions, creating a strong body of precedent on the application of GDPR to mass facial recognition operations conducted from outside the EU.

The fundamental violation across all decisions was the absence of any valid legal basis under Article 6 for Clearview AI's processing of personal data. Clearview AI argued that it relied on legitimate interest under Article 6(1)(f), but every DPA rejected this claim.

The CNIL, in its Deliberation No. SAN-2022-019, concluded that the massive, indiscriminate collection of facial images from the internet could not satisfy the balancing test required by Article 6(1)(f).

The scale of collection (billions of images), the absence of any relationship between Clearview AI and the data subjects, and the inability of individuals to reasonably expect such use of their photographs meant that the data subjects' fundamental rights decisively outweighed any legitimate interest claimed by the company.

More critically, Clearview AI's processing involved biometric data as defined in Article 4(14): personal data resulting from specific technical processing relating to the physical, physiological, or behavioural characteristics of a natural person, which allow or confirm that person's unique identification.

Under Article 9(1), the processing of biometric data for the purpose of uniquely identifying a natural person is prohibited unless one of the exceptions in Article 9(2) applies. Clearview AI could not demonstrate that any Article 9(2) exception was applicable.

The company had no explicit consent (Article 9(2)(a)), the processing was not necessary for employment or social security purposes (Article 9(2)(b)), and it did not serve a substantial public interest under EU or member state law (Article 9(2)(g)).

Article 14 violations were found across all decisions.

Since Clearview AI collected facial images indirectly--from publicly available sources rather than from the data subjects themselves--it was required under Article 14 to provide information to data subjects about the processing, including the identity of the controller, the purposes of processing, and the existence of their rights.

Clearview AI failed entirely to provide this information to any of the billions of individuals whose images it scraped.

The company argued that providing individual notice was impossible given the scale of its operations, but the DPAs rejected this, noting that the impossibility of compliance is not a defense when the processing itself should not have occurred in the first place.

The Italian Garante additionally found violations of Article 5(1)(a) (lawfulness and fairness), Article 5(1)(b) (purpose limitation, as images posted for personal social media purposes were repurposed for commercial facial recognition without any compatible purpose), and Article 5(1)(e) (storage limitation, as Clearview AI retained images and biometric templates indefinitely).

The Greek HDPA's decision emphasized that publicly accessible data does not become "freely available" for any purpose--the fact that an image is posted on a public social media profile does not constitute consent to its use for biometric identification.

On territorial scope, Clearview AI argued that as a US company with no physical presence in the EU, it was not subject to GDPR jurisdiction.

All DPAs rejected this argument under Article 3(2), which extends the GDPR to controllers outside the EU that process personal data of data subjects in the Union, where the processing relates to offering them goods or services or to monitoring their behavior within the Union.

The DPAs found that Clearview AI's database included EU residents' facial images and that its facial recognition service was offered to clients including EU law enforcement agencies, establishing clear territorial jurisdiction.

The CNIL ordered Clearview AI to delete all data on French residents within two months. The company failed to comply, and in 2023 the CNIL imposed an additional overdue penalty of EUR 5.2 million.

## What Should Have Been Done

The straightforward answer is that Clearview AI's business model is fundamentally incompatible with the GDPR. Mass scraping of facial images from the internet to build a biometric identification database cannot be conducted lawfully under European data protection law.

No amount of technical or organizational measures can remedy the absence of a valid legal basis and the inherent violation of biometric data processing restrictions.

Any company contemplating similar operations should recognize that the GDPR's protections for biometric data are effectively a prohibition on this type of processing without explicit consent from each individual.

If a facial recognition service were to operate lawfully under GDPR, it would require explicit consent from each individual whose image and biometric template is stored--Article 9(2)(a) consent that is freely given, specific, informed, and unambiguous, and which explicitly refers to the processing of biometric data.

This is practically impossible for a service premised on scraping publicly available images, which is precisely why the business model fails under GDPR. A compliant approach would involve a voluntary enrollment model where individuals actively upload their images and provide explicit consent for biometric processing.
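Such an enrollment model can be sketched minimally. The class and field names here (`ConsentRecord`, `EnrollmentStore`) are hypothetical; the point is that no template is created or retained without recorded, explicit, biometrics-specific consent, and that withdrawal of consent triggers deletion.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    subject_id: str
    covers_biometrics: bool  # consent must explicitly name biometric processing
    given_at: datetime
    withdrawn: bool = False

@dataclass
class EnrollmentStore:
    templates: dict = field(default_factory=dict)

    def enroll(self, subject_id, template, consent: ConsentRecord):
        # Refuse to store a template unless valid explicit consent exists.
        if consent.subject_id != subject_id:
            raise PermissionError("consent record does not match subject")
        if not consent.covers_biometrics or consent.withdrawn:
            raise PermissionError("no valid explicit consent for biometric processing")
        self.templates[subject_id] = template

    def withdraw(self, subject_id):
        # Withdrawal of consent must lead to deletion of the template.
        self.templates.pop(subject_id, None)

store = EnrollmentStore()
consent = ConsentRecord("alice", covers_biometrics=True,
                        given_at=datetime.now(timezone.utc))
store.enroll("alice", [0.9, 0.1, 0.2], consent)
```

This inverts Clearview AI's model: the individual initiates processing, and the controller can evidence a specific Article 9(2)(a) consent for every stored template.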

For organizations considering using Clearview AI or similar services, the enforcement actions make clear that clients and customers of non-compliant data processors share responsibility.

Any EU law enforcement agency or private company that queried the Clearview AI database participated in the processing of biometric data without a legal basis.

Organizations must conduct thorough due diligence on the GDPR compliance of their technology vendors, particularly those processing special category data.

A Data Protection Impact Assessment under Article 35 should be mandatory before deploying any facial recognition technology, and the results of that DPIA should, in most cases, lead to consultation with the relevant supervisory authority under Article 36.

Technical alternatives exist for organizations with legitimate security or identification needs.

These include on-device facial recognition that does not transmit biometric data to external servers, consent-based biometric systems limited to enrolled individuals, and privacy-preserving approaches such as homomorphic encryption that enable comparison without exposing raw biometric templates.
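The on-device option can be sketched as follows. This is a minimal illustration with assumed names (`Device`, `verify`) and an arbitrary distance threshold, not a production design: the enrolled template stays on the user's device, and only an attested yes/no decision ever crosses the trust boundary to a server.

```python
import hmac
import hashlib

class Device:
    """Hypothetical on-device 1:1 verifier; the template never leaves it."""

    def __init__(self, enrolled_template, device_key: bytes):
        self._template = enrolled_template  # stored locally only
        self._key = device_key              # provisioned per device

    @staticmethod
    def _distance(a, b):
        # Squared Euclidean distance between two template vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def verify(self, live_capture, threshold=0.01):
        ok = self._distance(live_capture, self._template) <= threshold
        # Only the attested boolean decision is ever transmitted;
        # no biometric data reaches the relying server.
        tag = hmac.new(self._key, str(ok).encode(), hashlib.sha256).hexdigest()
        return ok, tag

device = Device([0.9, 0.1, 0.2], device_key=b"per-device-secret")
decision, attestation = device.verify([0.88, 0.12, 0.21])
```

A real implementation would keep the template and key in a secure enclave, but the architectural point survives the simplification: because matching is 1:1 against a single enrolled subject, there is no central searchable database to misuse.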

The EU AI Act (Regulation 2024/1689), which entered into force in August 2024, imposes additional restrictions on real-time biometric identification in public spaces, further reinforcing the direction of European regulatory policy.

Clearview AI's coordinated fines across four jurisdictions establish an unambiguous precedent: mass facial recognition databases built from scraped images have no legal basis under GDPR. The fact that a photograph is publicly visible does not make it available for biometric processing, and any organization using such services shares the regulatory liability.
