
High-risk triggers identified under EU AI Act Annex III — Biometric Identification & Categorization
DeepFace exposes a real-time face recognition pipeline via `DeepFace.stream()` and `DeepFace.find()`. These functions can perform live biometric identification of natural persons. Remote biometric identification systems are high-risk under Annex III §1(a), and real-time use in publicly accessible spaces for law enforcement purposes is prohibited under Article 5 absent narrowly defined exemptions.
Implement mandatory human-in-the-loop override gate before any identification result is acted upon. Add Article 14 oversight logging.
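A minimal sketch of such a gate, written as a plain-Python wrapper (the `HumanReviewGate` class and its method names are illustrative assumptions, not part of DeepFace): candidate matches extracted from `DeepFace.find()` results are held in a pending queue and released downstream only after a named operator records an explicit approve/reject decision, which is also written to an oversight trail.

```python
import datetime

class HumanReviewGate:
    """Hold biometric identification results until a human operator signs off.

    Illustrative sketch only: this class is not part of DeepFace. `matches`
    is whatever candidate list the caller extracted from DeepFace.find().
    """

    def __init__(self):
        self.pending = {}    # case_id -> candidate matches awaiting review
        self.audit_log = []  # Article 14 oversight trail

    def submit(self, case_id, matches):
        # No identification result is acted on at this point.
        self.pending[case_id] = matches

    def decide(self, case_id, operator_id, approved):
        matches = self.pending.pop(case_id)
        self.audit_log.append({
            "case_id": case_id,
            "operator": operator_id,
            "approved": approved,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        # Only an explicit human approval releases the matches downstream.
        return matches if approved else None

gate = HumanReviewGate()
gate.submit("case-001", ["candidate_a", "candidate_b"])
released = gate.decide("case-001", operator_id="op-42", approved=False)
# A rejection means the system output is disregarded entirely (released is None).
```

The key design point is that `decide()` is the only path out of the queue, so no integration can consume an identification result without a logged human decision attached to it.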
The `analyze()` function explicitly categorizes individuals by race, gender, age, and emotion from facial images. Categorization by race and emotion from biometric data is a high-risk use case under Annex III §1(b) and triggers mandatory conformity assessment.
Disable race and emotion classification by default. Require explicit opt-in with documented legal basis under GDPR Art. 9 and EU AI Act Art. 10.
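One way to enforce that default is a policy filter placed in front of `DeepFace.analyze()`. In the sketch below, only the `actions` tuple mirrors DeepFace's actual API; the filter function and its `legal_basis` parameter are illustrative assumptions:

```python
SENSITIVE_ACTIONS = {"race", "emotion"}  # high-risk under Annex III / GDPR Art. 9

def filter_analyze_actions(requested, opt_in=False, legal_basis=None):
    """Strip race/emotion from analyze() actions unless explicitly opted in.

    Illustrative policy layer, not part of DeepFace. With opt_in=True, a
    documented legal basis (e.g. a GDPR Art. 9(2) ground) must be supplied.
    """
    requested = list(requested)
    if opt_in:
        if not legal_basis:
            raise ValueError("Sensitive classification requires a documented legal basis")
        return requested
    # Default path: sensitive classifiers are silently dropped.
    return [a for a in requested if a not in SENSITIVE_ACTIONS]

safe_actions = filter_analyze_actions(("emotion", "age", "gender", "race"))
# safe_actions == ["age", "gender"]
```

A caller would then pass the filtered list through, e.g. `DeepFace.analyze(img_path, actions=safe_actions)`, so race and emotion models are never even loaded unless the opt-in gate has been satisfied.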
The verification and recognition APIs can be integrated into HR screening pipelines. No guardrails prevent deployment in employment contexts. Annex III §4 classifies AI systems used for recruitment, promotion, or termination decisions as high-risk.
Add deployment context detection and block or warn when integrated into HR/ATS pipelines. Require Annex IV technical documentation for such deployments.
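Context detection could be as simple as checking a deployer-declared purpose string at initialisation. The sketch below is an assumption throughout: DeepFace has no such hook today, and the function, keyword list, and `block` flag are all illustrative.

```python
import warnings

HIGH_RISK_CONTEXTS = {"recruitment", "hr", "ats", "promotion", "termination", "screening"}

def check_deployment_context(declared_purpose, block=False):
    """Warn (or refuse to start) when the declared purpose matches Annex III §4.

    Illustrative sketch: assumes the integrator declares a purpose string at
    initialisation; returns True when the context looks unregulated.
    """
    tokens = set(declared_purpose.lower().replace("/", " ").split())
    hits = tokens & HIGH_RISK_CONTEXTS
    if hits:
        msg = (f"Declared purpose matches high-risk employment context {sorted(hits)}; "
               "Annex IV technical documentation is required before deployment.")
        if block:
            raise RuntimeError(msg)
        warnings.warn(msg)
    return not hits

check_deployment_context("marketing demo kiosk")        # passes silently
check_deployment_context("candidate screening in ATS")  # emits a warning
```

Keyword matching is obviously evadable; the point of the sketch is to force an explicit, logged declaration of purpose, which is itself useful evidence in an Annex IV documentation package.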
Article 12 (Logging) & Article 14 (Human Oversight) mechanism scan
EU AI Act Article 12 requires high-risk AI systems to automatically generate logs of every inference event, including input data characteristics, output decisions, timestamps, and operator identity. DeepFace has no native logging layer.
“Art. 12(1): Logging must capture at minimum — date/time, input data reference, output, operator ID, and system version.”
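A retrofit along these lines can be sketched as a decorator that records the minimum fields for every inference call. Everything below is an illustrative assumption (the decorator, log structure, and version string are not DeepFace APIs); note that the input is referenced by hash so no raw biometric data lands in the log itself.

```python
import datetime
import functools
import hashlib
import json

INFERENCE_LOG = []
SYSTEM_VERSION = "deepface-x.y.z"  # illustrative placeholder version pin

def art12_logged(operator_id):
    """Decorator sketch: records date/time, input reference, output, operator
    identity, and system version for every inference call. Not part of DeepFace."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(img_ref, *args, **kwargs):
            result = fn(img_ref, *args, **kwargs)
            INFERENCE_LOG.append({
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                # Hash of the input reference keeps biometric data out of logs.
                "input_ref": hashlib.sha256(str(img_ref).encode()).hexdigest()[:16],
                "output": json.dumps(result, default=str),
                "operator": operator_id,
                "system_version": SYSTEM_VERSION,
            })
            return result
        return inner
    return wrap

@art12_logged(operator_id="op-42")
def fake_verify(img_ref):
    # Stand-in for a DeepFace.verify() call, returning a DeepFace-style dict.
    return {"verified": True, "distance": 0.31, "threshold": 0.40}

fake_verify("img/probe_001.jpg")
```

In a real deployment the same decorator would wrap the actual `DeepFace.verify`/`find` call sites, and `INFERENCE_LOG` would be an append-only store rather than an in-memory list.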
Article 14 mandates that high-risk AI systems be designed to allow human oversight, including the ability to override, interrupt, or disregard system outputs. DeepFace provides no override interface, confidence threshold gates, or human-in-the-loop controls.
“Art. 14(4): Operators must be able to decide not to use the AI system output in any given situation.”
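A confidence-threshold gate that routes borderline results to a human reviewer could look like the sketch below. The `distance` and `threshold` keys are real fields in `DeepFace.verify()` output; the routing policy, function name, and margin value are assumptions.

```python
def route_verification(result, review_margin=0.05):
    """Route a DeepFace.verify()-style result dict by its confidence margin.

    Illustrative sketch. Returns "auto_accept", "human_review", or
    "auto_reject"; anything near the decision boundary goes to a human.
    """
    margin = result["threshold"] - result["distance"]
    if margin > review_margin:
        return "auto_accept"
    if margin < -review_margin:
        return "auto_reject"
    # Borderline case: per Art. 14(4), a human decides whether to use the output.
    return "human_review"

route_verification({"distance": 0.10, "threshold": 0.40})  # "auto_accept"
route_verification({"distance": 0.38, "threshold": 0.40})  # "human_review"
```

Pairing this with an override queue means the system never silently acts on a near-threshold match, which is exactly the class of output most likely to be wrong.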
Article 13 requires high-risk AI systems to be sufficiently transparent so that deployers can interpret outputs correctly. DeepFace provides model accuracy metrics in documentation but lacks per-inference explainability or uncertainty quantification.
“Art. 13(3)(b): Instructions for use must include performance metrics in the specific deployment context.”
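Per-inference transparency can be approximated today by surfacing the distance-to-threshold margin alongside each decision. In this sketch, only `distance`, `threshold`, and `model` are actual `DeepFace.verify()` output keys; the record format and caveat wording are assumptions.

```python
def transparency_record(result, low_confidence_margin=0.05):
    """Attach a plain-language interpretation to a DeepFace.verify()-style
    result dict. Illustrative sketch, not a DeepFace API."""
    margin = result["threshold"] - result["distance"]
    return {
        "model": result.get("model", "unknown"),
        "decision": "match" if result["distance"] <= result["threshold"] else "no match",
        "margin": round(margin, 4),
        "caveat": (
            "Margin is small; treat this decision as low-confidence and "
            "confirm against deployment-context accuracy figures."
            if abs(margin) < low_confidence_margin
            else "Margin is comfortable relative to the model's decision threshold."
        ),
    }

transparency_record({"verified": True, "distance": 0.31, "threshold": 0.40, "model": "VGG-Face"})
```

This does not make the underlying embedding model explainable, but it gives deployers the per-decision uncertainty signal that Art. 13's instructions-for-use requirement presupposes.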
Sensitive attribute profiling flags: Race, Emotion, Gender
The race classifier outputs a six-class ethnicity probability distribution from facial images. Processing racial origin from biometric data constitutes special category data under GDPR Art. 9 and triggers mandatory data governance controls under EU AI Act Art. 10.
Emotion recognition from facial images is flagged in EU AI Act Recital 44 as a high-risk practice. The model infers internal psychological states from biometric data — a practice with documented scientific validity concerns and significant potential for discriminatory misuse.
Binary gender classification from facial features raises Art. 10 data quality concerns. The model uses a binary classification schema that does not reflect the full spectrum of gender identity, creating systematic accuracy disparities for non-binary individuals.
Final summary — DeepFace EU AI Act readiness assessment
Or 7% of total worldwide annual turnover — whichever is higher. Applies to prohibited-practice violations under EU AI Act Article 5; non-compliance with the high-risk requirements of Articles 8–15 carries a lower cap of €15 million or 3% of turnover.
Full EU AI Act enforcement for high-risk AI systems begins August 2, 2026. Deployers of systems like DeepFace in regulated contexts must achieve full compliance before this date or face immediate enforcement action.
This sample report demonstrates what a real Sentry 48 audit delivers. Our Scout Agent reads your actual repository — file trees, configs, dependency manifests, and documentation — to generate a legally defensible compliance verdict in 48 hours.