
EU AI Act Article 6 — A Developer's Checklist for High-Risk AI Systems
I pushed a feature last month that uses GPT-4 to rank job applicants. Two weeks later I found out that's textbook "high-risk" under EU AI Act Article 6. The regulation defines exactly what counts as high-risk, but the legal text is 144 pages of cross-references, so I wrote three Python scripts that do the checking for me. Here's the walkthrough.

## What Article 6 defines

Article 6 creates two paths to "high-risk":

**Path 1 (Annex I):** Your AI is a safety component inside a product covered by EU harmonised legislation (medical devices, vehicles, machinery, toys). If that product needs a third-party conformity assessment, your AI is high-risk automatically.

**Path 2 (Annex III):** Your AI falls into one of eight sensitive categories the EU flagged as inherently risky. This is the path most software developers hit.

The eight Annex III categories, as a keyword map:

```python
ANNEX_III = {
    "biometrics": ["facial recognition", "emotion detection", "voice id"],
    "critical_infrastructure": ["energy grid", "water supply"],
    # ... the remaining six: education, employment, access to essential
    # services, law enforcement, migration/border control, and
    # administration of justice
}
```
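The scripts themselves aren't shown past this point, so here's a minimal self-contained sketch of what the Annex III check might look like: a keyword map covering all eight categories (the keyword lists are my illustrative guesses, not the Act's wording, and real classification needs legal review) plus a matcher that flags which categories a feature description touches.

```python
# Illustrative keyword map for the eight Annex III categories.
# The category names track the Act; the keywords are hypothetical examples.
ANNEX_III = {
    "biometrics": ["facial recognition", "emotion detection", "voice id"],
    "critical_infrastructure": ["energy grid", "water supply"],
    "education": ["exam scoring", "admissions ranking"],
    "employment": ["rank job applicants", "cv screening", "promotion decision"],
    "essential_services": ["credit scoring", "benefits eligibility"],
    "law_enforcement": ["recidivism prediction", "evidence reliability"],
    "migration": ["visa application", "asylum claim"],
    "justice": ["judicial decision support"],
}


def annex_iii_matches(feature_description: str) -> list[str]:
    """Return the Annex III categories whose keywords appear in the description."""
    text = feature_description.lower()
    return [
        category
        for category, keywords in ANNEX_III.items()
        if any(keyword in text for keyword in keywords)
    ]


print(annex_iii_matches("Use GPT-4 to rank job applicants by CV"))
# -> ['employment']
```

A substring match like this is only a tripwire for "go read Annex III carefully", not a compliance verdict; the value is that it runs in CI and fails loudly before a feature ships, which is exactly the check I was missing.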



