
The Human Gap in the AI Security Framework Landscape
The map was incomplete. And no one had noticed.
When I came across the AI Security Framework Landscape published by H. Sheikh, I stopped for a few minutes. It is serious work: it clearly organizes the major security frameworks for AI systems, including NIST AI RMF, the EU AI Act, ISO/IEC 42001, MITRE ATLAS, the OWASP Top 10 for LLMs, Google SAIF, Microsoft's Responsible AI Standard, and CISA Secure by Design.
Governance. Risk. Compliance. Threat. Model Security. Supply Chain.
Everything is there. Well structured. Visually impeccable.
And yet, something was missing.
Let me ask you a question.
Imagine your company's AI system causes a personal data breach. The data of 40,000 customers is exposed. The Data Protection Authority knocks on your door.
You open the NIST AI RMF. It tells you how to identify, measure, and monitor risks. Correct.
You open the EU AI Act. It tells you which risk categories apply to your system. Correct.
You open ISO/IEC 42001. It tells you how to govern the system's lifecycle. Correct.
But none of them answer the question the Authority will ask first:
"What did the human do — or fail to do — for this to happen?"
And more importantly: "Was it an error? A conscious risk decision? Negligence? Or sabotage?"
That question is not in any of the frameworks on the map.
This is exactly where SHELL-Privacy™ and MEDA-Privacy™ come in.
SHELL-Privacy™ is a systemic privacy incident analysis framework that maps failures across 5 interfaces: Software (systems, algorithms, configurations), Hardware (physical and digital infrastructure), Environment (organizational culture and work environment), Liveware (the individual human), and Liveware-Org (the relationship between the human and the organization).
When an AI system causes a privacy incident, SHELL doesn't ask "Who made the mistake?" It asks, "At which interface did the system fail, and why?"
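To make the five interfaces concrete, here is a minimal sketch of how a SHELL-Privacy™ analysis record might be modeled in code. The names and the example incident mapping are my own illustrative assumptions, not an official schema from the framework.

```python
from enum import Enum

class ShellInterface(Enum):
    """The five SHELL-Privacy interfaces (illustrative labels)."""
    SOFTWARE = "systems, algorithms, configurations"
    HARDWARE = "physical and digital infrastructure"
    ENVIRONMENT = "organizational culture and work environment"
    LIVEWARE = "the individual human"
    LIVEWARE_ORG = "the human-organization relationship"

# Hypothetical findings for a single breach: one incident usually
# fails at several interfaces, so the analysis records all of them.
incident_findings = {
    "access control left open on the inference API": ShellInterface.SOFTWARE,
    "no peer review required for production deploys": ShellInterface.LIVEWARE_ORG,
    "chronic deadline pressure on the ML team": ShellInterface.ENVIRONMENT,
}

for failure, interface in incident_findings.items():
    print(f"{interface.name}: {failure}")
```

The design point is that blame is never a single field: an incident is a set of interface failures, not one culprit.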
MEDA-Privacy™ goes one step further. Inspired by aviation accident investigation methodology, it classifies human behavior into 4 categories: unintentional error, conscious risk decision, negligence, and sabotage. Each classification leads to a proportional and fair response — not automatic punishment.
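In code, the MEDA-Privacy™ classification step might look like the sketch below. The category names follow the four classes above; the response strings are hypothetical placeholders for what a proportional response could be, not the framework's actual prescriptions.

```python
from enum import Enum, auto

class MedaBehavior(Enum):
    """The four MEDA-Privacy behavior classes."""
    UNINTENTIONAL_ERROR = auto()
    CONSCIOUS_RISK_DECISION = auto()
    NEGLIGENCE = auto()
    SABOTAGE = auto()

# Illustrative proportional responses; the real framework's
# guidance is richer than these one-line labels.
PROPORTIONAL_RESPONSE = {
    MedaBehavior.UNINTENTIONAL_ERROR: "fix the system and retrain; no blame",
    MedaBehavior.CONSCIOUS_RISK_DECISION: "review the incentives that made the risk seem acceptable",
    MedaBehavior.NEGLIGENCE: "individual accountability plus stronger process controls",
    MedaBehavior.SABOTAGE: "disciplinary and legal response",
}

def respond(behavior: MedaBehavior) -> str:
    """Return the proportional response for a classified behavior."""
    return PROPORTIONAL_RESPONSE[behavior]

print(respond(MedaBehavior.CONSCIOUS_RISK_DECISION))
```

The key choice here is that the response is a function of the classification, never an automatic default.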
Why does this matter specifically for AI?
Because AI systems amplify the human factor — they don't eliminate it.
A language model trained on biased data reflects the human decisions of those who selected the data. A facial recognition system that discriminates was configured by humans with human parameters. A chatbot that leaks confidential information was deployed by a team that made human decisions about access controls.
AI doesn't act alone. It executes, at scale, the intentions and errors of those who built, trained, and deployed it.
No AI security framework addresses this without a layer of human factor analysis.
SHELL-Privacy™ and MEDA-Privacy™ do not compete with the frameworks in H. Sheikh's map.
They complement them.
While the NIST AI RMF governs system risk, SHELL analyzes the incident when that risk materializes. While the EU AI Act categorizes regulatory liability, MEDA classifies the human behavior that generated the breach. While ISO/IEC 42001 structures governance, SHELL maps where governance failed in practice.
They are complementary layers. And the human layer was the one missing from the map.
Discover the frameworks:
🌐 www.shellprivacy.com
📚 Books: https://amzn.to/4ob03qY
📺 Channel: youtube.com/@SHELLPrivacy
Be inspired and fly.
Read the original on LinkedIn
About how this content was produced
It would be inconsistent to advocate for the ethical and responsible use of artificial intelligence without practicing it. So I am transparent: this article was developed with the support of Manus as an AI assistant. The entire process was led by me: I chose the topic, defined the angle, identified the sources to be consulted, reviewed each version, and rewrote the sections that did not accurately reflect what I intended to communicate. The AI handled the research, organization, and drafting. I handled what no model can do on its own: assessing what is technically accurate, what is relevant to the reader, and what is faithful to my professional experience in the field. That is how I understand the role of AI: not as a replacement for the expert, but as an amplifier of what the expert already knows.
Anderson Andrade · DPO · Author · Founder of SHELL-Privacy™ & MEDA-Privacy™ · www.shellprivacy.com