How We Engineered IAMONES for EU AI Act Compliance
An Introduction to Our Approach
With the adoption of the EU's Artificial Intelligence Act in 2024, a new standard for AI regulation has been set.
From the very beginning of our journey in building IAMONES, our goal was not just to innovate, but to do so responsibly. This meant engineering our platform with a "privacy-by-design and compliance-by-default" philosophy.
Under the AI Act, we are classified as a deployer of AI systems, meaning we integrate leading Large Language Models (LLMs) into our services. This role comes with clear responsibilities, particularly under Article 26 (human oversight) and Article 50 (transparency), which we have taken to heart.
This article provides an inside look at the deliberate architectural decisions we made to ensure IAMONES has a low-to-limited risk profile, allowing our clients to adopt Conversational IGA with confidence.
Our Foundational Principles for Compliance
Our compliance posture is built on several key decisions about what we do, and just as importantly, what we choose not to do.
- A Core Principle: We Do Not Train Models on Customer Data.
This is a critical distinction for us. The reasoning engines within IAMONES operate purely on inference. We do not perform any form of automatic or "silent" learning from user actions or data. This means your information is never used to train or modify our models. This was a fundamental choice to guarantee that our outputs are shaped by controlled, secure prompts, not by ongoing data capture.
- How We Protect Your Data: Programmatic PII Stripping.
We built a non-negotiable safeguard into our data flow: before any information is processed by an external LLM, all Personally Identifiable Information (PII) is programmatically stripped. This ensures that sensitive data, such as names or email addresses, never leaves your enterprise boundary in its raw form. For example, a dataset like { "name": "John", "surname": "Doe", "entitlement": "Admin Access"} is instantly transformed into { "name": [NAME], “surname”: [SURNAME], "entitlement": "Admin Access"} before the LLM sees it.
- Keeping Humans in Control: Our Approach to Oversight.
We firmly believe that AI should assist, not replace, human judgment. That’s why we designed IAMONES to treat all AI-generated output as strictly advisory. Any action that could trigger a change in an IGA system—like revoking a permission—is always subject to human-in-the-loop validation before execution. For critical functions, we intentionally pause the process at defined checkpoints to require manual inspection and explicit approval.
The Security Layers We've Built
To support our compliance principles, we have implemented multiple layers of security and governance:
- An Integrated LLM Firewall: We developed a semantic firewall that intercepts all input queries. It’s designed to block malicious prompt injections, privilege escalation attempts, and inappropriate content before they can be processed.
- Data Residency and Segregation: We support EU data residency for all LLM interactions and provide each client with their own dedicated Temporal Identity Graph (TIG). This guarantees that your IGA data is never shared or exposed across tenants.
- Strict, Granular Access Controls: Our platform translates natural language visibility policies into strict Row-Level Security (RLS) policies at the database level, ensuring users can only see the data they are explicitly authorized to view.
Our Ongoing Commitment
Compliance isn't a one-time checklist. It's an ongoing commitment.
We conduct annual AI compliance reviews and continuously expand our documentation to ensure we not only meet but exceed regulatory standards as they evolve. This is our promise to our customers: to be a trusted partner in their journey toward a secure and compliant AI-powered future.