Operationalizing in Modulos
FEAT is principle-based. The work is to translate principles into controls and monitoring signals that can be audited.
Recommended project structure
Most organizations use:
- One organization project for governance foundations (shared control library, templates, review cadence).
- AI system projects for product/deployment execution (fairness tests, evidence, approvals, monitoring).
Where in Modulos
- Project → Controls for fairness, oversight, and transparency measures
- Project → Testing for evaluation signals and history
- Project → Evidence for methodology and approvals
- Project → Requirements for compliance tracking
- Project → Risks for treatment decisions and residual risk acceptance
A sequence that works
1. Define what fairness means: choose metrics and thresholds that reflect your use case (illustrated in the sketch after this list).
2. Attach controls and tests: create controls for governance and tests for monitoring signals.
3. Link evidence: attach methodology, results, and approvals to controls.
4. Review and remediate: use review flows to approve decisions and track remediation.
5. Re-test and report: repeat as the system changes and export packages for audits.
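As an illustration of step 1, the minimal sketch below computes a demographic parity difference and compares it with a chosen threshold. The metric choice, the 0.10 tolerance, and the column names are assumptions for the example, not Modulos defaults.

```python
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Absolute gap in positive-outcome rates between the best- and worst-treated groups."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decision log: one row per decision, with the protected attribute
# and the model's binary outcome.
decisions = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B"],
    "approved": [1,   0,   1,   1,   0],
})

THRESHOLD = 0.10  # assumed tolerance; set it per use case and document the rationale

gap = demographic_parity_difference(decisions, "group", "approved")
print(f"demographic parity difference = {gap:.2f}")
print("within threshold" if gap <= THRESHOLD else "exceeds threshold: trigger review")
```

Whatever metric you pick, record the definition and the threshold alongside the control so later reviews can see why the number was chosen.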
Evidence, tests, and remediation (diagrams)
Make FEAT auditable by linking controls, evidence, and testing signals.
The diagram shows how framework requirements map to reusable controls and, through control components, to evidence:

- Frameworks: EU AI Act (regulatory), ISO 42001 (standard)
- Requirements: Art. 9.1 Risk management, Art. 10.2 Data governance, 6.1.1 Risk assessment
- Controls: Risk assessment process (reusable), Data validation checks (reusable)
- Components: Risk identification, Impact analysis
- Evidence: Risk register (document), Test results (artifact)

Requirements preserve the source structure, controls are reusable across frameworks, and evidence attaches to components (sub-claims).
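A minimal sketch of that linkage as plain data structures can make the relationships concrete; the class names and fields below are illustrative assumptions, not the Modulos data model or API.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    name: str
    kind: str          # e.g. "document" or "artifact"

@dataclass
class Component:
    name: str          # the smallest claim a piece of evidence supports
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Control:
    control_id: str
    title: str
    components: list[Component] = field(default_factory=list)

@dataclass
class Requirement:
    ref: str           # keeps the source framework's own numbering
    title: str
    controls: list[Control] = field(default_factory=list)

# One reusable control satisfies requirements from two frameworks.
risk_control = Control("CTRL-001", "Risk assessment process", [
    Component("Risk identification", [Evidence("Risk register", "document")]),
    Component("Impact analysis", [Evidence("Test results", "artifact")]),
])

eu_ai_act_req = Requirement("Art. 9.1", "Risk management", [risk_control])
iso_42001_req = Requirement("6.1.1", "Risk assessment", [risk_control])
```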
Scheduled run
Each test runs on a schedule (e.g., daily) and evaluates as follows:
1. Fetch the latest datapoint.
2. Check metric < threshold.
3. Emit a result: Passed, Failed, or Error.

Tests evaluate the most recent signal available in the window.
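A minimal sketch of one such evaluation, assuming the latest datapoint is pulled from a monitoring store by a caller-supplied function; the names and wiring are placeholders, not a Modulos API.

```python
from enum import Enum

class Result(Enum):
    PASSED = "passed"
    FAILED = "failed"
    ERROR = "error"

def evaluate(fetch_latest, threshold: float) -> Result:
    """One scheduled run: fetch the newest datapoint in the window and compare it."""
    try:
        value = fetch_latest()          # placeholder: query your monitoring store
        if value is None:               # no datapoint available in the window
            return Result.ERROR
        return Result.PASSED if value < threshold else Result.FAILED
    except Exception:
        return Result.ERROR             # fetch or comparison failed

# Example: latest observed fairness gap vs. the documented threshold.
print(evaluate(lambda: 0.08, threshold=0.10))   # Result.PASSED
```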
Continuous remediation
1. Detect: a failed or error result.
2. Triage: data issue vs. real drift.
3. Fix: change the system or the control implementation.
4. Record: update evidence and the audit trail.
5. Re-verify: re-run the test or monitor.
When tests are linked to controls, failures route to control owners and keep governance aligned with reality.
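That routing can be pictured as a small dispatch step; the owner registry and notification callback below are assumptions for illustration only.

```python
# Illustrative only: route a failed test result to the owner of the linked control.
OWNERS = {"CTRL-001": "fairness-lead@example.com"}   # assumed owner registry

def route_failure(test_name: str, control_id: str, result: str, notify) -> None:
    """Open a remediation task with the control owner when a linked test fails."""
    if result not in ("failed", "error"):
        return                                        # nothing to remediate
    owner = OWNERS.get(control_id, "governance-team@example.com")
    notify(owner, f"{test_name} on {control_id} returned {result}: triage data issue vs. drift")

route_failure("fairness_gap_daily", "CTRL-001", "failed",
              notify=lambda to, msg: print(to, msg))
```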
The diagram links evidence to control components and controls: a single file, model_validation.pdf, is attached to components A through E, which belong to CTRL-001 Model Validation and CTRL-002 Data Quality. The same evidence is reused across both controls. Attach evidence to the smallest meaningful claim.
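A tiny self-contained sketch, using hypothetical dictionaries rather than the Modulos data model, shows what reuse means in practice: several components referencing the same underlying evidence record.

```python
# Illustrative only: one evidence record referenced from components of two controls.
evidence = {"file": "model_validation.pdf", "kind": "document"}

controls = {
    "CTRL-001 Model Validation": {"Component A": [evidence], "Component B": [evidence]},
    "CTRL-002 Data Quality":     {"Component D": [evidence]},
}

# Every component above points at the same underlying evidence record.
assert all(evidence in attachments
           for components in controls.values()
           for attachments in components.values())
```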
Exports for audit and stakeholders (diagram)
Use exports to create point-in-time snapshots of controls and supporting evidence.
The audit pack bundles:
- Project PDF export
- Top controls (PDF exports)
- Evidence files (attachments)
- Key assets (Markdown exports)
Exports are snapshots. Keep scope stable before exporting.
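As an illustration of a point-in-time snapshot, the sketch below bundles the pieces listed above into a single archive; the file names and flat layout are assumptions, not the Modulos export format.

```python
import zipfile
from pathlib import Path

def build_audit_pack(output: Path, files: list[Path]) -> None:
    """Bundle exported PDFs, evidence attachments, and Markdown assets into one archive."""
    with zipfile.ZipFile(output, "w", zipfile.ZIP_DEFLATED) as pack:
        for f in files:
            if f.exists():                      # sketch: skip anything not exported yet
                pack.write(f, arcname=f.name)   # flat, stable layout for reviewers

build_audit_pack(Path("audit_pack.zip"), [
    Path("project_export.pdf"),             # project PDF export
    Path("CTRL-001_model_validation.pdf"),  # top control export
    Path("model_validation.pdf"),           # evidence attachment
    Path("key_assets.md"),                  # Markdown export of key assets
])
```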
Common pitfalls
- Adopting FEAT as policy text without owners, gates, and evidence.
- Running fairness evaluations once instead of on a cadence and on change.
- Collecting results in files without linking them to controls and approvals.
- Changing data, model, or population without triggering re-review.
Related pages
- Principles: understand what to implement under Fairness, Ethics, Accountability, and Transparency.
- Testing operating model: make evaluations a repeatable governance loop.
- Risk quantification: prioritize treatment and investment with monetary outputs.
Disclaimer
This page is for general informational purposes and does not constitute legal advice.