Operationalizing in Modulos
The NIST AI RMF becomes actionable when it turns into a repeatable operating model: scope work, execute controls, collect evidence, review decisions, and monitor signals over time.
Recommended project structure
Most organizations use:
- One organization project for AI governance foundations (policies, shared control library, oversight cadence).
- AI system projects for product/deployment governance work where risks, tests, and evidence become system-specific.
Where in Modulos
- Project → Requirements: to track what is fulfilled and what is blocked
- Project → Controls: to execute governance work and link evidence
- Project → Runtime Inspection: to capture evaluation signals over time
- Project → Evidence: to maintain an evidence library used across controls
- Project → Risks: to quantify, prioritize, and document treatment decisions
A sequence that works
1. Govern: set the rules. Assign ownership, define approval gates, and set risk acceptance criteria.
2. Map: scope the system. Capture the boundary, stakeholders, data flows, and intended use/misuse.
3. Measure: add signals. Define evaluations, thresholds, monitoring cadence, and owners.
4. Manage: treat risk. Implement mitigations via controls and track residual risk decisions.
5. Export and iterate. Create audit-ready snapshots and re-review on meaningful changes.
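The five-step sequence can be sketched as a simple checklist structure that a team walks repeatedly. The `LIFECYCLE` mapping and `open_items` helper below are illustrative names, not Modulos APIs; the items come from the steps above.

```python
# Illustrative sketch: the operating sequence as a checklist, walked in order.
# Keys mirror the RMF functions; items paraphrase the steps above.
LIFECYCLE = {
    "Govern": ["ownership assigned", "approval gates defined",
               "risk acceptance criteria set"],
    "Map": ["boundary captured", "stakeholders listed",
            "data flows documented", "intended use/misuse noted"],
    "Measure": ["evaluations defined", "thresholds set",
                "monitoring cadence and owners assigned"],
    "Manage": ["mitigations implemented", "residual risk decisions recorded"],
}

def open_items(completed):
    """Return lifecycle items not yet marked complete, in sequence order."""
    return [(fn, item) for fn, items in LIFECYCLE.items()
            for item in items if item not in completed]
```

Because the list preserves sequence order, the first open item tells the team which function still needs attention before moving on.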
How framework work becomes execution work
In Modulos, NIST AI RMF typically lands as a set of project requirements that map to controls and evidence.
Framework mapping
Four layers, one reusable spine:

- Frameworks: EU AI Act, ISO 42001
- Requirements: Art. 9.1 (Risk management) and Art. 10.2 (Data governance) from the EU AI Act; 6.1.1 (Risk assessment) from ISO 42001
- Components: Risk identification, Impact analysis
- Evidence: Risk register, Test results
- Controls, the reusable spine: Risk assessment process, Data validation checks

One control satisfies many requirements across many frameworks, and groups the components and evidence beneath it. An edge from any layer crosses into the Controls spine: the same control may serve a regulatory article, a standards clause, a downstream component, and the evidence that closes it.
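A minimal sketch of the spine idea in Python, using the framework and requirement labels from the mapping above. `CONTROL_MAP` and `requirements_covered` are hypothetical names for illustration, not a real Modulos API.

```python
# Hypothetical mapping: one control satisfies requirements across frameworks.
CONTROL_MAP = {
    "Risk assessment process": {
        "EU AI Act": ["Art. 9.1 Risk management"],
        "ISO 42001": ["6.1.1 Risk assessment"],
    },
    "Data validation checks": {
        "EU AI Act": ["Art. 10.2 Data governance"],
    },
}

def requirements_covered(control):
    """List every (framework, requirement) pair one control satisfies."""
    return [(fw, req)
            for fw, reqs in CONTROL_MAP[control].items()
            for req in reqs]

# One control, many requirements -- the "reusable spine":
# [('EU AI Act', 'Art. 9.1 Risk management'), ('ISO 42001', '6.1.1 Risk assessment')]
requirements_covered("Risk assessment process")
```

The inversion matters for reuse: executing the control once produces evidence that closes requirements in both frameworks simultaneously.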
Measurement and remediation loops (diagram)
To keep “Measure” and “Manage” real, link tests to controls and remediate with a traceable loop.
Runtime test
A schedule fires, an operator evaluates, a result is emitted:

- Schedule: runs on a cadence (e.g. daily)
- Evaluation: (1) fetch the latest datapoint, (2) check metric < threshold, (3) emit a result: Passed, Failed, or Error

Tests evaluate the most recent signal available within the lookback window. Outside the window, the result is Error, not Failed.
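The evaluation and lookback rule can be sketched as follows, assuming a lower-is-better metric. `evaluate_test` is an illustrative helper, not the actual operator implementation.

```python
from datetime import datetime, timedelta

def evaluate_test(datapoints, threshold, lookback, now=None):
    """Evaluate a runtime test against the freshest datapoint in the window.

    datapoints: iterable of (timestamp, value) pairs.
    Returns "Passed" when the metric is under the threshold, "Failed" when
    it is not, and "Error" when no datapoint falls inside the lookback
    window -- a stale or missing signal is distinct from a real failure.
    """
    now = now or datetime.now()
    in_window = [(ts, v) for ts, v in datapoints if now - ts <= lookback]
    if not in_window:
        return "Error"
    _, latest = max(in_window, key=lambda p: p[0])  # newest datapoint wins
    return "Passed" if latest < threshold else "Failed"
```

The three-way result is the point: an Error routes to whoever owns the data pipeline, while a Failed routes to the control owner.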
Remediation loop
A continuous five-step cycle, not a ticket queue.
1. Detect: a failed or error result
2. Triage: data issue vs real drift
3. Fix: change the system or the control implementation
4. Record: update evidence and the audit trail
5. Re-verify: re-run the test or monitor
When tests are linked to controls, failures route to control owners and keep governance aligned with reality.
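One way to sketch the cycle in code, with `run_test` and `apply_fix` as hypothetical stand-ins for the linked runtime test and whatever change triage decides on; nothing here is a Modulos API.

```python
def remediation_loop(run_test, apply_fix, max_rounds=3):
    """Drive Detect -> Triage -> Fix -> Record -> Re-verify until passing.

    run_test:  callable returning "Passed", "Failed", or "Error".
    apply_fix: callable standing in for the chosen remediation
               (data fix or control change).
    Returns the trail of (phase, detail) steps for the audit record.
    """
    trail = [("Detect", run_test())]
    for _ in range(max_rounds):
        if trail[-1][1] == "Passed":
            break  # nothing to remediate (or re-verification succeeded)
        trail.append(("Triage", "data issue vs real drift"))
        apply_fix()
        trail.append(("Fix", "applied"))
        trail.append(("Record", "evidence and audit trail updated"))
        trail.append(("Re-verify", run_test()))
    return trail
```

Keeping the trail as data mirrors the point of the loop: every failure leaves a recorded path from detection to re-verification rather than a closed ticket.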
Exports and stakeholder packages (diagram)
Exports are point-in-time snapshots. They are most useful when scope is stable and evidence is linked.
Audit pack
How four export types collapse into one shippable bundle:

- Project PDF export
- Top controls (PDF exports)
- Evidence files (attachments)
- Key assets (Markdown exports)

All four input types are versioned together into a single shippable bundle, ready for the auditor.
Exports are snapshots: keep scope stable before exporting, since the bundle freezes whatever was in place at export time.
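As a sketch, the four input types could be collapsed into one bundle with the standard library. The folder names and `build_audit_pack` helper are assumptions for illustration, not the actual export layout.

```python
import zipfile
from pathlib import Path

def build_audit_pack(out_path, project_pdf, control_pdfs,
                     evidence_files, asset_markdowns):
    """Collapse the four export types into a single shippable zip bundle.

    Folder names inside the archive are illustrative, not the real layout.
    """
    with zipfile.ZipFile(out_path, "w") as pack:
        pack.write(project_pdf, arcname=f"project/{Path(project_pdf).name}")
        for folder, files in (("controls", control_pdfs),
                              ("evidence", evidence_files),
                              ("assets", asset_markdowns)):
            for f in files:
                pack.write(f, arcname=f"{folder}/{Path(f).name}")
```

Writing everything into one archive in a single pass is what gives the "versioned together" property: the bundle reflects exactly the files present at export time.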
Common pitfalls
- Treating NIST AI RMF as a one-time assessment rather than continuous governance
- Collecting evidence in drive folders without linking it to controls and decisions
- Running tests without thresholds and clear action rules
- Changing the model, data, or deployment without triggering re-review of risks and approvals
Related pages
- Core functions and profiles: use profiles to define a target state and turn gaps into governance work
- Controls: execute governance work and attach evidence at the control-component level
- Runtime Inspection: make evaluations part of the governance record over time
- Risk quantification: prioritize treatment with monetary risk signals when needed