Operationalizing in Modulos
The NIST AI RMF becomes actionable when it turns into a repeatable operating model: scope work, execute controls, collect evidence, review decisions, and monitor signals over time.
Recommended project structure
Most organizations use:
- One organization project for AI governance foundations (policies, shared control library, oversight cadence).
- AI system projects for product/deployment governance work where risks, tests, and evidence become system-specific.
Where in Modulos
- Project → Requirements: track what is fulfilled and what is blocked
- Project → Controls: execute governance work and link evidence
- Project → Testing: capture evaluation signals over time
- Project → Evidence: maintain an evidence library used across controls
- Project → Risks: quantify, prioritize, and document treatment decisions
A sequence that works
1. Govern: set the rules. Assign ownership, define approval gates, and set risk acceptance criteria.
2. Map: scope the system. Capture boundary, stakeholders, data flows, and intended use/misuse.
3. Measure: add signals. Define evaluations, thresholds, monitoring cadence, and owners.
4. Manage: treat risk. Implement mitigations via controls and track residual risk decisions.
5. Export and iterate. Create audit-ready snapshots and re-review on meaningful changes.
How framework work becomes execution work
In Modulos, NIST AI RMF typically lands as a set of project requirements that map to controls and evidence.
Framework requirements cascade into reusable controls and linked evidence:

- Frameworks: EU AI Act (regulatory), ISO 42001 (standard)
- Requirements: Art. 9.1 (risk management), Art. 10.2 (data governance), 6.1.1 (risk assessment)
- Controls: risk assessment process, data validation checks (both reusable)
- Components: risk identification, impact analysis
- Evidence: risk register (document), test results (artifact)

Three properties hold throughout:

- Requirements preserve the source structure.
- Controls are reusable across frameworks.
- Evidence attaches to components (sub-claims).
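This structure can be sketched as a minimal data model. All class and field names below are illustrative assumptions, not the Modulos API:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    name: str
    kind: str  # "Document" or "Artifact"

@dataclass
class Component:
    name: str                                    # a sub-claim within a control
    evidence: list = field(default_factory=list)

@dataclass
class Control:
    name: str                                    # reusable across frameworks
    components: list = field(default_factory=list)

@dataclass
class Requirement:
    ref: str        # preserves the source structure, e.g. "Art. 9.1"
    framework: str  # e.g. "EU AI Act"
    controls: list = field(default_factory=list)

# One reusable control satisfies requirements from two frameworks.
risk_process = Control("Risk assessment process", [
    Component("Risk identification", [Evidence("Risk register", "Document")]),
    Component("Impact analysis", [Evidence("Test results", "Artifact")]),
])

requirements = [
    Requirement("Art. 9.1", "EU AI Act", [risk_process]),
    Requirement("6.1.1", "ISO 42001", [risk_process]),
]

# The same control object is shared, so evidence collected once
# serves every framework that references it.
assert requirements[0].controls[0] is requirements[1].controls[0]
```

Sharing one control object between requirements is what makes cross-framework reuse cheap: evidence attached at the component level propagates to every framework that maps onto that control.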
Measurement and remediation loops
To keep “Measure” and “Manage” real, link tests to controls and remediate with a traceable loop.
Tests run on a schedule (e.g., daily). Each scheduled run:

1. Fetches the latest datapoint
2. Compares the metric against its threshold
3. Emits a result: passed, failed, or error
Tests evaluate the most recent signal available in the window.
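One evaluation pass could look like the sketch below, assuming a lower-is-worse metric where dropping below the threshold fails; the function and parameter names are hypothetical, not a Modulos interface:

```python
def evaluate(fetch_latest, threshold):
    """One scheduled run: fetch the newest datapoint in the window
    and emit "passed", "failed", or "error"."""
    try:
        metric = fetch_latest()
    except Exception:
        return "error"   # fetch raised: emit an error result
    if metric is None:
        return "error"   # no datapoint available in the window
    return "failed" if metric < threshold else "passed"

# Illustrative threshold and values:
print(evaluate(lambda: 0.93, threshold=0.90))  # passed
print(evaluate(lambda: 0.85, threshold=0.90))  # failed
print(evaluate(lambda: None, threshold=0.90))  # error
```

Note that "no data" is surfaced as an error rather than a pass: a silent monitoring gap should be just as visible to control owners as a genuine failure.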
Continuous remediation

1. Detect: a failed or error result
2. Triage: data issue vs. real drift
3. Fix: change the system or the control implementation
4. Record: update evidence and the audit trail
5. Re-verify: re-run the test or monitor
When tests are linked to controls, failures route to control owners and keep governance aligned with reality.
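The five steps can be expressed as one pass of a loop. The hook functions below (`triage`, `fix`, `record`, `rerun`) are stand-ins for whatever process or tooling performs each step:

```python
def remediation_pass(result, triage, fix, record, rerun):
    """One pass of the loop: detect -> triage -> fix -> record -> re-verify."""
    if result not in ("failed", "error"):
        return result                 # 1. Detect: only failures proceed
    cause = triage(result)            # 2. Triage: data issue vs. real drift
    fix(cause)                        # 3. Fix: change system or control implementation
    record(result, cause)            # 4. Record: update evidence and audit trail
    return rerun()                    # 5. Re-verify: re-run the test or monitor

# Stub hooks for illustration:
audit_trail = []
outcome = remediation_pass(
    "failed",
    triage=lambda r: "real drift",
    fix=lambda cause: None,
    record=lambda r, c: audit_trail.append((r, c)),
    rerun=lambda: "passed",
)
print(outcome, audit_trail)  # passed [('failed', 'real drift')]
```

The point of step 4 sitting before step 5 is traceability: the failure and its diagnosed cause enter the audit trail even when re-verification later succeeds.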
Exports and stakeholder packages
Exports are point-in-time snapshots. They are most useful when scope is stable and evidence is linked.
A project PDF export bundles:

- Top controls (PDF exports)
- Evidence files (attachments)
- Key assets (Markdown exports)

Together these form the audit pack.
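Assembling such a package can be sketched as follows; the function name, file naming, and dictionary layout are assumptions for illustration, not the actual export format:

```python
def build_audit_pack(top_controls, evidence_files, key_assets):
    """Bundle a point-in-time snapshot of a project's governance state."""
    return {
        "controls": [f"{name}.pdf" for name in top_controls],  # PDF exports
        "evidence": list(evidence_files),                      # attachments as-is
        "assets": [f"{name}.md" for name in key_assets],       # Markdown exports
    }

pack = build_audit_pack(
    top_controls=["Risk assessment process"],
    evidence_files=["risk_register.xlsx"],
    key_assets=["system_card"],
)
print(pack["controls"])  # ['Risk assessment process.pdf']
```

Because the pack is a snapshot, anything added or changed after export is not reflected in it, which is why scope should be stable before exporting.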
Common pitfalls
- Treating NIST AI RMF as a one-time assessment rather than continuous governance
- Collecting evidence in drive folders without linking it to controls and decisions
- Running tests without thresholds and clear action rules
- Changing the model, data, or deployment without triggering re-review of risks and approvals
Related pages
- Core functions and profiles: use profiles to define a target state and turn gaps into governance work
- Controls: execute governance work and attach evidence at the control-component level
- Testing: make evaluations part of the governance record over time
- Risk quantification: prioritize treatment with monetary risk signals when needed