
Principles

Principle-based AI governance is about translation: turning stated values into operational controls and the evidence that they work.

How to read principles pragmatically

For each principle, ask:

  • what could go wrong if we violate it?
  • which controls prevent or detect that failure mode?
  • what evidence shows the controls exist and work?

From principle → controls → evidence (example)

Translate a principle into execution in four steps:

  1. Identify failure modes: what harm or misuse looks like in your context (e.g., misleading outputs, hidden automation, unfair outcomes)
  2. Choose controls: process + technical measures with owners (e.g., approval gates, guardrails, monitoring)
  3. Add signals: tests and thresholds that detect drift (e.g., evaluation suite, incident tracking, regression tests)
  4. Attach evidence: prove what you did and when (e.g., design docs, test results history, review records)
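
This mapping can be captured as structured data so it can be versioned and reviewed alongside the system. A minimal sketch in Python follows; the field names and example values are illustrative assumptions, not a prescribed schema.

    from dataclasses import dataclass, field

    @dataclass
    class Control:
        name: str            # e.g. "scheduled fairness evaluation"
        owner: str           # accountable person or role
        signals: list[str]   # tests/thresholds that detect drift
        evidence: list[str]  # artifacts proving the control ran

    @dataclass
    class PrincipleMapping:
        principle: str
        failure_modes: list[str]
        controls: list[Control] = field(default_factory=list)

    # Illustrative example only: names, owners, and thresholds are assumptions.
    fairness = PrincipleMapping(
        principle="Fairness",
        failure_modes=["unfair outcomes for specific groups"],
        controls=[
            Control(
                name="scheduled fairness evaluation",
                owner="ml-platform-team",
                signals=["demographic parity difference <= 0.10"],
                evidence=["evaluation reports", "remediation tickets"],
            )
        ],
    )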

Principle 1: Fairness

What it means in practice: AI outcomes should not systematically disadvantage groups without justification.

Typical controls:

  • define fairness expectations for the use case (what is “unfair” here)
  • review training/validation data for representativeness and bias risks
  • run fairness evaluations on a schedule and on material changes

Evidence and signals:

  • fairness metrics + thresholds + historical results
  • data documentation and bias analysis notes
  • remediation records when drift or disparity is detected
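
As a sketch of the scheduled evaluation control above, the snippet below computes a demographic parity difference and compares it to a threshold. The data, the metric choice, and the 0.10 threshold are illustrative assumptions; the right metric and threshold depend on the use case.

    import numpy as np

    def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Absolute difference in positive-outcome rates between two groups."""
        rate_a = y_pred[group == 0].mean()
        rate_b = y_pred[group == 1].mean()
        return abs(rate_a - rate_b)

    # Illustrative predictions and group labels (assumptions, not real data).
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    THRESHOLD = 0.10  # assumed threshold; set per use case
    disparity = demographic_parity_difference(y_pred, group)
    if disparity > THRESHOLD:
        print(f"Fairness check failed: disparity={disparity:.2f} > {THRESHOLD}")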

Principle 2: Accountable AI

What it means in practice: accountability is explicit, with named owners, reviewers, and escalation paths that exist and are actually used.

Typical controls:

  • assign system owner, risk owner, and reviewers (with decision rights)
  • define approval gates for launch, major change, and retirement
  • document residual risk acceptance decisions

Evidence and signals:

  • RACI / responsibility assignment
  • review history and sign-offs
  • risk treatment decisions with dates and rationale
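
One way to make the launch approval gate enforceable is a small pre-deployment check that the required sign-offs exist. A minimal sketch follows; the roles and record format are hypothetical.

    from datetime import date

    # Hypothetical sign-off records, e.g. exported from a review tracker.
    signoffs = {
        "system_owner": {"approved": True, "date": date(2024, 5, 2)},
        "risk_owner": {"approved": True, "date": date(2024, 5, 3)},
        "reviewer": {"approved": False, "date": None},
    }

    REQUIRED_ROLES = ["system_owner", "risk_owner", "reviewer"]

    def launch_gate_passed(records: dict) -> bool:
        """Launch is allowed only if every required role has approved."""
        return all(records.get(role, {}).get("approved") for role in REQUIRED_ROLES)

    if not launch_gate_passed(signoffs):
        raise SystemExit("Launch blocked: missing required sign-offs")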

Principle 3: Transparent AI

What it means in practice: stakeholders can understand where AI is used and what it is used for.

Typical controls:

  • disclose AI use where relevant (internal users, customers, impacted persons)
  • provide clear usage guidance and constraints (“do” / “don’t”)
  • track changes to intended purpose and deployment contexts

Evidence and signals:

  • disclosure text and versions (product UI text, policy statements)
  • system scope statement and change log
  • training and user guidance materials
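
A lightweight way to keep the scope statement, disclosure text, and their history reviewable is to store them as versioned records. A minimal sketch; the fields and example entries are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class ScopeChange:
        version: str
        date: str
        intended_purpose: str
        deployment_context: str
        disclosure_text: str  # user-facing wording shown where the AI is used

    # Illustrative change log; contents are assumptions.
    change_log = [
        ScopeChange("1.0", "2024-01-15",
                    "draft replies for support agents",
                    "internal agent console",
                    "Suggested replies are AI-generated; review before sending."),
        ScopeChange("1.1", "2024-06-01",
                    "draft replies for agents and customers",
                    "customer-facing help widget",
                    "This reply was drafted with AI assistance."),
    ]

    print(change_log[-1].disclosure_text)  # current disclosure wording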

Principle 4: Explainable AI

What it means in practice: explanations are as good as the technology and the context allow, and they are actionable.

Typical controls:

  • define what explanation is required by user and decision context
  • implement explanation patterns (e.g., feature-level, example-based, policy-based)
  • require human review for high-impact outcomes when explainability is limited

Evidence and signals:

  • explanation approach doc + limitations
  • user-facing guidance and escalation paths
  • review records for high-impact uses
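
As one concrete instance of a feature-level explanation pattern, the sketch below uses scikit-learn's permutation importance on a synthetic model. The data and model are stand-ins; the appropriate explanation method depends on your model and decision context.

    from sklearn.datasets import make_classification
    from sklearn.inspection import permutation_importance
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a real model and validation set (illustrative only).
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Feature-level signal: how much each feature drives validation performance.
    result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
    for i, importance in enumerate(result.importances_mean):
        print(f"feature_{i}: importance={importance:.3f}")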

Principle 5: Robust, safe and secure AI

What it means in practice: systems are engineered to withstand errors, misuse, and security threats.

Typical controls:

  • evaluation plan with thresholds (quality, robustness, safety)
  • monitoring for drift and failures, with incident response playbooks
  • security controls for access, logging, and supplier governance

Evidence and signals:

  • test results history and monitoring dashboards
  • incident tickets and postmortems linked to control fixes
  • security reviews for dependencies and vendors
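
A sketch of one drift signal that could feed the monitoring control above: a population stability index (PSI) check on model scores with an alert threshold. The distributions and the 0.2 threshold are illustrative assumptions.

    import numpy as np

    def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
        """PSI between a reference distribution and current production data."""
        edges = np.histogram_bin_edges(expected, bins=bins)
        e_counts, _ = np.histogram(expected, bins=edges)
        a_counts, _ = np.histogram(actual, bins=edges)
        e_frac = np.clip(e_counts / e_counts.sum(), 1e-6, None)
        a_frac = np.clip(a_counts / a_counts.sum(), 1e-6, None)
        return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

    # Illustrative data: reference scores from evaluation vs. scores seen in production.
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, 5000)
    production = rng.normal(0.3, 1.1, 5000)

    psi = population_stability_index(reference, production)
    if psi > 0.2:  # commonly cited rule of thumb; treat as an assumption
        print(f"Drift alert: PSI={psi:.3f}, open an incident and re-run evaluations")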

Principle 6: Human-centred AI

What it means in practice: systems preserve human agency and are designed for human outcomes.

Typical controls:

  • human oversight where decisions affect people materially
  • clear user workflows for escalation, appeal, and override
  • user experience guidance that prevents overreliance

Evidence and signals:

  • human oversight procedure and logs of escalations
  • user guidance and training artifacts
  • periodic reviews of outcomes and complaints
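
One way to operationalise human oversight is a routing rule that sends high-impact or low-confidence outcomes to a human reviewer with override rights. A minimal sketch; the thresholds and impact labels are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        subject_id: str
        score: float  # model confidence for the proposed outcome
        impact: str   # "low", "medium", or "high"

    def route(decision: Decision) -> str:
        """High-impact or low-confidence decisions go to a human reviewer."""
        if decision.impact == "high" or decision.score < 0.7:
            return "human_review"  # reviewer can approve, override, or escalate
        return "automated"

    print(route(Decision("case-42", score=0.55, impact="medium")))  # -> human_review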

Principle 7: Sustainable and environmentally friendly AI

What it means in practice: sustainability is considered in model choice, deployment, and operations.

Typical controls:

  • right-size model selection and compute budgets
  • monitor cost/usage and eliminate unnecessary spend
  • document tradeoffs (performance vs compute) and re-review periodically

Evidence and signals:

  • cost/usage reports with targets
  • architecture decisions and tradeoff notes
  • monitoring alerts for runaway usage
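
A minimal sketch of the cost/usage monitoring control, assuming an agreed monthly token budget and an alert threshold; all figures are illustrative.

    # Illustrative budget and alert level (assumptions, not recommendations).
    MONTHLY_TOKEN_BUDGET = 50_000_000
    ALERT_AT = 0.8  # warn at 80% of budget

    def check_usage(tokens_used_this_month: int) -> None:
        share = tokens_used_this_month / MONTHLY_TOKEN_BUDGET
        if share >= 1.0:
            print(f"Budget exceeded ({share:.0%}): pause non-essential workloads")
        elif share >= ALERT_AT:
            print(f"Usage at {share:.0%} of budget: review for unnecessary spend")

    check_usage(43_000_000)  # -> Usage at 86% of budget ...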

Principle 8: Privacy-preserving AI

What it means in practice: privacy is protected across training, inference, logs, and vendor relationships.

Typical controls:

  • data minimization and retention rules for prompts and logs
  • access controls and encryption for sensitive stores
  • vendor governance for model providers and subprocessors

Evidence and signals:

  • data map + retention schedule + deletion run logs
  • vendor reviews and contractual controls where applicable
  • privacy impact assessments when risk is high
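
A sketch of two minimization controls for prompts and logs: redacting obvious identifiers before logging, and a retention check that flags records for deletion. The pattern, retention period, and example values are illustrative assumptions; real redaction needs much broader coverage than a single regex.

    import re
    from datetime import datetime, timedelta, timezone

    EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

    def redact(text: str) -> str:
        """Strip obvious personal identifiers before a prompt or response is logged."""
        return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

    RETENTION = timedelta(days=30)  # assumed retention period, not a recommendation

    def expired(logged_at: datetime, now: datetime) -> bool:
        """True when a log record has passed its retention period and should be deleted."""
        return now - logged_at > RETENTION

    print(redact("Customer jane.doe@example.com asked about her invoice"))
    print(expired(datetime(2024, 1, 1, tzinfo=timezone.utc),
                  datetime(2024, 3, 1, tzinfo=timezone.utc)))  # -> True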

Disclaimer

This page is for general informational purposes and does not constitute legal advice.