# Key principles and obligations (AI systems playbook)
This page is a practical lens on GDPR for teams building and operating AI systems.
## Start with two scoping decisions
Before you write documents, clarify two things:
- Does the AI system process personal data?
  - If prompts, logs, identifiers, user accounts, HR data, customer records, or telemetry touch the system, assume yes.
- What is your role: controller, processor, or both?
  - Your obligations (and what you can contractually delegate) depend on your role in the processing chain.
These decisions drive what “good” looks like for artifacts such as DPIAs (data protection impact assessments), privacy notices, and vendor governance.
## Principles you will run into (Art. 5)
Common GDPR principles that map directly to AI work:
- Lawfulness and transparency: clear basis and clear communication
- Purpose limitation: data used for specified purposes, not silently repurposed
- Data minimization: collect and retain only what you need
- Accuracy: data quality matters, including in training and monitoring
- Storage limitation: retention and deletion are governance decisions
- Integrity and confidentiality: security and access control are core
- Accountability: you must be able to demonstrate compliance
## Translate principles into decisions and evidence
Use the principles as an implementation checklist:
| Principle | The decision you must make | What evidence usually exists |
|---|---|---|
| Lawfulness & transparency | What lawful basis applies for each purpose? What do you tell people? | Lawful basis record; privacy notice; training data provenance notes |
| Purpose limitation | What are the allowed purposes (training, support, fraud, analytics)? | Processing purpose statement; change log for new purposes |
| Data minimization | What inputs/logs are essential? What is explicitly excluded? | Data map; logging policy; field-level minimization notes |
| Accuracy | How do you correct/refresh data and handle model drift? | Data quality checks; monitoring results; issue tickets |
| Storage limitation | What is retained, for how long, and why? | Retention schedule; deletion run logs; backups policy |
| Integrity & confidentiality | What technical measures protect data end-to-end? | Access controls; encryption configs; vendor security docs |
| Accountability | Who approved what, and when? | DPIA approval record; review history; export/audit package |
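The minimization and storage-limitation rows above translate naturally into code. Below is a minimal Python sketch, assuming a logging pipeline with an explicit field allowlist; the field names, allowlist, and retention values are illustrative, not a prescribed schema.

```python
# Minimal sketch: field-level minimization for inference logs.
# Field names, allowlist, and retention values are illustrative, not a schema.

ALLOWED_LOG_FIELDS = {"request_id", "timestamp", "model_version", "latency_ms"}
RETENTION_DAYS = {"inference_log": 30}  # hypothetical retention schedule

def minimize_log_record(raw: dict) -> dict:
    """Keep only fields the logging policy explicitly allows."""
    record = {k: v for k, v in raw.items() if k in ALLOWED_LOG_FIELDS}
    # Tag the record so a deletion job (storage limitation) can enforce retention.
    record["retention_days"] = RETENTION_DAYS["inference_log"]
    return record

# User identifiers and prompt text are dropped before anything is persisted.
persisted = minimize_log_record({
    "request_id": "r-123",
    "timestamp": "2024-05-01T12:00:00Z",
    "model_version": "v7",
    "latency_ms": 84,
    "user_email": "alice@example.com",  # excluded by the allowlist
    "prompt_text": "example prompt",    # excluded by the allowlist
})
```

An explicit allowlist doubles as the “field-level minimization notes” evidence: the policy and the code are the same artifact.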
## Common obligations and artifacts (what teams actually maintain)
Different organizations will require different artifacts, but common ones include:
### 1) Lawful basis and purpose (Art. 6, 9)
For AI systems, split lawful basis by purpose, not by “the system”:
- training/fine-tuning
- inference/user interaction
- logging and monitoring
- security and fraud detection
- product analytics and improvement
If special category data (Art. 9) or biometrics are involved, treat it as a separate, explicit decision with stricter controls.
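One way to make this concrete is a small machine-readable record keyed by the purposes above. This is a sketch only; the bases and flags shown are placeholders for your own legal analysis, not recommendations.

```python
# Illustrative purpose-by-purpose record; the purposes mirror the list above,
# and the bases/flags are placeholders for your own legal analysis.

LAWFUL_BASIS_BY_PURPOSE = {
    "training_fine_tuning":  {"basis": "legitimate_interests", "art9_data": False},
    "inference_interaction": {"basis": "contract",             "art9_data": False},
    "logging_monitoring":    {"basis": "legitimate_interests", "art9_data": False},
    "security_fraud":        {"basis": "legal_obligation",     "art9_data": False},
    "product_analytics":     {"basis": "consent",              "art9_data": False},
}

def lawful_basis_for(purpose: str) -> dict:
    """Fail closed: a purpose with no recorded basis is not an allowed purpose."""
    entry = LAWFUL_BASIS_BY_PURPOSE.get(purpose)
    if entry is None:
        raise ValueError(f"No lawful basis recorded for purpose: {purpose}")
    return entry
```

A fail-closed lookup like this turns silent repurposing, the classic purpose-limitation failure mode, into an explicit error rather than a default.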
### 2) Transparency (Art. 13–14)
Make it easy to answer:
- “What data do you use, for what, and for how long?”
- “Do you use automated decision-making or profiling?”
- “Who do you share data with (vendors/subprocessors)?”
Common artifacts:
- privacy notices (user-facing) and internal “processing summaries”
- training data source notes and dataset documentation (when applicable)
### 3) Data subject rights (Art. 15–22)
You need an operable DSAR (data subject access request) process, not just a policy:
- access, rectification, erasure, restriction, portability, objection
- ability to locate data across systems (including logs and vendor systems)
- response workflow with approvals and audit trail
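As a sketch of the “locate data across systems” step, here is a hypothetical registry of per-store lookup functions; the store names and handlers are invented, and real lookups would replace the placeholders.

```python
# Hypothetical DSAR locator: fan one lookup out across every registered data
# store (app DB, logs, vendor systems) and keep an audit trail of the search.

from datetime import datetime, timezone

DATA_STORES = {}  # store name -> lookup function

def register_store(name):
    def wrap(fn):
        DATA_STORES[name] = fn
        return fn
    return wrap

@register_store("app_db")
def find_in_app_db(subject_id: str) -> list:
    return [f"app_db/users/{subject_id}"]  # placeholder lookup

@register_store("inference_logs")
def find_in_logs(subject_id: str) -> list:
    return []  # placeholder lookup

def locate_subject_data(subject_id: str) -> dict:
    """Return where the subject's data lives, plus an audit record."""
    hits = {name: fn(subject_id) for name, fn in DATA_STORES.items()}
    audit = {
        "subject": subject_id,
        "stores_searched": sorted(DATA_STORES),
        "at": datetime.now(timezone.utc).isoformat(),
    }
    return {"hits": hits, "audit": audit}
```

The registry pattern matters more than the specifics: every new data store (including vendor systems) must register a lookup, so DSAR coverage grows with the architecture instead of lagging behind it.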
### 4) Automated decision-making and profiling (Art. 22)
When AI meaningfully affects individuals (e.g., eligibility, pricing, employment decisions), clarify:
- whether Art. 22 restrictions apply in your scenario
- what human review/override exists in the real workflow
- how you explain outcomes at the appropriate level of detail
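A minimal sketch of the human-review point follows; the decision types and routing are assumptions. The key property is that, for in-scope decisions, the model output becomes an input to a reviewer rather than the final outcome.

```python
# Sketch of a human-review gate; decision types and routing are assumptions.

MEANINGFUL_EFFECT_DECISIONS = {"loan_eligibility", "pricing_tier", "hiring_screen"}

def decide(decision_type: str, model_output: dict) -> dict:
    if decision_type in MEANINGFUL_EFFECT_DECISIONS:
        # Route to a human before the outcome takes effect; the model output
        # is a recommendation the reviewer can override.
        return {
            "status": "pending_human_review",
            "model_recommendation": model_output,
            "reviewer_can_override": True,
        }
    return {"status": "automated", "outcome": model_output}
```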
### 5) Records of processing activities (RoPA) (Art. 30)
RoPA becomes manageable when it’s treated as a living index of:
- purposes, categories of data/subjects, recipients, transfers
- retention and security measures
- links to system-level evidence (DPIA, vendor contracts, notices)
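A RoPA entry can live as a structured record with links out to evidence. This dataclass is illustrative: the field names follow the items above, and the link scheme is hypothetical.

```python
# Illustrative RoPA entry with links out to system-level evidence.

from dataclasses import dataclass, field

@dataclass
class RopaEntry:
    system: str
    purposes: list
    data_categories: list
    subject_categories: list
    recipients: list
    transfers: list
    retention: str
    security_measures: list
    evidence_links: dict = field(default_factory=dict)

entry = RopaEntry(
    system="support-assistant",
    purposes=["inference_interaction", "logging_monitoring"],
    data_categories=["contact details", "support tickets"],
    subject_categories=["customers"],
    recipients=["model provider", "hosting provider"],
    transfers=["US (SCCs)"],
    retention="tickets 24 months; logs 30 days",
    security_measures=["TLS in transit", "encryption at rest", "RBAC"],
    evidence_links={
        "dpia": "dms://dpia/support-assistant",        # hypothetical link
        "vendor_dpa": "dms://contracts/model-provider",  # hypothetical link
    },
)
```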
### 6) DPIA for higher-risk processing (Art. 35)
DPIAs show up frequently in AI programs because of:
- large-scale processing, vulnerable subjects, or systematic monitoring
- new technology with uncertain impacts
- meaningful effects on individuals (especially if automated)
Treat the DPIA like a decision record: risks, mitigations, residual risk acceptance, and approval.
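Treated as a decision record, a DPIA reduces to a small structure. The content below is invented purely for illustration:

```python
# A DPIA captured as a decision record; all values are invented examples.
dpia_record = {
    "system": "support-assistant",
    "risks": [
        {"id": "R1", "description": "re-identification from logged prompts",
         "severity": "high"},
    ],
    "mitigations": {"R1": ["prompt scrubbing before persistence",
                           "30-day log retention"]},
    "residual_risk": {"R1": "low"},
    "acceptance": {"by": "DPO", "date": "2024-05-01"},  # residual risk sign-off
    "approved": True,
}
```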
### 7) Security and breach readiness (Art. 32–34)
GDPR security is not separate from engineering:
- access controls and least privilege
- encryption in transit/at rest
- logging, incident response, and breach notification readiness
- supplier security posture (especially model providers and hosting)
### 8) Vendors, subprocessors, and transfers (Art. 28; Chapter V)
AI systems often depend on a vendor stack. Governance typically includes:
- DPAs (data processing agreements) and subprocessor lists (who processes what)
- security reviews and periodic reassessment
- international transfer mechanism decisions (e.g., SCCs) and supporting assessments
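Vendor governance also benefits from one structured record per vendor that ties these items together. All values below are placeholders, and the transfer-assessment reference is a hypothetical internal ID:

```python
# One structured record per vendor; every value is a placeholder.
vendors = [
    {
        "name": "example-model-provider",
        "processes": ["prompts", "completions"],
        "dpa_signed": "2024-01-15",
        "subprocessor_list_url": "https://example.com/subprocessors",
        "last_security_review": "2024-03-01",
        "review_cadence_months": 12,
        "transfer_mechanism": "SCCs",
        "transfer_assessment": "TIA-2024-007",  # hypothetical internal reference
    },
]
```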
## What “good evidence” looks like
Evidence should attach to the smallest meaningful claim (a control component), and be reusable across controls.
[Diagram: one evidence artifact (model_validation.pdf) attaches to several control components, which roll up into controls such as CTRL-001 Model Validation and CTRL-002 Data Quality; the same evidence is reused across controls.]

Attach evidence to the smallest meaningful claim.
Examples of evidence artifacts used across GDPR controls:
- DPIA (or privacy impact assessment) with approval record
- RoPA entry (or link to the RoPA system) for the AI system
- privacy notice versions and change history
- retention and deletion run logs (including backups policy)
- vendor DPAs/subprocessor lists and security review outputs
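For the “deletion run logs” item, here is a minimal sketch of a retention-enforcement job that emits exactly that evidence. Store access is stubbed, and the schedule and field names are illustrative.

```python
# Sketch of a retention-enforcement job that emits a "deletion run log" entry.
# Store access is stubbed; the schedule and field names are illustrative.

from datetime import datetime, timedelta, timezone

RETENTION = {"inference_logs": timedelta(days=30)}  # hypothetical schedule

def run_deletion(store: str, records: list) -> dict:
    """records: dicts with a tz-aware 'created_at' datetime."""
    cutoff = datetime.now(timezone.utc) - RETENTION[store]
    expired = [r for r in records if r["created_at"] < cutoff]
    # delete_records(store, expired)  # the real deletion call would go here
    return {  # this dict is the deletion run log entry, kept as evidence
        "store": store,
        "ran_at": datetime.now(timezone.utc).isoformat(),
        "cutoff": cutoff.isoformat(),
        "deleted_count": len(expired),
    }
```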
Go deeper: Operationalizing GDPR in Modulos.
## Disclaimer
This page is for general informational purposes and does not constitute legal advice.