Building the data layer for AI governance.
AI governance today produces documents — Model Cards, System Cards, risk assessments, control narratives — when it should produce data. These artifacts live in PDFs and wiki pages that no pipeline can read, no auditor can query, and no agent can consume. I'm an information professional (MLIS) treating that gap as a cataloging problem: structured schemas, stable identifiers, and crosswalks between threats (MITRE ATLAS) and controls (NIST AI RMF, ISO 42001, SR 11-7), exported in machine-readable formats (OSCAL), so governance becomes something CI/CD gates, GRC platforms, and agents can actually run on.
I call the working synthesis the Governance Card Stack — Model, System, and Agent Cards as a unified, machine-readable spine.
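To make "machine-readable spine" concrete, here is a minimal sketch of what one crosswalk entry might look like: a threat technique keyed by a stable identifier and mapped to controls across frameworks, serialized to JSON for downstream tooling. The schema, field names, and the specific control IDs below are illustrative assumptions, not the published Card Stack format.

```python
import json
from dataclasses import dataclass, field, asdict

# Hypothetical Card Stack crosswalk entry: one threat, many control mappings.
# All identifiers are illustrative placeholders, not an official schema.
@dataclass
class CrosswalkEntry:
    entry_id: str                  # stable identifier for this mapping
    threat_id: str                 # e.g., a MITRE ATLAS technique ID
    controls: dict = field(default_factory=dict)  # framework -> control IDs

entry = CrosswalkEntry(
    entry_id="gcs-xwalk-0001",
    threat_id="AML.T0043",         # illustrative ATLAS technique reference
    controls={
        "NIST AI RMF": ["MEASURE 2.7"],   # illustrative subcategory
        "ISO 42001": ["A.6.2.4"],          # illustrative Annex A control
    },
)

# Serialize so a CI/CD gate or GRC platform can consume the mapping.
print(json.dumps(asdict(entry), indent=2))
```

The point of the stable `entry_id` is that pipelines and auditors can reference the same mapping over time even as threat and control catalogs are revised.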
| Project | What it is |
|---|---|
| mltrack | CLI for AI model inventory & compliance tracking. Maps model metadata to NIST AI RMF, ISO 42001, and SR 11-7 controls. |
Controlled Vocabulary — writing on AI governance and safety through a library and information science lens; the thinking behind the code above.
| Certification | Focus |
|---|---|
| ISO 42001:2023 Lead Auditor | AI Management Systems |
| ISO 27001:2022 Lead Auditor | Information Security Management Systems |
| CompTIA Security+ | Security Fundamentals |
- Building the Governance Card Stack — a machine-readable synthesis of Model, System, and Agent Cards, mapped to MITRE ATLAS and NIST AI RMF, exported in OSCAL
- Extending mltrack — model registry discovery and Card-format export in development
- Mapping the CRI Financial Services AI RMF (230 controls, 4 functions) against MITRE ATLAS realized threats — feeding the Card Stack's threat-to-control crosswalk
- Substack: controlledvocabulary.substack.com
- LinkedIn: linkedin.com/in/joseruiz1571
- Location: Austin, Texas
