Ship Compliant AI Without Slowing Down
Integrate compliance into your ML workflows, not around them. Governum provides the tooling AI engineers need to meet EU AI Act requirements while maintaining development velocity.
```yaml
ai_system:
  name: "fraud-detection-v2"
  risk_category: "high"
  annex_iii_ref: "5(b)"
  documentation:
    technical_spec: "./docs/tech-spec.md"
    risk_assessment: "./docs/risk.md"
  monitoring:
    drift_threshold: 0.05
    performance_metrics:
      - accuracy
      - fairness_score
    alert_channels:
      - slack: "#ml-alerts"
```
Built for Developer Experience
Compliance as code, not compliance as bureaucracy
CLI First
Register systems, run compliance checks, and generate docs from your terminal.
Git Integration
Version control for compliance docs. PR-based review workflows for changes.
API Native
Full REST & GraphQL APIs. Integrate compliance checks into any workflow.
SDK Support
Python, JavaScript, and Go SDKs for programmatic compliance management.
Fits Your ML Stack
Native integrations with the tools you already use
MLflow
Weights & Biases
Kubeflow
SageMaker
Vertex AI
Azure ML
CI/CD Pipeline Integration
Add compliance gates to your ML pipelines. Automated checks ensure models can't reach production without proper documentation and risk assessment.
- GitHub Actions workflow
- GitLab CI templates
- Jenkins plugins
- Azure DevOps extensions
```yaml
- name: Governum Compliance Check
  uses: governum/compliance-action@v2
  with:
    api-key: ${{ secrets.GOVERNUM_KEY }}
    system-id: fraud-detection-v2
- name: Generate Documentation
  run: governum docs generate
- name: Upload Artifacts
  run: governum artifacts push ./model
```
Model Registry Sync
Automatic synchronization with your model registry. Every model version is tracked with its corresponding compliance documentation.
- Auto-discovery from registry
- Version-linked documentation
- Deployment tracking
- Lineage preservation
Model Versions
| Version | Stage | Compliance |
|---|---|---|
| v2.3.1 | Production | Complete |
| v2.4.0 | Staging | Review |
| v2.5.0-rc1 | Dev | Incomplete |
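The sync behavior above can be sketched as a diff between registry versions and tracked compliance records. This is an illustrative sketch only, not Governum's actual sync logic; the names `registry_versions` and `tracked` are hypothetical.

```python
def sync_registry(registry_versions, tracked):
    """Return versions found in the model registry that have no
    compliance record yet, and create an 'Incomplete' stub for each."""
    missing = [v for v in registry_versions if v not in tracked]
    for version in missing:
        # New versions start with incomplete compliance status,
        # matching the table above
        tracked[version] = {"compliance": "Incomplete", "docs": None}
    return missing
```

Running the sync on every registry poll keeps the compliance table in lockstep with deployments without manual bookkeeping.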
Documentation That Writes Itself
Stop context-switching between code and compliance docs. Governum auto-generates technical documentation from your codebase, model metadata, and training artifacts.
Code Analysis
Extract model architecture, dependencies, and data flows from source code.
Data Lineage
Automatic tracking of training data sources, transformations, and quality metrics.
Performance Capture
Import metrics from your experiment tracking tools automatically.
Auto-Generated Documentation
Model Architecture: extracted from model.py
Training Data Summary: extracted from the data pipeline
Performance Metrics: synced from MLflow
Intended Use Description: manual input required
ML Observability + Compliance
Monitor model performance and compliance status together
Drift Detection
Monitor data drift and model drift in real-time. Automatic alerts when drift exceeds compliance thresholds.
drift_threshold: 0.05
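One common way to implement threshold-based drift detection is the Population Stability Index (PSI). The sketch below is a generic illustration of that technique, not Governum's internal method; the threshold value mirrors the `drift_threshold` config above.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against degenerate samples

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Small floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

DRIFT_THRESHOLD = 0.05  # mirrors drift_threshold in the config above

def check_drift(reference, live):
    score = psi(reference, live)
    return score, score > DRIFT_THRESHOLD
```

Comparing a live feature window against the training-time reference on a schedule, and alerting when `check_drift` flags an excess, is the usual deployment pattern.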
Fairness Metrics
Track fairness metrics across protected attributes. Built-in support for demographic parity, equalized odds, and more.
fairness_score: 0.92
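Demographic parity, one of the metrics named above, can be computed in a few lines. This is a plain-Python sketch of the standard definition (maximum gap in positive-prediction rates between groups), not Governum's built-in implementation.

```python
def demographic_parity_difference(y_pred, groups):
    """Max difference in positive-prediction rate across groups
    of a protected attribute; 0.0 means perfect parity."""
    stats = {}
    for pred, group in zip(y_pred, groups):
        n, pos = stats.get(group, (0, 0))
        stats[group] = (n + 1, pos + (1 if pred == 1 else 0))
    rates = [pos / n for n, pos in stats.values()]
    return max(rates) - min(rates)
```

Tracking this value per model version alongside accuracy makes fairness regressions visible in the same review workflow as performance regressions.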
Decision Logging
Automatic logging of model predictions with full context. Queryable audit trail for regulatory requirements.
retention: 10 years
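A queryable audit trail of this kind is often built as an append-only JSON Lines log. The sketch below illustrates that pattern with the standard library; it is not Governum's actual storage format, and the field names are illustrative.

```python
import json
import time

def log_decision(path, model_id, inputs, prediction, confidence):
    """Append one prediction, with its context, to a JSON Lines audit log."""
    record = {
        "ts": time.time(),
        "model": model_id,
        "inputs": inputs,
        "prediction": prediction,
        "confidence": confidence,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

def query_decisions(path, model_id):
    """Return all logged decisions for one model, oldest first."""
    with open(path) as f:
        return [r for line in f
                if (r := json.loads(line))["model"] == model_id]
```

Because each line is an independent record, the log can be rotated, retained, and replayed for a regulator without any database migration.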
Explainability
Integration with SHAP, LIME, and other explainability tools. Generate explanations that satisfy transparency requirements.
explainer: shap
Alert Integration
Compliance alerts to Slack, PagerDuty, or any webhook. Configure severity levels and escalation paths.
slack: #ml-alerts
Human-in-the-Loop
Configure human oversight requirements. Automatic routing for review when confidence is low.
review_threshold: 0.7
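Confidence-based routing like this reduces to a simple gate: predictions below the threshold go to a review queue instead of being auto-actioned. A minimal sketch, with the in-memory queue standing in for a real review system and the threshold mirroring the config above:

```python
REVIEW_THRESHOLD = 0.7  # mirrors review_threshold in the config above

def route(prediction, confidence, review_queue):
    """Send low-confidence predictions to human review;
    auto-approve the rest."""
    if confidence < REVIEW_THRESHOLD:
        review_queue.append({
            "prediction": prediction,
            "confidence": confidence,
        })
        return "needs_review"
    return "auto_approved"
```

In production the queue would feed a ticketing or annotation tool, and reviewer decisions would flow back into the audit trail.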
Python SDK
Integrate compliance checks directly into your training scripts. A few lines of code to ensure every model is compliant.
View Full Documentation

```python
import os

from governum import GovernumClient, ComplianceError

# Initialize client
client = GovernumClient(api_key=os.environ["GOVERNUM_KEY"])

# Register or update AI system
system = client.systems.upsert(
    id="fraud-detection-v2",
    name="Fraud Detection Model",
    risk_category="high",
    purpose="Detect fraudulent transactions",
    owner="ml-team@company.com",
)

# Log training run
with system.training_run() as run:
    run.log_dataset(train_data, name="training_set")
    run.log_params(model.get_params())
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    run.log_metrics({
        "accuracy": accuracy_score(y_test, y_pred),
        "fairness": fairness_score(y_test, y_pred, sensitive),
    })

# Check compliance before deployment
result = system.check_compliance()
if not result.passed:
    raise ComplianceError(result.issues)
```
Ready to Ship Compliant AI?
Join ML teams at leading companies building responsible AI with Governum.