FrontierAI Registry + Assurance Fabric

The compliance infrastructure advanced AI requires.

OnTargetCompliance is a six-layer assurance platform — from AI system registry to regulator-ready evidence export. Every layer is connected. Every output is signed. Every claim is verifiable.

Platform Overview

FrontierAI Registry + Assurance Fabric

The FrontierAI Registry is the authoritative record of every AI system in your organisation. It captures model identity, version lineage, deployment context, risk classification, and responsible ownership — the single source of truth from which all compliance activity flows.

The Assurance Fabric is the six-layer compliance infrastructure that sits above the Registry. It connects evidence collection, node-based verification, policy alignment, human oversight, and regulator export into a single, auditable workflow — for every AI system, across every applicable framework.

Together, they replace scattered spreadsheets, manual audit preparation, and one-off compliance projects with a continuous, verifiable system of record — one that produces evidence regulators accept and customers trust.

AI systems registered: Full inventory, not a sample
Evidence type: Cryptographically signed artifacts
Framework coverage: EU AI Act, GDPR, UK GDPR, SOC 2, ISO 27001, Clarity Act
Output formats: Regulator ZIP, Board PDF, Audit Report, Evidence Ledger, Trust Portal
Verification model: Trust-Verified Nodes — modular, signed, auditable
Human oversight: Built-in review gates at defined lifecycle stages
See the platform

Register your first AI system in under 30 minutes.

FrontierAI Registry is designed for immediate deployment. Book a demo to see how quickly your AI inventory can be structured and documented.

Architecture

Six-layer compliance stack

Each layer has a distinct function. Together they form a complete, connected assurance infrastructure — from first registration to final regulator submission.

FrontierAI Registry
AI System Record

The authoritative record of every AI system in your organisation. Each system entry captures model identity, version lineage, deployment context, intended use, risk classification, and responsible owner. The Registry is the foundation — nothing enters the compliance lifecycle without a Registry entry.

Key data captured
System ID & version hash
Model card & architecture
Deployment scope
Risk tier (EU AI Act)
Owner & review chain
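The key data captured above can be sketched as a data structure. A minimal, hypothetical Python sketch — field names are assumptions based on the listed data points, not the platform's actual schema:

```python
from dataclasses import dataclass, field
import hashlib
import uuid


@dataclass
class RegistryEntry:
    """Hypothetical sketch of a FrontierAI Registry record (names assumed)."""
    name: str
    version: str
    model_card: str          # reference to the model card document
    deployment_scope: str    # e.g. "EU", "UK", "US Federal"
    risk_tier: str           # EU AI Act tier, e.g. "MINIMAL" | "LIMITED" | "HIGH"
    owner_id: str            # responsible owner in the review chain
    system_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    @property
    def version_hash(self) -> str:
        # Binds the Registry entry to a specific, named model version.
        return hashlib.sha256(f"{self.name}:{self.version}".encode()).hexdigest()


entry = RegistryEntry(
    name="credit-scoring-model", version="2.1.0",
    model_card="cards/credit-scoring-v2.md",
    deployment_scope="EU", risk_tier="HIGH", owner_id="owner-042",
)
```

The version hash is deterministic over name and version, so two records for the same model version always agree — one way to make "nothing enters the compliance lifecycle without a Registry entry" checkable.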
Architecture deep dive

Six layers. One continuous compliance workflow.

From Registry to Export, every layer produces evidence that feeds the next. Book a demo to walk through the full architecture with a compliance specialist.

Lifecycle Workflow

From intake to regulator submission

The Assurance Fabric manages a seven-stage lifecycle for every AI system. Each stage produces signed evidence. Each gate requires explicit sign-off. Nothing advances without a complete, verifiable record.

Stage 01

Intake

Register the AI system in the FrontierAI Registry. Capture system identity, model card, deployment context, intended use, and initial risk classification. Assign a responsible owner and compliance lead.

Registry entry · Risk tier assignment · Owner assignment
Stage 02

Build Provenance

Document the full provenance of the AI system — training data sources, dataset inventory, preprocessing steps, model architecture, and version lineage. Execute the Dataset Inventory Node and Model Registration Node.

Dataset inventory record · Model provenance chain · Signed architecture log
Stage 03

Evaluate

Execute the Benchmark Execution Node. Run capability evaluations, safety benchmarks, bias assessments, and performance tests. All results are captured as signed evidence artifacts with full methodology documentation.

Benchmark results (signed) · Bias assessment report · Performance baseline
Stage 04

Safety Case

Build the structured safety case. Execute the Safety Mitigation Node — document identified risks, implemented mitigations, residual risk assessment, and human oversight mechanisms. Assign for independent technical review.

Safety case document · Risk register (signed) · Mitigation evidence
Stage 05

Release Gate

Execute the Independent Review Node. An independent reviewer assesses the safety case, benchmark results, and compliance documentation. The Policy Engine validates framework alignment. Executive sign-off completes the gate.

Independent review sign-off · Policy compliance check · Release authorisation
Stage 06

Monitor

Post-deployment monitoring captures operational performance, incident reports, and drift indicators. The Registry entry is updated with monitoring data. Triggers are configured to initiate re-evaluation if thresholds are breached.

Monitoring log (ongoing) · Incident records · Drift alerts
Stage 07

Submit / Disclose

Execute the Submission Packet Node. Generate the regulator-ready evidence bundle, board summary, and customer assurance export. Submit to the EU AI Act database, provide customer Trust Portal access, and file with relevant regulators.

Regulator ZIP · Board PDF · Trust Portal link
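The gating behaviour described across the stages — nothing advances without a complete, verifiable record — can be sketched as a simple state machine. Stage and evidence names below are illustrative, taken from the stage descriptions; they are not the platform's actual identifiers:

```python
# Hypothetical sketch of the seven-stage lifecycle gate: a system advances
# only when every evidence artifact required by its current stage is signed.
STAGES = ["intake", "build_provenance", "evaluate", "safety_case",
          "release_gate", "monitor", "submit"]

REQUIRED = {
    "intake": {"registry_entry", "risk_tier", "owner_assignment"},
    "build_provenance": {"dataset_inventory", "model_provenance", "architecture_log"},
    "evaluate": {"benchmark_results", "bias_assessment", "performance_baseline"},
    "safety_case": {"safety_case", "risk_register", "mitigation_evidence"},
    "release_gate": {"independent_review", "policy_check", "release_authorisation"},
    "monitor": {"monitoring_log"},
    "submit": {"regulator_zip", "board_pdf", "trust_portal_link"},
}


def advance(stage: str, signed_evidence: set[str]) -> str:
    """Return the next stage, or raise if the gate's evidence is incomplete."""
    missing = REQUIRED[stage] - signed_evidence
    if missing:
        raise ValueError(f"gate blocked at {stage}: missing {sorted(missing)}")
    i = STAGES.index(stage)
    return STAGES[min(i + 1, len(STAGES) - 1)]  # "submit" is terminal
```

A system at Evaluate with only benchmark results signed, for example, cannot reach Safety Case until the bias assessment and performance baseline are also in place.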
End-to-end lifecycle

From intake to regulator submission — automated.

The seven-stage lifecycle workflow ensures no compliance step is missed. See a live run-through of an AI system moving from intake to a signed evidence package.

Book a Demo · See the Nodes
Policy Engine

Framework alignment, automated

The Policy Engine evaluates each AI system against the specific obligations of every applicable framework. Based on risk tier, deployment jurisdiction, and use case, it generates a compliance gap analysis and routes the system to the appropriate node workflows.

Trigger condition: Risk Tier = HIGH
Framework: EU AI Act
Engine action: Mandatory. Articles 9–15 obligations. Requires full node workflow A–F. Independent review required. EU database registration mandatory.

Trigger condition: Deployment = UK
Framework: UK GDPR / ICO
Engine action: Mandatory. DPIA under UK GDPR Article 35. ICO AI auditing framework alignment. Legitimate interest assessment if Article 22 applies.

Trigger condition: Customer Contract = Enterprise
Framework: SOC 2 / ISO 27001
Engine action: Required. SOC 2 Type II evidence package. ISO 27001 Annex A control mapping. Customer Trust Portal access provisioned.

Trigger condition: Jurisdiction = US Federal
Framework: Clarity Act (Proposed)
Engine action: Proactive. Algorithmic impact assessment. Transparency disclosure preparation. Automated decision-making documentation.

Trigger condition: Capability = Systemic Risk
Framework: EU AI Act GPAI
Engine action: Mandatory. GPAI systemic risk assessment. Adversarial testing. Incident reporting mechanism. Model evaluation against benchmarks.
Policy Engine output

For each AI system, the Policy Engine produces a compliance obligation map — a structured list of every applicable article, control, and requirement, with current status (met / gap / not applicable), evidence references, and the node workflows required to close any gaps. The obligation map is updated automatically as the system progresses through the lifecycle.
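The trigger table can be read as a small rules engine: each row is a predicate over the system's profile plus the obligations it activates. A hypothetical Python sketch — profile keys, rule wording, and the returned map shape are all assumptions for illustration:

```python
# Hypothetical sketch of Policy Engine trigger evaluation. Each rule pairs a
# predicate over the system profile with the framework obligation it activates.
RULES = [
    (lambda s: s["risk_tier"] == "HIGH",
     ("EU AI Act", "Articles 9-15 obligations; full node workflow A-F; independent review")),
    (lambda s: s["deployment"] == "UK",
     ("UK GDPR / ICO", "DPIA under Article 35; ICO AI auditing framework alignment")),
    (lambda s: s["contract"] == "Enterprise",
     ("SOC 2 / ISO 27001", "SOC 2 Type II evidence package; Annex A control mapping")),
]


def obligation_map(system: dict) -> dict[str, str]:
    """Evaluate every rule and collect the obligations that fire."""
    return {fw: obligation for pred, (fw, obligation) in RULES if pred(system)}


profile = {"risk_tier": "HIGH", "deployment": "UK", "contract": "SMB"}
obligations = obligation_map(profile)
# A high-risk UK deployment on an SMB contract triggers the EU AI Act and
# UK GDPR rules, but not the enterprise-contract SOC 2 / ISO 27001 rule.
```

Re-running the same evaluation after each lifecycle stage is one simple way to keep the obligation map "updated automatically as the system progresses".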

Framework alignment

Policy Engine maps every control to the right framework.

EU AI Act, GDPR, UK GDPR, SOC 2, ISO 27001, and Clarity Act — the Policy Engine automatically maps your AI system's evidence to the applicable framework requirements.

Data Model

Structured for audit, built for scale

The platform data model is designed for compliance integrity — every entity is uniquely identified, every relationship is explicit, and every record carries a complete provenance chain.

AI System (1:N → Nodes, Evidence, Exports)
system_id (UUID) · name · version · model_card · risk_tier · deployment_scope · owner_id · created_at · status

Trust-Verified Node (N:1 → AI System; 1:N → Evidence Artifacts)
node_id (UUID) · system_id (FK) · node_type (A–F) · status · inputs[] · outputs[] · reviewer_id · signed_at · signature_hash

Evidence Artifact (N:1 → Node; referenced by → Exports)
artifact_id (UUID) · node_id (FK) · artifact_type · content_hash · author_id · timestamp · signature · chain_of_custody[]

Policy Mapping (N:1 → AI System; N:M → Evidence Artifacts)
mapping_id (UUID) · system_id (FK) · framework · article_ref · obligation · status · evidence_ids[] · gap_notes

Export Package (N:1 → AI System; references → Evidence Artifacts)
export_id (UUID) · system_id (FK) · export_type · version · generated_at · signature · recipient · access_url
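One way to read the entity cards is as a relational schema. A minimal sketch using SQLite, covering the first three entities — column names follow the cards, while types and constraints are assumptions, and array-valued fields such as chain_of_custody[] are omitted for brevity:

```python
import sqlite3

# Hypothetical relational sketch of the data model's core entities.
# Foreign keys make the 1:N relationships from the cards explicit.
schema = """
CREATE TABLE ai_system (
    system_id        TEXT PRIMARY KEY,
    name             TEXT NOT NULL,
    version          TEXT NOT NULL,
    model_card       TEXT,
    risk_tier        TEXT,
    deployment_scope TEXT,
    owner_id         TEXT,
    created_at       TEXT,
    status           TEXT
);
CREATE TABLE node (
    node_id        TEXT PRIMARY KEY,
    system_id      TEXT NOT NULL REFERENCES ai_system(system_id),
    node_type      TEXT CHECK (node_type IN ('A','B','C','D','E','F')),
    status         TEXT,
    reviewer_id    TEXT,
    signed_at      TEXT,
    signature_hash TEXT
);
CREATE TABLE evidence_artifact (
    artifact_id   TEXT PRIMARY KEY,
    node_id       TEXT NOT NULL REFERENCES node(node_id),
    artifact_type TEXT,
    content_hash  TEXT,
    author_id     TEXT,
    timestamp     TEXT,
    signature     TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(schema)
```

The NOT NULL foreign keys encode the rule that every node belongs to a registered system and every artifact to a node — no orphaned evidence.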
Data model

Structured evidence from day one.

The FrontierAI data model is designed for regulatory requirements — not retrofitted. Every field maps to a specific obligation under EU AI Act, GDPR, or SOC 2.

Trust-Verification

How trust is verified, not claimed

Every compliance output in OnTargetCompliance is backed by a cryptographically signed evidence chain. The Trust Verification Envelope is the mechanism — a structured record that makes every claim independently verifiable.

Step 01

Input Capture

All inputs to a node are captured and hashed before execution. The input hash is included in the Trust Verification Envelope.

Step 02

Process Execution

The node executes its defined workflow in a controlled environment. Every step is logged with timestamp, actor, and outcome.

Step 03

Output Signing

On completion, the node output is cryptographically signed. The signature binds the output to inputs, process log, reviewer identity, and timestamp.

Step 04

Envelope Assembly

The Trust Verification Envelope is assembled: input hash, process log, signed output, reviewer identity, timestamp, and chain-of-custody reference.

Step 05

Chain Verification

Any regulator, auditor, or customer with access can verify the chain — confirming every compliance claim is backed by a signed, tamper-evident evidence trail.

Trust Verification Envelope — structure
input_hash: SHA-256 hash of all node inputs
process_log: Append-only log of all execution steps
signed_output: Cryptographically signed findings
reviewer_identity: Verified identity of human reviewer
timestamp: ISO 8601 timestamp of signing event
chain_ref: Reference to prior node in the chain
Next step

See the platform in action for your AI systems.

Book a technical demo to walk through the FrontierAI Registry, Assurance Fabric, and Trust-Verified Nodes for your specific AI deployment context and regulatory obligations.