Sovereign AI Platform · Private inference for regulated enterprises

AI you can put in front of your auditor.

Private LLM inference with a tamper-evident audit log, dual-host cryptographic attestation per response, NIST post-quantum signatures, and on-shore hardware. Your auditor can verify any inference offline, without contacting us.

Designed against HIPAA, SOC 2 Type II, OCC SR 11-7, and EU AI Act Article 12 logging requirements.

Why this exists

Hosted AI fails the audit.

Your data passes through a shared tenant. Your audit log is the vendor's database. The model version can change silently between your evaluation and your production traffic. There is no cryptographic proof that the response you got is the response that was sent — only the vendor's word.

For the kinds of decisions a regulated business automates with AI — credit, claims, diagnostic suggestions, sanctions screening, contract review — that gap is the gap a regulator asks about. Sovereign AI Platform closes it.

The four guarantees

Each layer independent. None sufficient alone.

01

Server-side authorization

Policies stored as data, not code. Every read and write path is intercepted before any database operation. Deny rules outrank allow rules. Permissions are scoped by tenant, team, or individual ownership.
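The deny-over-allow rule above can be sketched in a few lines. This is an illustrative model only; the `Rule` fields and `decide` function are assumptions for the sketch, not the platform's actual policy schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    effect: str    # "allow" or "deny"
    role: str      # role the rule applies to
    action: str    # e.g. "read", "write"
    resource: str  # resource type, e.g. "inference_log"

def decide(rules, role, action, resource):
    """Return True only if some allow matches and no deny matches."""
    matches = [r for r in rules
               if r.role == role and r.action == action and r.resource == resource]
    if any(r.effect == "deny" for r in matches):
        return False  # a matching deny always wins, regardless of allows
    return any(r.effect == "allow" for r in matches)

rules = [
    Rule("allow", "analyst", "read", "inference_log"),
    Rule("deny",  "analyst", "read", "inference_log"),
]
print(decide(rules, "analyst", "read", "inference_log"))  # → False: deny outranks allow
```

Because rules are plain data, the same evaluation applies uniformly to every read and write path before any database operation runs.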

02

Hard tenant isolation

Authenticated identity is verified against the claimed tenant before a query is constructed. Schema-level separation in production. The audit log uses denormalized tenant identifiers, so entries survive tenant deletion and retention requirements are still met.

03

Dual-attested audit log

Every inference event is signed by two physically separate servers using NIST FIPS-204 post-quantum signatures. Every audit row carries a hash chain over the previous row's bytes. Forging an entry requires compromising both hosts and breaking the chain at every subsequent row.
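The hash chain works like this: each row stores a digest of the previous row's digest plus its own bytes, so editing any row invalidates every row after it. A minimal sketch, assuming a SHA-256 chain over canonically serialized rows (the real row format and serialization are not specified here):

```python
import hashlib, json

def chain_hash(prev_hash: str, row: dict) -> str:
    # Digest covers the previous link plus this row's canonical bytes.
    payload = prev_hash.encode() + json.dumps(row, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

GENESIS = "0" * 64
rows = [{"event": "inference", "seq": i} for i in range(3)]

# Writing: each new row carries a link derived from its predecessor.
h = GENESIS
for row in rows:
    h = chain_hash(h, row)
    row["chain"] = h

# Verifying: recompute from genesis; a tampered row breaks every later link.
h = GENESIS
for row in rows:
    h = chain_hash(h, {k: v for k, v in row.items() if k != "chain"})
    assert h == row["chain"]
print("chain intact")
```

This is why forgery requires breaking the chain at every subsequent row, not just the row being forged.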

04

On-chain anchor

Once an hour the platform commits a Merkle root of the audit log to a public blockchain. Your auditor independently rebuilds the tree from the rows, recomputes the root, and verifies on-chain. Tampering becomes mathematically detectable, not vendor-attested.
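Rebuilding the Merkle root is mechanical, which is what makes the check auditor-friendly. A sketch of the recomputation, assuming SHA-256 leaves and duplicate-last-node padding on odd levels (the platform's exact pairing rules may differ):

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    # Hash each audit row to a leaf, then pair-and-hash up to a single root.
    level = [hashlib.sha256(x).digest() for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

rows = [b"row-1", b"row-2", b"row-3"]
root = merkle_root(rows)
# An auditor recomputes this root from the raw rows and compares it with
# the value anchored on-chain; any altered row produces a different root.
print(root.hex())
```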

How a request flows

One request. Two signatures. One on-chain anchor.

  1. Authentication. JWT or API key. Identity is verified before any further processing.
  2. Tenant check. The authenticated identity must belong to the claimed tenant, or the request is rejected with no database access.
  3. Policy decision. The policy engine returns allow or deny based on role, scope, and resource. Both outcomes are logged.
  4. Inference on private hardware. The prompt is sent to your dedicated private compute. The response is returned with a canonical receipt.
  5. Dual signing. Two physically separate servers each sign the canonical receipt with independent ML-DSA-87 keypairs.
  6. Audit row. The event is written to an append-only log with both signatures, both pubkeys, and a hash chain link to the previous row.
  7. Hourly anchor. A scheduled job builds a Merkle root of the new rows and commits it to a public blockchain. Your auditor verifies offline.
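Step 5 only works if both signing hosts see byte-identical input, which is what "canonical receipt" means. A sketch of that canonicalization idea; the field names are illustrative, and since Python's standard library has no FIPS-204 / ML-DSA-87 implementation, a real deployment would sign these bytes with that scheme rather than merely hash them:

```python
import hashlib, json

def canonical_receipt(model: str, prompt: str, response: str) -> bytes:
    receipt = {
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    # sort_keys + fixed separators give one unambiguous byte encoding,
    # so both hosts independently derive identical bytes to sign.
    return json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()

a = canonical_receipt("model-x", "prompt", "response")
b = canonical_receipt("model-x", "prompt", "response")
assert a == b  # deterministic: both hosts sign the same bytes
```

Binding hashes of the prompt and response, rather than the raw text, lets the receipt prove integrity without the log itself storing sensitive content.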

Compliance mapping

One platform. Four regulator mappings.

Each regulator's audit log requirement is satisfied by the same underlying architecture. Procurement teams can map requirements line by line.

Regulator / framework | Specific requirement | Platform feature that satisfies it
HIPAA Security Rule | Audit controls (§164.312(b)) — record and examine activity in systems that contain ePHI. | Append-only, hash-chained audit log; per-tenant isolation; long-term R2 archival with object retention.
SOC 2 Type II — CC7 | System monitoring, change management, evidence of detection and response. | Every privileged action is logged with a cryptographic chain link; chain verification endpoint exposed for auditors.
OCC SR 11-7 / Model Risk | Documented evidence of model version and inputs for every model decision. | Receipt binds model identifier and content hash; signed by two independent hosts; on-chain timestamp.
EU AI Act, Art. 12 | Automatic recording of events for high-risk AI systems, retained for traceability. | Per-event log with actor, time, outcome, model, tokens; tenant-scoped retention; auditor-verifiable chain.
NIST AI RMF — Manage 4.1 | Mechanisms to ensure system actions are traceable and verifiable post-hoc. | Dual-host post-quantum signatures (FIPS-204); offline verifier published.
FedRAMP / IL-4 (roadmap) | Tamper-evident logs, on-shore data residency, FIPS-validated cryptography. | FIPS-204 ML-DSA-87 signatures; on-shore enterprise-grade GPU hardware; no third-country routing.

Pricing

Predictable monthly pricing. No usage cliffs.

SMB

$2,500/mo

  • Hosted private inference
  • OpenAI-compatible API
  • Single workspace
  • No audit-grade receipts
  • No on-chain anchor
Contact sales

Mid-Market

$6,000/mo

  • Everything in SMB
  • Up to 10 tenants
  • SSO (OIDC / SAML)
  • Audit-grade receipts (add-on)
  • No on-chain anchor
Contact sales

Regulated

$18,000/mo

  • Everything in Mid-Market
  • ≥ 7-year R2 retention
  • SOC 2 Type II letter
  • HIPAA BAA + named DPO
  • Dedicated hardware partition
Contact sales

Annual contract pricing on request. Volume tiers above $50K/mo.

Trust signals

What you can verify in five minutes.

Every platform claim above corresponds to a public endpoint or a published artifact. Procurement teams can verify without an account, before any contract is signed.

  • GET /v1/pubkeys — published platform pubkeys.
  • POST /v1/verify — independently verifies any receipt.
  • GET /audit/anchors — every Merkle root with its on-chain transaction hash.
  • GET /audit/verify-chain — recomputes the chain and reports the first broken link, if any.
  • FIPS-204 ML-DSA-87 specification — no proprietary crypto.
  • Public blockchain anchor contract source — no proprietary on-chain logic.
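The chain check behind GET /audit/verify-chain can also be run offline against exported rows. A sketch of what "reports the first broken link" means, assuming the SHA-256 chain format used above; the real endpoint's row schema and response shape are not specified here:

```python
import hashlib, json

def first_broken_link(rows: list[dict], genesis: str = "0" * 64):
    """Recompute the chain from genesis; return the index of the first
    row whose stored link does not match, or None if the chain is intact."""
    prev = genesis
    for i, row in enumerate(rows):
        body = {k: v for k, v in row.items() if k != "chain"}
        payload = prev.encode() + json.dumps(body, sort_keys=True).encode()
        expected = hashlib.sha256(payload).hexdigest()
        if row.get("chain") != expected:
            return i
        prev = expected
    return None
```

Because the check needs only the rows and the genesis value, an auditor can run it on an air-gapped machine, without contacting the platform.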

Request enterprise access

Tell us where you want to deploy.

You'll get a response within one business day, from a named human at ZC Technologies, not a queue.