Verify what AI is. Reveal who's behind it only when it matters.
AIP is not a global identity system. It is a verification and accountability layer with controlled, proportional disclosure — and a draft concept for the infrastructure AI is missing.
"People already know internet traffic is tracked. AI traffic is the same conversation — it's just being whispered instead of addressed."
// The concern this draft is trying to name publicly
01 / The Problem
What happens when you can't tell what an AI is?
Right now, any AI can claim authority it doesn't have — and there is no protocol-level way to challenge that claim.
// Real scenario
A consumer asks an AI for financial advice. The system presents itself as "regulated." There is no standard way to verify that claim — not at the protocol level, not in real time, not without trusting the operator's word.
⚠️ The Spoofing Problem: AI can claim any authority
An agent can present itself as a "regulated medical assistant" or "licensed financial advisor" with no mechanism to verify that claim. Current tools check whether a signature is valid — not whether the claimed class is real.
TCP/IP routes packets. TLS encrypts connections. DNS resolves names. None of these tell you anything about the AI generating the content or taking the action. That layer doesn't exist yet.
IP → TCP → TLS → HTTP → ??? → AI
🔒 The Design Paradox: accountability vs. surveillance
Full public identity exposes operators unfairly. Total anonymity shields bad actors. The answer is a graduated disclosure model — identity revealed only when proportional to the harm, under quorum control.
verify → escrow → reveal (tiered)
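The verify → escrow → reveal progression can be sketched as a small policy table. Everything below is hypothetical: the tier names come from this page, but the harm levels, quorum sizes, and disclosure contents are invented only to illustrate what "proportional to the harm" could mean in code.

```python
# Hypothetical sketch of graduated disclosure. Tier contents, harm levels,
# and quorum sizes are illustrative, not part of any published AIP spec.
from dataclasses import dataclass

@dataclass(frozen=True)
class DisclosureTier:
    reveals: str          # what becomes visible at this tier
    quorum_required: int  # shard holders who must approve

# Ordered from least to most invasive.
TIERS = {
    "verify": DisclosureTier("public class verdict only", quorum_required=0),
    "escrow": DisclosureTier("sealed identity record opened to auditors", quorum_required=3),
    "reveal": DisclosureTier("operator identity disclosed to a regulator", quorum_required=3),
}

def disclosure_for(harm_level: str) -> DisclosureTier:
    """Map an assessed harm level to the minimum proportional tier."""
    mapping = {"none": "verify", "suspected": "escrow", "adjudicated": "reveal"}
    return TIERS[mapping[harm_level]]
```

The point of the table is the default: with no adjudicated harm, nothing beyond the public verdict is ever disclosed.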
02 / The Protocol
Four layers. One accountability stack.
The stack starts at the model itself — a signal baked in at generation — and builds upward through verification, attribution, and audit. Each layer depends on the one below it. Inspired by protocol layers like HTTP, TLS, and DNS.
Layer 0 — Foundation
Signal Generation
The provenance signal is baked into the model at generation — not applied after the fact. AIP is signal-agnostic: DistSeal, C2PA, cryptographic signatures, or any verifiable standard. Without this layer, Layer 1 has nothing to read.
Signal-Agnostic by Design
AIP does not require any specific signal method. It defines what a verifiable provenance signal must do — not how it is generated. DistSeal (latent-space watermarking) and C2PA are two well-researched examples. The protocol reads the signal regardless of origin.
Why in-model matters
Post-hoc pixel watermarks: ~63ms, bypassable with one line of code.
Latent-space distillation: ~3ms, weight-native, survives fine-tuning when decoder-distilled. The signal L1 reads must exist before L1 can function.
layer: 0              // signal generation
signal_type: agnostic
example: latent_decoder_distillation
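"Signal-agnostic" can be expressed as a minimal interface: AIP only needs a signal to be detectable and bindable to a system identity, regardless of how it was embedded. This is an illustrative sketch — the class and method names below are invented, not an official AIP API.

```python
# Illustrative only: names are invented to show what "signal-agnostic"
# means in practice. AIP defines what a signal must do, not how.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass(frozen=True)
class SignalReadout:
    present: bool      # was a provenance signal detected at all?
    system_id: str     # identifier the signal binds to (empty if absent)
    confidence: float  # detector confidence in [0, 1]

class ProvenanceSignal(ABC):
    """What any L0 signal must provide so that L1 has something to read."""

    @abstractmethod
    def detect(self, artifact: bytes) -> SignalReadout: ...

class LatentWatermarkSignal(ProvenanceSignal):
    """Placeholder for a weight-native, decoder-distilled watermark (e.g. DistSeal)."""
    def detect(self, artifact: bytes) -> SignalReadout:
        # A real detector would run the distilled decoder over the artifact.
        raise NotImplementedError

class C2PAManifestSignal(ProvenanceSignal):
    """Placeholder for C2PA-style signed metadata attached to the artifact."""
    def detect(self, artifact: bytes) -> SignalReadout:
        # A real implementation would parse and validate the C2PA manifest.
        raise NotImplementedError
```

Both placeholders satisfy the same contract, which is all Layer 1 depends on.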
Layer 1
Canonical Class Verification
Anyone can verify what an AI system publicly claims to be — without accessing private identity.
What L1 answers
Is the signature valid? Does the claimed class match the registered class? Is the manifest current? Is the system operating within its declared authority? Has it been spoofed, replayed, or downgraded?
Five checks. Machine-readable verdict. Under 100ms.
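The five checks can be sketched as one pure function. The envelope fields mirror the AIP-ACTION example on this page; the registry record shape and the way spoof risk is derived are assumptions for illustration.

```python
# Sketch of an L1 verifier. Envelope fields mirror the AIP-ACTION example;
# the registry record shape and check ordering are assumptions.
from dataclasses import dataclass

@dataclass
class RegistryRecord:
    canonical_class: str
    current_manifest_hash: str
    allowed_actions: set

def l1_verdict(envelope: dict, record: RegistryRecord,
               signature_valid: bool) -> dict:
    """Run the five L1 checks and return a machine-readable verdict."""
    checks = {
        # 1. Is the signature valid?
        "signature_valid": signature_valid,
        # 2. Does the claimed class match the registered class?
        "class_match": envelope["declared_class"] == record.canonical_class,
        # 3. Is the manifest current?
        "manifest_match": envelope["manifest_hash"] == record.current_manifest_hash,
        # 4. Is the action within the system's declared authority?
        "scope_match": envelope["action_type"] in record.allowed_actions,
    }
    # 5. Spoof / replay / downgrade risk: any failed structural check raises it.
    checks["spoof_risk"] = "low" if all(checks.values()) else "high"
    checks["verdict"] = "accept" if checks["spoof_risk"] == "low" else "reject"
    return checks
```

Everything here is a lookup or an equality test, which is what makes a sub-100ms public verdict plausible.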
Layer 2
Identity Escrow
Who stands behind this AI? Revealed only under lawful, auditable conditions. No single entity can unmask an operator alone — ever.
Quorum-gated. Not centralized.
Identity is split across independent shard holders. Unmasking requires a threshold quorum plus a valid legal or safety trigger. This is not a kill switch. It is a due-process gate with an immutable audit trail. No registry, no provider, no government acts alone.
Registry · Provider · Escrow Trustee · Regulator → 3-of-5 to unmask
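One standard way to realize a 3-of-5 gate is textbook Shamir secret sharing over a prime field: the identity key is split so that any three shard holders can reconstruct it and any two learn nothing. This is a generic cryptographic sketch, not an AIP mandate — the protocol only requires the threshold property.

```python
# Textbook Shamir secret sharing: one standard way to realize
# "3-of-5 to unmask". Parameters are illustrative, not AIP-specified.
import secrets

P = 2**127 - 1  # prime field for the shares

def split(secret: int, n: int = 5, k: int = 3) -> list[tuple[int, int]]:
    """Split `secret` (< P) into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

Fewer than k shares interpolate a different polynomial, so no coalition below the quorum can act alone — which is the "no single party unmasks" property in miniature.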
Layer 3
Transparency Log
The log proves that something happened — without exposing what was protected.
Split Log Design
Two parallel logs: a public event log (proves something happened, when, and with what verdict) and a protected detail log (accessible only to authorized auditors). Linked cryptographically. Sealed disclosures supported for active investigations.
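"Linked cryptographically" can be sketched with hash commitments: each public entry carries a digest of its protected detail record plus the previous entry's hash, so an auditor can later prove a detail matches the public record without the public ever seeing it. Minimal sketch; the record fields are assumptions.

```python
# Minimal sketch of a split transparency log. The public log carries only
# a hash commitment to each protected detail record; fields are assumed.
import hashlib
import json

def _h(data: bytes) -> str:
    return hashlib.sha3_256(data).hexdigest()

def append(public_log: list, detail_log: list, event: dict, detail: dict) -> None:
    """Append one verification event to both logs, linked by commitment."""
    detail_bytes = json.dumps(detail, sort_keys=True).encode()
    detail_log.append(detail)
    prev = public_log[-1]["entry_hash"] if public_log else "genesis"
    entry = {
        "event": event,                         # e.g. {"verdict": "reject"}
        "detail_commitment": _h(detail_bytes),  # proves the detail without exposing it
        "prev": prev,                           # hash chain: tamper-evidence
    }
    entry["entry_hash"] = _h(json.dumps(entry, sort_keys=True).encode())
    public_log.append(entry)

def audit_link(public_entry: dict, detail: dict) -> bool:
    """Authorized auditor check: does this detail match the public commitment?"""
    return public_entry["detail_commitment"] == _h(
        json.dumps(detail, sort_keys=True).encode())
```

The public chain proves that something happened and when; only the commitment, never the detail, is exposed.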
Who gets to issue identities? Who can force disclosure? What happens when jurisdictions conflict?
Constitutional Rules
No single-party unmasking. Minimum disclosure only. Immutable logging. Proportional response. Cross-jurisdiction conflict defaults to the stronger protection rule — not the weakest privacy regime. Federated, not monopolized.
The chain starts inside the model and ends with an auditable accountability record.
L0 Signal (in-model mark, weight-native) → System (AI artifact: model, agent) → Action (material act: contract, call) → L1 Verdict (accept / reject, public, <100ms) → L2 Escrow (break glass, quorum required) → L3 Log (audit trail, immutable)
Example: AIP-ACTION Envelope
What a signed material action looks like in the protocol. This is what a receiving platform verifies before accepting an AI-generated act.
// AIP-ACTION — signed envelope attached to every material AI action
{
  "proto": "AIP/1.0",
  "system_id": "aip:sys:7f2c9d...",
  "declared_class": "financial-advisor",
  "action_type": "investment-recommendation",
  "manifest_hash": "sha3-512:abc123...",
  "issuer_did": "did:example:provider-org",
  "policy_class": "regulated-high-impact",
  "signal_type": "latent-watermark",  // agnostic — any verifiable signal accepted
  "issued_at": "2026-04-11T20:14:03Z",
  "signature": { "alg": "ML-DSA-65", "sig": "base64..." }
}
// L0 signal read → Layer 1 verdict returned in <100ms from public registry
{
  "signature_valid": true,
  "manifest_match": true,
  "class_match": "CLASS_MISMATCH",  // declared: judicial-agent / canonical: marketing-bot
  "scope_match": false,
  "spoof_risk": "high",
  "verdict": "reject"
}
04 / What This Is Not
Accountability without surveillance
The goal isn't a global ledger of who built what. It's proportional accountability — public enough to trust, private enough to protect.
❌ Not this
🔍 A surveillance database that publicly exposes every AI operator's identity
⛓️ Every token or action written to a public blockchain
🏛️ One centralized authority with a master key — or a kill switch for AI
🔒 A DistSeal wrapper — AIP is signal-agnostic and works with any verifiable provenance method
✓ This
🔑 Public class verification without exposing operator identity by default
⚖️ Identity revealed only under threshold quorum + valid legal or safety trigger
🌐 Federated root model — multiple authorities, no monopoly, no single point of failure
📜 A starting point — naming a concern the industry is whispering but not addressing
05 / Where It Applies
High-stakes AI needs verifiable identity
The need is sharpest where AI decisions carry real consequences and where misrepresentation of authority causes real harm.
🏦 Finance
AI agents executing trades, giving advice, or underwriting risk need verifiable class and scope. Who authorized this action?
🏥 Healthcare
Medical AI must be verifiably registered in the right class. A consumer chatbot cannot claim clinical authority without detection.
🤖 Agent Marketplaces
When agents hire sub-agents or act on behalf of humans in multi-step workflows, lineage and authority need to be traceable end-to-end.
⚖️ Legal & Compliance
Contracts signed by agents, regulatory filings, and compliance decisions need an auditable identity trail that holds up in court.
06 / Research Foundation
Built on evidence, not assumptions
AIP's architecture draws on a 104-source research corpus spanning six domains — from latent-space watermarking and agent communication protocols to post-quantum cryptography and 6G semantic networking.
104 primary sources reviewed across six research domains to inform the protocol architecture and technical whitepapers.
Architecture Visual: Four Layers, One Accountability Stack — L0 Signal → L1 Verdict → L2 Escrow → L3 Log
Origin Story
From Stack ID to AIP
AIP didn't start as a four-layer protocol. It started as a simpler question: what if every AI had a VIN? The original Stack ID Registry design was written in January 2025. What follows is how the idea evolved.
January 2025
Stack ID Registry
A "VIN system for AI" — unique IDs for every module, centralized registry, lifecycle tracking, compliance reporting. The core insight: AI needs identity infrastructure the way vehicles need registration.
104-source corpus reviewed across six domains. Key realizations: centralized registries create single points of failure, identity must be privacy-preserving, and the signal must be baked into the model — not bolted on after.
The Agentic AI Foundation forms. Pax Silica Initiative launches. The industry begins naming the problem publicly. AIP's timing shifts from speculative to urgent.
↓ validated: the governance gap is real and acknowledged
April 2026
AIP Draft Architecture
Four-layer accountability stack. Signal-agnostic L0 foundation. Quorum-gated escrow. Split transparency logs. No single party can unmask an operator. The Stack ID concept, rebuilt for a decentralized world.
07 / Engage
This is a draft. Serious people welcome.
Not a product. Not a finished standard. A public architecture concept for a concern that needs to be addressed — not whispered. We want reviewers who will challenge the assumptions.