
The SealVera Compliance Guide

This guide is written for compliance officers, risk managers, legal teams, and auditors — not engineers. It tells you what SealVera captures, where to find it, how to use it when an audit arrives, and what it proves.

You do not need to understand how AI models work to use this guide. You need to understand what questions you will be asked and where the answers live.

The core promise: If an AI decision made by your organization is ever challenged — by a regulator, in court, or by a customer — SealVera is the record that makes your defense possible. This guide shows you how to use it.

What SealVera Does

Your organization uses AI agents to make or assist with consequential decisions — loan approvals, insurance claims, hiring screens, fraud flags, medical authorizations. SealVera sits alongside these agents and captures a complete, tamper-evident record of every decision they make.

Think of it as the difference between a security camera and a sign that says "CCTV in use." SealVera is the camera. It records what actually happened, not what you believe happened.

What it captures for every decision

  • The exact inputs — The data the AI saw when it made the decision: credit score, claim amount, applicant information. Exactly as submitted, not reconstructed later.
  • The factors it weighed — Each signal the AI considered, the actual value it observed, and whether it was a risk or safe indicator. Not a summary — the actual reasoning chain.
  • The outcome — The decision: APPROVED, REJECTED, FLAGGED, or any other outcome your system uses.
  • A cryptographic signature — Mathematical proof that the record has not been altered since it was created. Independently verifiable by any third party.
  • A chain link — Evidence that no records have been deleted. If any entry is removed, the chain breaks and the gap is immediately detectable.
  • Timestamp — Exactly when the decision was made, server-stamped at the moment of logging.

What it watches over time

  • Approval rates — If your AI's approval rate suddenly drops or spikes, you are alerted before a customer or regulator notices.
  • Decision patterns — If the mix of outcomes shifts significantly from historical norms, SealVera flags it.
  • Confidence levels — If the AI becomes less certain than usual, that is a warning sign. SealVera surfaces it.
  • Activity volume — Unusual spikes or drops in decision volume are detected automatically.
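For readers who want a concrete picture of what "behavioral monitoring" means, here is a minimal sketch of an approval-rate comparison against a baseline. The threshold, window, and record fields are illustrative assumptions, not SealVera's actual algorithm.

```python
# Illustrative sketch of approval-rate drift detection.
# The 15-point threshold and the record fields are assumptions
# for illustration, not SealVera's actual monitoring logic.

def approval_rate(decisions):
    """Fraction of decisions with an APPROVED outcome."""
    if not decisions:
        return 0.0
    approved = sum(1 for d in decisions if d["outcome"] == "APPROVED")
    return approved / len(decisions)

def detect_rate_shift(baseline, recent, threshold=0.15):
    """Flag when the recent approval rate deviates from the baseline by more than the threshold."""
    shift = abs(approval_rate(recent) - approval_rate(baseline))
    return shift > threshold, shift

baseline = [{"outcome": "APPROVED"}] * 80 + [{"outcome": "REJECTED"}] * 20  # 80% approval
recent   = [{"outcome": "APPROVED"}] * 55 + [{"outcome": "REJECTED"}] * 45  # 55% approval

alert, shift = detect_rate_shift(baseline, recent)
# alert is True: a 25-point drop exceeds the 15-point threshold
```

The real system maintains this baseline per agent and per time window automatically; the point of the sketch is that the comparison is mechanical and auditable, not a subjective judgment.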

Your First Login

Go to your SealVera dashboard URL — provided by your engineering team. Log in with the credentials they created for you.

When you first arrive, you will see:

  • The left sidebar — Lists every AI agent your organization has connected. Click an agent name to filter everything to that agent's decisions.
  • The main area — The decision log. Every AI decision, newest first. Each card shows the agent name, action, decision outcome, and timestamp.
  • The right panel — Alert rules and alert channels. This is where you configure what gets flagged and where notifications go.
  • The top bar — Date filters, decision type filters, and the Compliance Report button.

Start here: Click on any decision card to expand it. You will see the full record — every factor the AI considered, every value it observed, the outcome, and the cryptographic signature. This is what you show an auditor.

Reading the Dashboard

The decision log

Each row in the decision log represents one AI decision. The colored left border tells you the outcome at a glance:

  • Green border — Positive outcome (APPROVED, ADVANCE, CLEAR)
  • Red border — Negative outcome (REJECTED, DENIED, DECLINED)
  • Amber border — Flagged for review (FLAGGED, PENDING_REVIEW, HOLD)

Evidence badges

When you expand a decision card, you will see a badge next to the evidence section:

  • Agent-provided — The AI itself returned this reasoning. It is what the model actually computed, traceable directly to the input data. This is the highest-fidelity evidence for compliance purposes.
  • SealVera inferred — SealVera reconstructed the reasoning from the output. Useful for monitoring but carries less legal weight than agent-provided evidence.

The Risk tab

Click "Risk" in the section tabs to see behavioral anomalies. Each card represents an automated detection — a time when your AI behaved differently from its established baseline. Cards show the severity, the agent, what changed, and when it was detected.

You can acknowledge anomaly cards after review. Acknowledged anomalies are logged with a timestamp — evidence that your team reviewed and responded to the alert.

The Traces tab

When multiple AI agents process the same request — for example, a fraud screener, then a risk scorer, then a final approval agent — SealVera links them into a single trace. The Traces tab shows these multi-agent decision chains. Click any trace to see the full sequence: which agent made which decision, in what order, with what inputs.

Understanding a Decision Record

When you expand a decision card, you see the full record. Here is what each section means.

The header

Agent name, action performed, decision outcome, and timestamp. This is the summary — what happened, when, and who made the call.

The evidence trail

This is the most important section for compliance purposes. Each row is a factor the AI considered:

  • Factor — The field or signal the AI looked at (e.g., "credit_score", "claim_amount", "prior_history")
  • Value — The actual value it observed from your data (e.g., "748", "$14,200", "clean")
  • Signal — Whether that factor pushed the decision toward risk or safety
  • Explanation — One sentence from the AI explaining why this factor mattered

Every factor in the evidence trail is traceable to the actual input data in the same record. A compliance officer or auditor can verify each claim independently.
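The traceability claim above can be checked mechanically. The sketch below shows the general shape of a record and a check that every cited factor matches the logged input; the field names are assumptions for illustration, so confirm the exact schema of your deployment with your engineering team.

```python
# Illustrative shape of a decision record's evidence trail.
# Field names are assumptions, not a documented SealVera schema.

record = {
    "input": {"credit_score": 748, "claim_amount": 14200, "prior_history": "clean"},
    "outcome": "APPROVED",
    "evidence": [
        {"factor": "credit_score", "value": 748, "signal": "safe",
         "explanation": "Score is well above the approval floor."},
        {"factor": "claim_amount", "value": 14200, "signal": "risk",
         "explanation": "Amount is above the median for this product."},
    ],
}

def evidence_is_traceable(record):
    """Confirm every evidence factor cites the actual input value the AI saw."""
    return all(
        record["input"].get(e["factor"]) == e["value"]
        for e in record["evidence"]
    )
```

Because the inputs and the reasoning live in the same signed record, an auditor can run exactly this kind of cross-check without trusting anyone's summary.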

Verify button

Click "Verify" on any decision record to confirm the cryptographic signature is valid. A green result means the record is exactly as it was when logged — it has not been modified. A red result means the record was altered after logging; treat it as a security incident and escalate immediately.

Replay button

Click "Replay" to re-run the AI decision with the original inputs. The result shows whether the decision is consistent — whether the AI produces the same outcome when given the same data. This is useful when challenging a decision: you can demonstrate the AI's reasoning is stable and reproducible.
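Conceptually, a replay is just "same inputs in, compare the outcome that comes out." The sketch below illustrates that idea with a hypothetical stand-in decision function; it is not a SealVera API.

```python
# Illustrative replay check. `decide` is a hypothetical stand-in
# for your AI agent's decision function, used only for illustration.

def decide(inputs):
    """Toy deterministic decision rule (not a real model)."""
    return "APPROVED" if inputs["credit_score"] >= 700 else "REJECTED"

def replay_is_consistent(record, decision_fn):
    """Re-run the decision with the original logged inputs and compare outcomes."""
    return decision_fn(record["input"]) == record["outcome"]

record = {"input": {"credit_score": 748}, "outcome": "APPROVED"}
# Same inputs, same outcome: the decision is reproducible
```

A consistent replay is strong evidence that the logged outcome was not a fluke; an inconsistent one tells you the model or its configuration has changed since the decision was made.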

When You Receive an Alert

SealVera sends alerts when your AI agents behave differently from their established baseline. When you receive one, here is what to do.

Types of alerts you may receive

  • Approval rate shift — Your AI is approving or rejecting at a significantly different rate than normal. Check: review recent decisions for the affected agent; look for a change in inputs or a model update.
  • Confidence drop — The AI is less certain than usual; decisions are closer to 50/50 than the historical norm. Check: this often indicates the AI is seeing inputs it has not encountered before, or the underlying model changed.
  • Unusual activity — Decision volume is much higher or lower than normal for the time of day. Check: a volume spike may indicate a runaway process; silence may indicate the agent is down.
  • New outcome type — The AI produced a decision outcome that has not appeared before. Check: confirm this was an intentional change; if not, investigate what changed in the system.
  • Rule-based alerts — A specific condition you configured was triggered, e.g. a high-value denial or a consecutive rejection streak. Check: review the specific decisions that triggered the rule and take appropriate action per your internal playbook.
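To make the rule-based category concrete, here are minimal sketches of the two example rules mentioned above. The thresholds and field names are illustrative assumptions, not SealVera defaults; your engineering team configures the actual rules.

```python
# Illustrative rule-based alerts. The $10,000 limit, 5-decision streak,
# and field names are assumptions for illustration only.

def high_value_denial(decision, limit=10000):
    """Flag a rejection whose amount exceeds the configured limit."""
    return decision["outcome"] == "REJECTED" and decision["amount"] > limit

def rejection_streak(decisions, streak=5):
    """Flag when the most recent `streak` decisions are all rejections."""
    recent = decisions[-streak:]
    return len(recent) == streak and all(d["outcome"] == "REJECTED" for d in recent)

decisions = [{"outcome": "REJECTED", "amount": 14200}] * 5
# Both rules fire here: a $14,200 denial, and five rejections in a row
```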

Documenting your response

When you review an anomaly alert, click Acknowledge on the alert card in the Risk tab. This records:

  • That the alert was reviewed
  • When it was reviewed
  • That a human saw it

This acknowledgment log is visible in your alert history and demonstrates active human oversight of your AI system — a requirement under several regulatory frameworks.

Do not ignore alerts. An unacknowledged alert is evidence that your monitoring detected an anomaly and no one acted on it. That is more damaging in a regulatory context than the anomaly itself.

Responding to an Audit Request

This is the scenario SealVera was built for. A regulator, auditor, or legal team contacts you and asks for records related to your AI decisions. Here is your playbook.

Common request
"We need a complete record of all AI-assisted decisions your system made between January 1 and March 31, including the reasoning behind each one."
Response time with SealVera: under 10 minutes. Set the date filter, click Compliance Report. The report includes every decision in that window, the full evidence trail for each, and cryptographic verification that records are unaltered.
Common request
"Show us the specific decision record for customer ID 4421, including what data your AI used and why it reached the conclusion it did."
Response time with SealVera: under 2 minutes. Search by applicant ID or customer ID in the dashboard. Open the decision record. Export or screenshot the full evidence trail showing every factor, actual value, and explanation.
Common request
"How do you know your records haven't been altered? Can you prove these are the original decision records?"
Response: cryptographic proof. Every record is RSA-signed at the time of logging. Click Verify on any entry — or run the chain integrity check for the entire agent — to produce a verification report showing the signature is valid and no entries have been deleted.
Common request
"Were there any anomalies or unexpected behaviors in your AI systems during this period? What did you do about them?"
Response: the alert and anomaly history. Navigate to the Risk tab filtered to the relevant date range. All detected anomalies are logged with timestamps. All acknowledged anomalies show when your team reviewed them. This demonstrates active, continuous oversight.

Generating a Compliance Report

1. Set the date range

Use the From and To date filters at the top of the dashboard. If the request covers a specific agent only, click that agent in the left sidebar first.

2. Click Compliance Report

The button is in the top right of the dashboard header. A report covering the selected date range and agent filter will open in a new tab.

3. Review what is included

The report contains: all decisions in the window, full evidence trail for each, per-agent summary statistics, RSA signature verification results, chain integrity status, and retention coverage. Read the report before sending it — confirm the date range and agent coverage are correct.

4. Save and deliver

Use your browser's Print to PDF to save the report. Deliver to the requesting party. The HTML format is readable by any browser and acceptable to most regulatory bodies. For legal teams requiring structured data, your engineering team can export JSONL or CSV via the API.
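If your legal team does request structured data, a JSONL export flattens naturally into CSV. The sketch below assumes a JSONL format with one decision object per line; the field names are assumptions about the export, so confirm the actual schema with your engineering team.

```python
# Sketch: flatten a JSONL decision export to CSV for a legal team.
# The JSONL field names are assumptions about the export format.
import csv
import io
import json

jsonl_export = """\
{"agent": "loan_screener", "outcome": "APPROVED", "timestamp": "2025-01-04T10:22:00Z"}
{"agent": "loan_screener", "outcome": "REJECTED", "timestamp": "2025-01-04T10:23:10Z"}
"""

rows = [json.loads(line) for line in jsonl_export.splitlines()]
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["agent", "outcome", "timestamp"])
writer.writeheader()      # header row: agent,outcome,timestamp
writer.writerows(rows)    # one CSV row per decision
# out.getvalue() now holds the CSV text, ready to save or attach
```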

Generate reports proactively. Do not wait for an audit request. Generate a quarterly compliance report and archive it. If an audit arrives, you already have the documentation — you are not scrambling to produce it under time pressure.

Proving Record Integrity

Two questions auditors frequently ask:

  1. How do I know this individual record has not been altered?
  2. How do I know records have not been deleted?

SealVera answers both.

Record-level verification

Every decision record is cryptographically signed at the moment of logging using RSA-2048. The signature covers the input data, output, reasoning steps, agent name, and timestamp. If any field changes after logging — even one character — the signature verification fails.

To verify any record: open it in the dashboard, click Verify. A green result confirms the record is unaltered. The public key used for verification is available at /api/public-key — any third party can independently verify your records without depending on SealVera.
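Why does changing even one character break verification? Because the signed material is the record serialized in a canonical form, so any field change produces different bytes. The sketch below illustrates that field-sensitivity using only a SHA-256 digest from the standard library; SealVera's production scheme signs the digest with RSA-2048, which this simplified illustration deliberately omits.

```python
# Why changing any field breaks verification: a stdlib illustration.
# SealVera signs records with RSA-2048; this sketch substitutes a bare
# SHA-256 digest to show the same field-sensitivity without extra libraries.
import hashlib
import json

def digest(record):
    """Hash the record in canonical form (sorted keys, fixed separators)."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

record = {"agent": "loan_screener", "outcome": "APPROVED", "credit_score": 748}
original = digest(record)

record["outcome"] = "REJECTED"   # alter one field after "logging"
tampered = digest(record)
# tampered != original: verification fails on any change, however small
```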

Collection-level verification (chain integrity)

Individual record signing protects against modification. But what about deletion? If someone deleted unfavorable records, individual signatures would still be valid on the remaining records.

SealVera chains every record to the previous one using a hash. If any record is removed, the chain breaks — and that break is immediately detectable. To run a chain integrity check for an agent, navigate to that agent in the dashboard and click Chain integrity. The result shows: total records checked, any detected gaps, and whether the chain is intact.
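The deletion-detection property follows from how the chain is built: each record's hash covers the previous record's hash, so removing any entry breaks the next link. Here is a simplified sketch of the principle, not SealVera's exact record format.

```python
# Illustrative hash chain showing why a deleted record is detectable.
# A simplified sketch of the principle, not SealVera's exact format.
import hashlib

def link(prev_hash, payload):
    """Each record's hash covers its payload AND the previous record's hash."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], "GENESIS"
    for p in payloads:
        h = link(prev, p)
        chain.append({"payload": p, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def chain_intact(chain):
    """Walk the chain; any deleted or reordered record breaks a link."""
    prev = "GENESIS"
    for rec in chain:
        if rec["prev_hash"] != prev or link(prev, rec["payload"]) != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = build_chain(["decision-1", "decision-2", "decision-3"])
del chain[1]   # delete an unfavorable record
# chain_intact(chain) is now False: the gap is immediately detectable
```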

For auditors who want to verify independently: Your engineering team can provide the RSA public key and the raw record data. Any cryptographer can verify the signatures without access to the SealVera system.

Record Retention

EU AI Act Article 12 requires high-risk AI systems to retain decision records for 10 years. Other regulations have their own retention requirements. SealVera tracks your coverage and tells you exactly where you stand.

Checking your retention coverage

From the dashboard: click the gear icon (top right) → Organization → Usage. You will see your current retention window and coverage days. Your engineering team can also view the full retention status at /api/retention-status.

Retention by plan

  • Free — 30 days retention. Not sufficient for EU AI Act (10 years) or FINRA (6 years).
  • Design Partner — 1 year retention. Partial coverage for EU AI Act and FINRA.
  • Enterprise — Retention configurable up to indefinite. Fully coverable for EU AI Act and FINRA.

Critical: You cannot retroactively create decision records for periods before SealVera was connected. The retention clock starts at your first logged decision. If you are subject to the EU AI Act and your agents are already in production, connect SealVera immediately — every day without logging is a day of records you will never have.

EU AI Act

Enforcement begins: August 2, 2026. The EU AI Act applies to high-risk AI systems — defined broadly as systems used in employment, credit, insurance, healthcare, law enforcement, migration, and administration of justice.

Key requirements and how SealVera covers them

  • Keep logs of all outputs from high-risk AI (Art. 12) — Full: every decision logged automatically
  • Retain records for 10 years (Art. 12(1)) — Enterprise plan: configurable retention
  • Provide transparency about AI decision-making (Art. 13) — Full evidence trail with factor-level reasoning
  • Enable human oversight and detect anomalies (Art. 9, 14) — Behavioral monitoring + alert rules
  • Demonstrate ongoing conformance (Art. 9) — Chain integrity + compliance reports
  • Right to explanation for affected persons (Art. 13) — Per-decision evidence trail, exportable per request

Consult your legal counsel for a determination of whether your specific AI systems qualify as high-risk under the EU AI Act. SealVera provides the technical infrastructure — the legal determination of scope is your organization's responsibility.

FINRA / SEC

Financial services firms using AI in customer-facing or trading decisions must maintain supervisory records. FINRA Rule 4511 requires records to be kept for a minimum of 6 years. SEC Rule 17a-4 has similar requirements for broker-dealers.

What auditors typically request

  • Records of all automated decisions affecting customer accounts
  • Evidence that decisions were made consistently and without bias
  • Records showing human supervision was in place
  • Documentation of any anomalies and how they were addressed

SealVera coverage

  • Decision records with full context — Full: every decision with inputs, reasoning, outcome
  • Tamper-evident records (WORM equivalent) — RSA signatures + hash chain
  • 6-year retention — Enterprise plan: configurable
  • Supervisory evidence (human oversight) — Alert acknowledgment log + behavioral monitoring history
  • Audit trail of system access and changes — Partial: the decision audit trail is complete; system access logs require your existing SIEM

HIPAA

If your AI systems process Protected Health Information (PHI) in making decisions — prior authorizations, clinical screening, patient triage — HIPAA's Security Rule audit controls apply.

Relevant controls

  • Audit controls (§164.312(b)) — Hardware, software, and procedural mechanisms to record and examine access and activity in systems that contain or use PHI.
  • Integrity controls — Protect PHI from improper alteration or destruction.
  • Transmission security — Guard against unauthorized access to PHI transmitted over networks.

SealVera coverage

  • Activity logging for AI decisions involving PHI — Full decision records with tamper-evident signatures
  • Integrity protection — Hash chain + RSA signatures detect any modification
  • Access controls for audit records — Org-scoped API keys: only authorized users access org data
  • Transmission security — TLS encryption in transit; encryption at rest on Enterprise

Business Associate Agreement: If your AI decisions involve PHI, you may require a BAA with SealVera before connecting. Contact us at hello@sealvera.com to discuss.

GDPR Article 22

Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects — and the right to obtain human intervention, express their point of view, and contest the decision.

What "right to explanation" requires in practice

When an individual requests an explanation of an automated decision affecting them, you must provide "meaningful information about the logic involved, as well as the significance and the envisaged consequences." Vague language like "the model assessed you as higher risk" does not meet this standard.

What SealVera provides

For every AI decision logged with a full evidence trail, you have:

  • The specific factors the AI considered
  • The actual values it observed from the individual's data
  • Whether each factor contributed to a favorable or unfavorable outcome
  • A plain-language explanation for each factor

This is exactly what Article 22 requires. You can share a decision record's evidence trail directly with the individual who requests an explanation.

Data minimisation note: Be mindful of what personal data appears in your decision records. SealVera logs whatever your AI agent receives as input. Work with your data protection officer to confirm your input data is appropriately minimised before logging begins.
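One common minimisation approach your engineering team might use is an allowlist applied before inputs reach the logger, so only the fields the decision actually needs ever appear in a record. The field names below are illustrative assumptions.

```python
# Sketch: allowlist input fields before logging so only decision-relevant
# data appears in the record. Field names are illustrative assumptions.

ALLOWED_FIELDS = {"credit_score", "claim_amount", "prior_history"}

def minimise(raw_input):
    """Keep only allowlisted fields; drop everything else (e.g. name, address)."""
    return {k: v for k, v in raw_input.items() if k in ALLOWED_FIELDS}

raw = {"credit_score": 748, "claim_amount": 14200,
       "full_name": "Jane Doe", "home_address": "1 Main St"}
# minimise(raw) keeps only credit_score and claim_amount;
# the name and address never reach the decision record
```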

SOC 2

SOC 2 Type II audits assess whether your organization's controls for security, availability, processing integrity, confidentiality, and privacy were operating effectively over a defined period (typically 6-12 months). AI systems are an emerging area of focus for SOC 2 auditors.

What auditors are increasingly asking for

  • Evidence that AI systems log their outputs and decisions
  • Evidence that anomalies are detected and responded to
  • Evidence that access to AI decision systems is controlled and logged
  • Evidence that records are tamper-evident

SealVera coverage for SOC 2

Trust criterionControlSealVera
CC7.2 — System monitoringMonitor AI systems for anomaliesBehavioral baseline + drift detection + alert rules
CC7.3 — Evaluate and respondDocument anomaly responseAlert acknowledgment log with timestamps
CC4.1 — COSO principle 16Communicate control deficienciesOn-demand compliance reports
PI1 — Processing integrityOutputs complete, valid, accurateFull decision records with integrity verification

SV-10 Self-Assessment Checklist

The SealVera SV-10 standard defines ten requirements for accountable AI agent systems. Use this checklist to assess where your organization stands.

For each item, the question to ask is: if an auditor asked about this today, could you demonstrate it?

  • AA-01 — Every AI decision produces a complete record automatically at the time it is made (not reconstructed)
  • AA-02 — Decision records include factor-level reasoning tied to actual input values — not just the outcome
  • AA-03 — Records are cryptographically signed — any modification is detectable by any third party
  • AA-04 — Deleted records are detectable — the record set is provably complete
  • AA-05 — Records are retained for the full duration required by applicable regulation (EU AI Act: 10 years)
  • AA-06 — Agent behavior is monitored against a documented baseline — you know what "normal" looks like
  • AA-07 — Anomalies are detected and alerted internally before external parties report them
  • AA-08 — Multi-agent workflows are traceable as a single decision chain — not separate isolated logs
  • AA-09 — Any past decision can be replayed from its original inputs to verify consistency
  • SV-10 — A compliance report covering any time window can be produced in minutes, not weeks


Common Questions

How do I know what I am looking at is the original record and not an edited version?

Every record is signed with a private key that only SealVera holds. The signature covers every field in the record. If any field is changed — by anyone, including SealVera staff — the signature fails verification. You can verify any record yourself using the Verify button, or independently using the public key at /api/public-key.

Can SealVera staff delete or modify our records?

Modifying a record breaks its signature, so edits are detectable. Deletion is technically possible at the database level, which is exactly why the hash chain matters: removing any record breaks the chain in a way that is immediately detectable. Regular chain integrity checks, available on demand and automatically logged, would surface any gap. On Enterprise plans with private cloud deployment, SealVera staff have no access to your data at all.

What if our AI does not return structured reasoning — can we still use this for compliance?

Yes. SealVera's auto-reasoning feature injects a minimal instruction asking the AI to return structured evidence. It does this automatically, without your engineers needing to change anything. The result is Agent-provided evidence that the AI itself produced. If you need maximum control over the exact format, your engineering team can configure the prompt explicitly — but for most compliance purposes, auto-reasoning is sufficient.

How do we handle a subject access request where someone wants to see the AI decision made about them?

Search for the decision by customer ID, applicant ID, or any identifier present in the decision input. Open the record. The evidence trail shows exactly what factors were considered and what values were observed for that individual. You can share this record — or an export of it — directly with the requestor. It constitutes a meaningful explanation under GDPR Article 22.

We have multiple AI agents across different teams. Can compliance see all of them in one place?

Yes. All agents connected to your SealVera account appear in the left sidebar. The top-level view shows decisions across all agents. You can filter by agent, by date, by decision type, or search across all records. The compliance report can cover all agents or a specific agent over any time range.

What happens if the AI makes a decision while SealVera is temporarily offline?

This is a gap. If your AI agent makes decisions while SealVera is unreachable, those decisions are not logged. For regulated environments, your engineering team should configure your agents to queue or hold decisions if SealVera is unavailable, or to fail open with a human review flag. Discuss your specific requirements with your engineering team and SealVera support.
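The "queue or hold" pattern mentioned above can be sketched simply: failed log attempts go into a local queue instead of being dropped, and the queue is flushed when the logger recovers. The `send_to_logger` callable is a hypothetical stand-in for your actual logging call, not a SealVera API.

```python
# Sketch: hold decision records locally when the logger is unreachable,
# then flush once it recovers. `send_to_logger` is a hypothetical
# stand-in for the real logging call, not a SealVera API.
from collections import deque

queue = deque()

def log_decision(record, send_to_logger):
    """Try to log; on connection failure, queue the record instead of losing it."""
    try:
        send_to_logger(record)
    except ConnectionError:
        queue.append(record)   # held locally, flushed later

def flush(send_to_logger):
    """Retry queued records once the logger is reachable again."""
    while queue:
        send_to_logger(queue.popleft())
```

Whether queuing, holding decisions entirely, or failing open with a human review flag is appropriate depends on your regulatory posture; that is the discussion to have with your engineering team and SealVera support.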

We are approaching an audit and we just connected SealVera. How far back do our records go?

SealVera begins logging from the day you connect. It cannot create records for decisions made before connection. Check your retention coverage in the dashboard (Organization tab) to see your first logged date. If you have a gap, your engineering team should prioritize running a historical data import if your AI system logged its own outputs in a format that can be ingested.