Structured evidence for EU AI Act readiness

Turn fragmented AI documentation into structured, reviewable evidence in 72 hours. Defined-scope engagements for Annex IV preparation and internal review.

From USD 3,500 one-time per system · Human-reviewed release · Defined scope · 72-hour delivery · Article 26 logging-ready outputs

No commitment required. Scope is confirmed first.

system_name: "HireVue Assessment"
risk_classification: "High-Risk (Annex III, 4(a) — Employment)"
intended_purpose: "Candidate evaluation for hiring decisions"
bias_testing: "Algorithms tested for age, gender, ethnicity"  # verified
human_oversight: "Recruiters review prioritised candidate list"
training_data: null  # flagged for review

consensus_rate: 0.90
payload_hash: "c57505252901929ba15f6dab4..."
status: "released"

EU AI Act enforcement begins August 2, 2026, with evolving timelines under current regulatory proposals. Most organisations remain unprepared for structured evidence requirements.

AnnexLayer prepares structured evidence outputs before formal audit, legal validation, or governance platform onboarding. It is not a consulting service. It is not legal advice. It is not a certification provider. It is the layer that turns fragmented documentation into usable evidence before external review begins.

The bottleneck

Most teams cannot produce reviewable Annex IV evidence today.

You need structured evidence before you need a bigger budget.

Before committing to a full governance platform, an extended advisory engagement, or an internal build programme, most teams need one thing first: a clear, bounded evidence output that shows where the gaps actually are.

That is what AnnexLayer delivers. One AI system. Defined inputs. Structured output. 72 hours.

When to use AnnexLayer

The right time for a fast evidence baseline.

Before engaging consultants or advisory firms

Before internal audit review or governance board sessions

When documentation exists but is not structured or reviewable

When timelines are compressed and evidence is needed quickly

When you need a fast evidence baseline for internal decision-making

How the workflow works

From documentation to release-ready evidence.

01
Documentation intake
You provide vendor model cards, terms of service, data processing agreements, or public transparency pages. Any format accepted.
02
Independent extraction
Two independent extraction engines process documentation simultaneously. A deterministic comparison flags every disagreement.
03
Human-reviewed release
Every flagged discrepancy is reviewed before release, with reviewer ID, timestamp, and rationale recorded to support traceability and deployer obligations.
04
Evidence delivery
Professional evidence report and structured JSON pack delivered with SHA-256 cryptographic integrity hash. Traceable, reviewable, audit-ready.

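
A minimal sketch of the comparison in step 02, assuming dict-shaped extraction outputs; the field names and the consensus metric here are illustrative, not AnnexLayer's actual implementation:

```python
def compare_extractions(engine_a: dict, engine_b: dict) -> dict:
    """Flag every field where the two extraction engines disagree.

    Illustrative sketch: disagreements are surfaced for human review,
    never silently resolved, and a consensus rate summarises agreement.
    """
    fields = sorted(set(engine_a) | set(engine_b))
    flagged = [f for f in fields if engine_a.get(f) != engine_b.get(f)]
    agreed = len(fields) - len(flagged)
    return {
        "flagged_fields": flagged,  # sent to human review (step 03)
        "consensus_rate": round(agreed / len(fields), 2),
    }

a = {"system_name": "HireVue Assessment", "training_data": None}
b = {"system_name": "HireVue Assessment", "training_data": "not disclosed"}
print(compare_extractions(a, b))
# flags "training_data"; consensus_rate 0.5
```

Because the comparison is deterministic, the same pair of extraction outputs always produces the same flag list, which is what makes the review step traceable.
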
What you receive

Every engagement delivers four components.

Evidence Report

Professional human-readable PDF with executive summary, field-by-field findings, gap identification, and remediation context.

JSON Evidence Pack

Machine-readable structured evidence object with all extracted fields, suitable for governance platforms, internal systems, and audit ingestion.

Audit Verification

SHA-256 cryptographic hash of the complete evidence payload. Timestamped records support EU AI Act Article 26 logging and retention workflows.
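
The integrity check can be sketched as follows; the canonicalisation rules (sorted keys, compact separators) are an assumption for illustration, not AnnexLayer's actual scheme:

```python
import hashlib
import json

def payload_hash(evidence: dict) -> str:
    # Canonical serialisation so the same evidence always yields the
    # same digest, regardless of key order at serialisation time.
    canonical = json.dumps(evidence, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

evidence = {"system_name": "HireVue Assessment", "status": "released"}
digest = payload_hash(evidence)

# Verification at any later point: recompute and compare.
assert payload_hash(evidence) == digest
```

Any change to the payload, however small, produces a different digest, so the recorded hash lets a reviewer confirm the evidence pack has not been altered since release.
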

Human Validation

Complete corrections log with reviewer ID, UTC timestamp, and decision rationale for every reviewed field.
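
A corrections-log entry of that shape might look like the following sketch; every field name here is an illustrative assumption, not AnnexLayer's schema:

```python
from datetime import datetime, timezone

# Hypothetical corrections-log entry: one record per reviewed field,
# with reviewer ID, UTC timestamp, and decision rationale.
correction = {
    "field": "training_data",
    "previous_value": None,
    "reviewed_value": "not disclosed by vendor",
    "reviewer_id": "reviewer-01",  # hypothetical identifier
    "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    "rationale": "Vendor documentation does not state training data sources.",
}
print(correction["timestamp_utc"])
```
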

Evidence engagement stages

Defined scope. Clear deliverable. No hidden exposure.

Every engagement covers one AI system with defined documentation inputs, structured evidence output, and human-reviewed release. Scope is confirmed before work begins. Larger environments move into extended scope or portfolio pricing.

Pre-Evidence Engagement

Defined-scope 72-hour evidence workflow for one AI system. Identifies Annex IV documentation gaps and produces structured evidence outputs. The bounded starting point before deeper work.

From USD 3,500 one-time per system
  • One AI system, defined inputs
  • Annex IV gap identification
  • Risk classification mapping
  • Evidence report (PDF) + JSON pack
  • Human-reviewed release
  • 72-hour delivery window
Request pre-evidence engagement

Follow-on systems move into Full Evidence Pack pricing.

Full Evidence Pack

Complete Annex IV structured evidence documentation with audit trail and gap analysis. Deeper extraction, broader field coverage, and remediation priorities.

USD 5,500 one-time per system
  • Complete Annex IV documentation
  • Bias testing and validation evidence
  • Human oversight documentation
  • Training data source mapping
  • Sub-processor inventory
  • Remediation priorities
  • Evidence report (PDF) + JSON pack
  • SHA-256 audit trail
Request full evidence pack

Conformity Pre-Check

Gap analysis against EU AI Act requirements with prioritised remediation insights. The “are we ready before external review?” engagement.

USD 7,500 one-time per engagement
  • Conformity gap analysis
  • Prioritised remediation insights
  • Risk exposure mapping
  • Board-ready summary
Request conformity pre-check

Evidence Maintenance

Ongoing monitoring, delta tracking, and continuous evidence updates. Monthly re-scan of your AI vendor ecosystem with change reporting.

USD 5,000 per month
  • Monthly vendor ecosystem scan
  • Delta tracking and change reports
  • New risk identification
  • Continuous evidence updates
Start evidence maintenance

Managing multiple AI systems or regulated environments?

Extended scope and portfolio pricing available for multi-system evidence orchestration, procurement framework workflows, and partner delivery programmes. Request portfolio pricing →

What happens after your report

Your evidence, your next step.

Internal governance review

Use the evidence outputs to brief your governance team, board, or risk committee on current AI system documentation status.

Share with advisors

Provide the structured evidence pack to legal counsel, external auditors, or advisory firms as a prepared input for their review.

Prepare for audit

Use the gap analysis and remediation context to prepare for internal audit, readiness review, or regulatory engagement.

Decide next steps

Determine whether to remediate internally, expand into additional systems, or engage further support based on clear evidence.

Scope and limitations

Clear boundaries. No ambiguity.

What is included

  • Structured evidence extraction for one defined AI system
  • Annex IV-aligned field mapping
  • Evidence classification: documented, partial, or missing
  • Gap identification and remediation guidance
  • Audit-ready PDF report and structured JSON evidence pack
  • Human-reviewed release with corrections log

What is not included

  • Legal advice or regulatory interpretation
  • Notified body assessments
  • Full conformity assessments
  • On-site audits or implementation work
  • Unlimited systems without defined scope

What we need from you

  • System documentation and relevant materials
  • System context and intended use case
  • Existing policies or logs where applicable

Delivery and terms

  • Standard delivery within 72 hours after confirmed input completeness
  • Delivery includes PDF report and structured JSON evidence pack
  • Payments are non-refundable once processing has started
  • AnnexLayer does not determine regulatory compliance — it provides structured evidence outputs to support review and audit preparation

Built for review and release

Every output is reviewable, traceable, and structured for governance workflows.

Independent Cross-Checking

Every data point is independently extracted by two separate engines. Disagreements are flagged for review — never silently resolved. Full transparency on evidence agreement and divergence.

Human-Reviewed Release

No output is released without human review. Every correction is recorded with reviewer ID, UTC timestamp, and decision rationale. Complete corrections log included with every deliverable.

Auditability & Retention

All evidence outputs include timestamped audit logs, reviewer actions, and structured traceability. Records are retained to support EU AI Act Article 26 deployer logging obligations and audit workflows.

Evidence Retention & Logging

Every engagement produces a complete audit trail including extraction records, human review actions, and final evidence states. Outputs are structured to support minimum retention expectations under EU AI Act deployer obligations.

No client data is used for AI model training. All processing is configured with explicit training data opt-outs and controlled data handling.

Frequently asked questions

What teams ask before engaging.

What is included in an engagement?

One AI system, defined documentation inputs, structured evidence output (PDF report + JSON pack), and human-reviewed release within a 72-hour delivery window.

What counts as one AI system?

One distinct AI application, model, or workflow with a defined intended purpose. If a vendor bundles multiple AI capabilities under one product name, scope is assessed and confirmed before work begins.

What if our documentation is larger or more complex?

Environments with multiple systems or extensive documentation move into extended scope or portfolio pricing, discussed and agreed before engagement starts.

When does the 72-hour window start?

The delivery window begins when documentation inputs are received and confirmed as complete.

Is this legal advice or certification?

No. Outputs are structured evidence drafts designed to support internal review and external advisory processes. They do not constitute legal advice, certification, or formal regulatory determination.

What happens after the first engagement?

Most teams move into Full Evidence Packs for additional systems or Evidence Maintenance for ongoing monitoring and delta tracking.

Is client data used for AI model training?

No. All processing is configured with explicit training data opt-outs. Client documentation is never used for model training and never shared with third parties.

What is the EU AI Act enforcement timeline?

EU AI Act enforcement begins August 2, 2026, with evolving timelines under current regulatory proposals. A realistic compliance runway is 32–56 weeks for most mid-market organisations.

Do you provide audit logs and evidence retention?

Yes. Every engagement includes timestamped audit logs, reviewer decisions, and structured evidence outputs designed to support EU AI Act Article 26 deployer obligations.

Important disclaimer: AnnexLayer provides structured evidence drafts designed to support internal review and external advisory processes. Our deliverables do not constitute legal advice, certified assessments, formal conformity determinations, or regulatory certification. All outputs are clearly labelled as structured evidence requiring client validation and appropriate professional review.