Architecture

EU data intelligence infrastructure. Layered security architecture, automated task discovery, and multi-reviewer quality gates. PQ Ready · EU Only

Overview

Pauhu® infrastructure uses a self-organizing architecture where each processing unit operates independently within its security zone, communicates via structured messages, and adapts to changing workload through automated task discovery and health monitoring.

Key principles:

  - Independent operation: each processing unit runs within its own security zone
  - Structured communication: units exchange only structured messages
  - Self-organization: automated task discovery and health monitoring adapt the system to changing workload

Security architecture

Pauhu implements industrial-grade security with four distinct zones and controlled data paths between them.

  +-----------+    +------------+    +----------+
  | Protected | -> | Controlled | -> | External |
  |  Zone     |    |   Zone     |    |   Zone   |
  +-----------+    +------------+    +----------+
                         |
                   ==============
                   || Data Path ||
                   ==============
                         |
  +-----------+    +----------+
  | Business  | <- |  Audit   |
  |  Zone     |    |   Zone   |
  +-----------+    +----------+

Each zone is assigned a security level appropriate to its function, ranging from protection against casual violation (external-facing) to protection against state-sponsored attack (protected zone). Data paths between zones are controlled and audited.

Request orchestration

A central orchestrator routes each request through zone-specific security checks. All validation passes before any ML model runs (Model Last principle).

                 +-- Protected ------+-- Architecture
                 |   (Constraints)    +-- ML Pipeline
                 |                    +-- Data Engineering
                 |                    +-- Security Audit
                 |                    +-- Legal Review
                 |
Orchestrator ----+-- Controlled -----+-- Development
                 |   (Data Flow)      +-- Operations
                 |                    +-- Infrastructure
                 |
                 +-- External -------+-- Frontend
                      (Actions)      +-- Documentation
                                     +-- Internationalization

  Model Last: All security checks pass FIRST → then AI inference
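
The Model Last principle can be sketched as a gate in front of inference. This is a minimal illustration, not Pauhu's actual API; `ZoneCheck`, `run_request`, and the example checks are assumed names.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ZoneCheck:
    zone: str                      # "protected", "controlled", or "external"
    check: Callable[[dict], bool]  # returns True if the request passes

def run_request(request, checks, infer):
    # Validation first: a single failing check rejects the request
    # before the model is ever touched.
    for c in checks:
        if not c.check(request):
            raise PermissionError(f"{c.zone} zone check failed")
    # Model last: inference runs only once every check has passed.
    return infer(request)

checks = [
    ZoneCheck("protected", lambda r: "pii" not in r),
    ZoneCheck("external", lambda r: r.get("action") in {"extract", "search"}),
]
result = run_request({"action": "search"}, checks, infer=lambda r: "model output")
```

The point of the structure is that `infer` is unreachable from any code path where a check has failed.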

Validation classification

Each validation step enforces a specific level of strictness, determining how it controls data flow:

  Level         Behaviour                       Example
  Block         Reject if violated (MUST NOT)   Reject requests containing PII in search queries
  Require       Require completion (MUST)       Enforce data license terms before export
  Allow         Approve if proposed (MAY)       Allow optional semantic ranking add-on
  Pass-through  No action needed (EXEMPT)       Pass-through for static documentation
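
The four strictness levels map directly onto control flow. A minimal sketch, assuming a `may_proceed` helper that is not part of Pauhu's real API:

```python
from enum import Enum

class Level(Enum):
    BLOCK = "MUST NOT"       # reject if violated
    REQUIRE = "MUST"         # require completion
    ALLOW = "MAY"            # approve if proposed
    PASS_THROUGH = "EXEMPT"  # no action needed

def may_proceed(level, violated=False, completed=True):
    """True if the request may continue past this validation step."""
    if level is Level.BLOCK:
        return not violated      # e.g. PII found in a search query -> reject
    if level is Level.REQUIRE:
        return completed         # e.g. license terms enforced before export
    return True                  # ALLOW and PASS_THROUGH never stop the flow
```

Only the first two levels can halt a request; the latter two merely record a decision.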

Document extraction on EU infrastructure

Document extraction runs on a dedicated server in Helsinki (EU jurisdiction). It provides server-side document extraction, PDF rendering, and accessibility tree snapshotting.

  +-----------------------+        +----------------------+
  | Edge Services         |        | Helsinki Server      |
  | (EU edge)             |        | (EU datacenter)      |
  |                       |        |                      |
  |  API Router           |        |  Document Extraction |
  |    |                  |        |    |                 |
  |    +-- /extract ------+------->|    +-- Chrome CDP    |
  |    +-- /pdf-render ---+------->|    +-- Tab lifecycle |
  |    |                  |        |    +-- Text extract  |
  |  Vision Service       |        |    +-- PDF render    |
  |    +-- annotate ------+--+     |                      |
  |    +-- terminology ---+--+     +----------------------+
  |    |                  |  |
  |  Annotation Service   |  |     +-----------------------+
  |    +-- sidecar -------+--+---->| Object Storage (EU)   |
  |                       |        | {product}/            |
  |  Index Service        |        |   {hash}.json         |
  |    +-- DB + semantic  |        +-----------------------+
  |        index          |
  +-----------------------+

Data flow

  1. Client sends URL to /extract or /pdf-render via the API router
  2. The vision service opens a Chrome tab on Document extraction (Helsinki) via authenticated bridge token
  3. Chrome navigates to the URL, extracts text (or renders PDF)
  4. Tab is closed in background - stateless, no data retained
  5. If annotate: true, text is sent to the annotation service for topic classification
  6. If terminology: true, IATE terms are extracted via the terminology service
  7. For /extract-and-index, annotation sidecar JSON is written to object storage for indexing
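
The server-side portion of the steps above (2-7) can be sketched as one handler. Every function name here is a stand-in for the real service; the stubs exist only to make the control flow concrete.

```python
def open_tab(url):
    return {"url": url}            # stand-in for the authenticated Chrome CDP bridge

def extract_text(tab):
    return "text of " + tab["url"]

def classify_topics(text):
    return ["topic-03"]            # stand-in for the annotation service

def extract_terms(text):
    return []                      # stand-in for the IATE terminology service

def sidecar_key(url):
    return url + ".json"

def handle_extract(url, annotate=False, terminology=False, index=False, storage=None):
    tab = open_tab(url)                      # step 2: open a tab in Helsinki
    try:
        text = extract_text(tab)             # step 3: navigate and extract
    finally:
        tab = None                           # step 4: tab closed, nothing retained
    result = {"url": url, "text": text}
    if annotate:                             # step 5: topic classification
        result["topics"] = classify_topics(text)
    if terminology:                          # step 6: IATE term extraction
        result["iate_terms"] = extract_terms(text)
    if index and storage is not None:        # step 7: sidecar JSON for indexing
        storage[sidecar_key(url)] = result
    return result
```

Statelessness is modeled by the `finally` block: the tab handle never outlives the request.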

Automated task discovery

The health system automatically discovers work from four signal sources and creates tasks in the registry. Each task is routed to the appropriate team based on its zone and type.

  Source           Signal                                  Routing
  Error collector  >10 of the same error in 24h            By phase: model → ML team, auth → security, ui → frontend, api → development
  Git log          FIXME / TODO / HACK in recent commits   Development team
  CI failures      Same workflow fails 3+ times in 7d      Operations team
  Stale PRs        Open >7 days, no updates                Original author

Failed tasks are automatically retried up to 3 times. After 3 failures, the task is marked as blocked and requires human intervention.
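
The phase-based routing and the 3-retry/blocked policy can be sketched as follows; the routing table mirrors the error-collector row above, while the function names are assumptions rather than the real registry API.

```python
PHASE_ROUTING = {"model": "ml", "auth": "security", "ui": "frontend", "api": "development"}

def route_error_task(phase):
    # Unknown phases default to the development team (an assumption).
    return PHASE_ROUTING.get(phase, "development")

def run_with_retries(task, max_attempts=3):
    """Run a task up to 3 times; mark it blocked after the final failure."""
    for _ in range(max_attempts):
        try:
            return task()
        except Exception:
            continue
    return "blocked"   # requires human intervention
```
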

Multi-reviewer quality

Every pull request is reviewed by three independent reviewers. A consensus job runs after all three reviewers complete: if 2 of the 3 flag critical issues, the PR is blocked until the issues are resolved.
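
The 2-of-3 rule is a simple majority vote over critical flags. A minimal sketch (the function name is illustrative):

```python
def consensus(critical_flags):
    """critical_flags: one bool per reviewer, True = critical issues found."""
    assert len(critical_flags) == 3, "exactly three independent reviewers"
    return "blocked" if sum(critical_flags) >= 2 else "approved"
```

A single dissenting reviewer cannot block a PR on their own, but any two can.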

Health monitoring

A periodic health check runs across the entire infrastructure. Results are written to a JSON report and committed back to the repository automatically.
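
A sketch of what such a JSON report might look like; the document only says results are written to JSON, so the field names here are assumptions.

```python
import json
from datetime import datetime, timezone

def health_report(checks):
    """Serialize per-component check results to a JSON report string."""
    report = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "healthy": all(checks.values()),   # overall status: every check passed
        "checks": checks,                  # component name -> pass/fail
    }
    return json.dumps(report, indent=2)
```
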

Data pipeline

Every EU document flows through a 6-stage pipeline from source to searchable index. This is the same pipeline for all 20 data sources - only the seed script and product code differ.

  Seed Script         Object Storage       Queue              Annotation Service
  (per source)        (per product)        (per product)      (topic classification)
       |                   |                    |                    |
  Fetch from EU    Upload with         Storage event         Classify:
  institution      metadata            notification          - language detection
  (SPARQL, REST,   (celex_id,          triggers              - topic domain (1-21)
   SDMX, OAI-PMH)  product, lang)      consumer              - word/char count
       |                   |                    |                    |
       v                   v                    v                    v

  Annotation Sidecar  Index Service        Database            Semantic Index
  (.json)            (hybrid search)      (per product)       (multilingual embeddings)
       |                   |                    |                    |
  Annotation         Index into           Structured           Semantic search
  stored next to     database +           metadata for         across 20 indexes
  source doc         semantic index       SQL queries          via relevance scoring
  in storage         (70% semantic,
                     30% BM25)                                                

Stage 1: Seed

Source-specific seed scripts fetch documents from EU institutions via their official APIs (SPARQL for EUR-Lex/CELLAR, REST for TED/ECHA/EMA, SDMX for ECB/Eurostat, OAI-PMH for CORDIS). Each document is uploaded to its product-specific storage bucket with metadata containing the CELEX ID, product code, language, and source URL.
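
The upload step can be sketched as below. The metadata fields come from the paragraph above; the key layout and the dict-backed storage stand-in are assumptions, not the real object-store client.

```python
def upload_document(storage, product, body, celex_id, lang, source_url):
    """Upload one seeded document to its product bucket with metadata."""
    key = "{}/{}.{}".format(product, celex_id, lang)
    storage[key] = {
        "body": body,
        "metadata": {
            "celex_id": celex_id,
            "product": product,
            "lang": lang,
            "source_url": source_url,
        },
    }
    return key
```
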

Stage 2: Queue

Storage event notifications trigger a queue message for each new or updated document. The 20 products are split across two queue consumers for load balancing (products A–E and products E–W).
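
The split can be sketched as an assignment by leading letter of the product code. Note that the ranges in the text meet at E; this sketch assigns E to the second consumer, which is an assumption.

```python
def consumer_for(product):
    """Assign a product to one of the two queue consumers by first letter."""
    # Products starting A-D go to consumer 0; E-W go to consumer 1.
    return 0 if product[0].upper() < "E" else 1
```
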

Stage 3: Annotate

The annotation worker classifies each document: it detects the language, assigns a topic domain (1-21), and records word and character counts.

Stage 4: Annotation sidecar

Annotations are stored as sidecar JSON files next to the source document in object storage. The stand-off annotation format keeps annotations separate from source text, enabling non-destructive updates and full provenance tracking.
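
The stand-off layout can be sketched as a sibling object keyed from the document path. The `.annotations.json` suffix and dict-backed storage are illustrative assumptions; the point is that the source object is never rewritten.

```python
import json

def sidecar_path(doc_path):
    """Derive the sidecar key from the source document key."""
    return doc_path + ".annotations.json"

def write_sidecar(storage, doc_path, annotations):
    path = sidecar_path(doc_path)
    storage[path] = json.dumps(annotations)   # source document untouched
    return path
```

Because updates only ever touch the sidecar, re-annotating a document is non-destructive and every annotation version can carry its own provenance.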

Stage 5: Index

The indexing service reads annotated documents and indexes them into the per-product database (structured metadata for SQL queries) and the multilingual semantic index (embeddings for semantic search), combined at query time as 70% semantic and 30% BM25.

Stage 6: Search

The search engine fans out queries across all 20 semantic indexes simultaneously, combining semantic similarity with keyword matching. Results include DSA Article 27 ranking transparency metadata.
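
The fan-out and scoring can be sketched as follows. The 70/30 weighting comes from the pipeline diagram; the merge-by-score step is an assumption about how results are combined.

```python
def hybrid_score(semantic, bm25):
    """Combine semantic similarity and BM25 keyword score (70/30 weighting)."""
    return 0.7 * semantic + 0.3 * bm25

def fan_out(per_index_hits, k=10):
    """per_index_hits: index name -> [(doc_id, semantic, bm25), ...]."""
    merged = [
        (doc_id, hybrid_score(sem, kw))
        for hits in per_index_hits.values()   # query all 20 indexes
        for doc_id, sem, kw in hits
    ]
    # Merge by relevance score across indexes, keep the top k.
    return sorted(merged, key=lambda h: h[1], reverse=True)[:k]
```
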

Dead letter queue

Documents that fail annotation after 3 retries are routed to a dead letter queue for manual inspection. The /backfill admin endpoint can re-index documents after the issue is resolved.
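
The retry-then-dead-letter policy can be sketched with plain lists standing in for the real queues; the function name is illustrative.

```python
def annotate_with_dlq(doc, annotate, dead_letter, max_retries=3):
    """Try annotation up to 3 times; route persistent failures to the DLQ."""
    last_error = None
    for _ in range(max_retries):
        try:
            return annotate(doc)
        except Exception as exc:
            last_error = exc
    # After 3 failures, park the document for manual inspection.
    dead_letter.append({"doc": doc, "error": str(last_error)})
    return None
```

Once the underlying issue is fixed, parked documents would be replayed via the /backfill admin endpoint.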

ML pipeline

Pauhu uses programmatic ML pipelines for all prompt optimization and orchestration. Rather than relying on traditional prompt engineering, each processing step is defined by a typed input/output contract, and composable processing units are assembled into larger orchestration flows.
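
The idea of typed contracts and composition can be illustrated generically; `Step` and `then` are illustrative names, not Pauhu's actual classes.

```python
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

@dataclass
class Step(Generic[A, B]):
    """One processing unit with a typed input/output contract A -> B."""
    run: Callable[[A], B]

    def then(self, nxt: "Step[B, C]") -> "Step[A, C]":
        # Composition is only well-typed when the output contract of
        # this step matches the input contract of the next.
        return Step(lambda a: nxt.run(self.run(a)))

detect_lang = Step(lambda text: {"text": text, "lang": "en"})
classify = Step(lambda d: {**d, "domain": 3})
pipeline = detect_lang.then(classify)
```

The type parameters make invalid pipelines a static error in a checked codebase, which is the practical advantage over free-form prompt chaining.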

The optimization engine handles quality tuning across all services, ensuring consistent output quality and EU AI Act transparency compliance.