Contents
  1. Risk Classification
  2. Why Reva Is Not High-Risk
  3. Transparency Obligations
  4. General-Purpose AI Models
  5. Provider & Deployer Obligations
  6. GDPR & BetrVG Alignment
  7. Compliance Timeline
  8. Technical Guardrails

Risk Classification

Reva is an AI system under Article 3(1) of the EU AI Act — it uses LLM inference to interpret natural language queries and generate responses. The central question is which risk category applies.

Classification: Limited / Minimal Risk — not a high-risk AI system under Annex III.

| Risk Category | Applies? | Rationale |
| --- | --- | --- |
| Unacceptable Risk (Art. 5) | No | No emotion recognition, social scoring, subliminal manipulation, or biometric identification |
| High-Risk (Art. 6 + Annex III) | No | Not intended for worker performance monitoring; Article 6(3) exception applies (see below) |
| Limited Risk (Art. 50) | Yes | AI system interacting with users — transparency obligations apply |
| Minimal Risk | Yes | Core functionality (data retrieval and display) carries minimal risk |

Why Reva Is Not High-Risk

The only relevant Annex III category is 4(b) — Employment, workers management, which covers AI systems intended to “monitor and evaluate the performance and behaviour of persons” in work relationships. Here is why Reva does not fall under this category:

Intended Purpose: Release Management, Not Worker Management

Reva’s intended purpose is release management data retrieval and display. It queries Digital.ai Release and Jira APIs, presents structured results, and helps teams manage software releases. It does not make employment decisions, evaluate individual performance, or allocate tasks based on personal traits.

Privacy by Design Prohibits Individual Monitoring

Reva’s system prompt rules explicitly prohibit individual performance monitoring. Activity logs are anonymized — summaries attribute actions to “a team member” rather than named individuals. These guardrails are enforced at the system level, not as optional configuration.
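As an illustration of the anonymization rule described above, the guardrail could be sketched as follows. This is an assumption-laden sketch, not Reva's actual implementation: the function name, the known-names set, and the regex approach are all hypothetical.

```python
import re

# Hypothetical sketch of a log-anonymization guardrail: replace named
# individuals in activity summaries with the neutral role "a team member".
# The name directory below is an illustrative assumption.
TEAM_MEMBERS = {"alice", "bob"}

def anonymize_summary(summary: str) -> str:
    """Strip personal attribution from an activity summary."""
    for name in TEAM_MEMBERS:
        # Whole-word, case-insensitive replacement of each known name
        summary = re.sub(rf"\b{re.escape(name)}\b", "a team member",
                         summary, flags=re.IGNORECASE)
    return summary
```

In a production system these rules would sit in the serving layer (and the system prompt), so that anonymization cannot be disabled by per-tenant configuration.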

Article 6(3) Exception Applies

Even under a conservative reading, Reva qualifies for the Article 6(3) exception because it performs:

- a narrow procedural task (Art. 6(3)(a)): retrieving and displaying release and ticket data on request; and
- a preparatory task to an assessment (Art. 6(3)(d)): surfacing information that human release managers then evaluate and act on.

Critically, Reva does not perform profiling of natural persons — the condition that would override the Article 6(3) exception. It retrieves factual data (task assignments, release status) without evaluating personal aspects or predicting behaviour.

Other Annex III Categories

| Annex III Category | Applies? |
| --- | --- |
| 1. Biometrics | No — no biometric or emotion recognition |
| 2. Critical infrastructure | No — software release management is not critical infrastructure |
| 3. Education | No |
| 5. Essential public services | No |
| 6. Law enforcement | No |
| 7. Migration / border control | No |
| 8. Justice / democracy | No |

Transparency Obligations

Article 50 of the AI Act imposes transparency obligations on AI systems intended to interact directly with natural persons.

Article 50(1) — AI Disclosure

Users must be informed they are interacting with an AI system. Reva operates as a named bot in Microsoft Teams — its AI nature is contextually apparent. Nevertheless, the bot’s welcome message and help text clearly identify Reva as an AI-powered assistant.

Article 50(2) — AI-Generated Content

Providers of systems generating synthetic content must mark it as AI-generated. Reva generates text responses and Adaptive Cards based on data from Release and Jira. These responses are informational — not deepfakes or synthetic media. Responses are attributed to “Reva” as a bot, providing clear provenance.

Reva’s transparency measures: AI disclosure in bot welcome message, all responses attributed to the Reva bot identity, AI-generated content clearly distinguishable from source system data.
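The disclosure measure above can be made concrete with a minimal sketch of a bot welcome payload that states the AI nature up front. The structure and wording here are illustrative assumptions, not Reva's actual Teams integration code.

```python
# Illustrative sketch of an Article 50(1)-style disclosure in a bot
# welcome message; field names and text are assumptions.
def build_welcome_message(bot_name: str = "Reva") -> dict:
    """Return a minimal welcome payload disclosing the AI nature of the bot."""
    return {
        # Responses are attributed to the bot identity, giving clear provenance
        "from": {"name": bot_name, "role": "bot"},
        "text": (
            f"Hi, I'm {bot_name}, an AI-powered assistant for release management. "
            "My answers are AI-generated from Digital.ai Release and Jira data."
        ),
    }
```

The same disclosure text would be repeated in the bot's help output, so a user who joins mid-conversation can still discover the AI nature of the system.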

General-Purpose AI Models

The GPAI regulation (Chapter V, Articles 51–53) targets providers of general-purpose AI models — the organizations that develop and distribute the models themselves.

X-idra Systems GmbH is not a GPAI model provider. X-idra integrates existing open-source models (Qwen, Llama, Gemma) into Reva. The GPAI obligations fall on the upstream model providers (Alibaba, Meta, Google).

The open-source model exemption (Article 53(2)) benefits the upstream model providers, not downstream integrators. There is no blanket open-source exemption for AI systems built on open-source models — Reva must comply with its own AI system obligations regardless.

Provider & Deployer Obligations

X-idra Systems GmbH = Provider (Article 3(3))

X-idra develops Reva and places it on the market under its own brand. As the provider, X-idra is responsible for:

| Obligation | Article | Status |
| --- | --- | --- |
| AI literacy for staff and users | Art. 4 | In force since Feb 2025 |
| Transparency — AI disclosure | Art. 50(1) | By Aug 2026 |
| AI-generated content marking | Art. 50(2) | By Aug 2026 |
| Risk classification documentation | Art. 6(3) | By Aug 2026 |
| Intended purpose & usage restrictions | | By Aug 2026 |

Since Reva is not high-risk, the heavy obligations of Article 16 (conformity assessment, CE marking, EU database registration, post-market monitoring) do not apply.

Customer = Deployer (Article 3(4))

Organizations deploying Reva on their infrastructure are deployers. For a non-high-risk system, their obligations are limited to:

- Ensuring adequate AI literacy among staff operating Reva (Art. 4)
- Using Reva in accordance with its intended purpose and usage restrictions

On-premise deployment does not create exemptions. The AI Act applies regardless of deployment model. On-premise affects the deployer’s control, not the regulatory obligations.

GDPR & BetrVG Alignment

Reva’s privacy-by-design architecture supports compliance with both the AI Act and existing data protection regulations.

GDPR (EU 2016/679)

Reva’s anonymized activity summaries and narrow, query-driven data retrieval support data minimisation (Art. 5(1)(c)), and its system-level guardrails reflect data protection by design and by default (Art. 25).

BetrVG (German Works Constitution Act)

Under § 87(1)(6) BetrVG, works councils hold a codetermination right over technical systems capable of monitoring employee behaviour or performance. Because Reva prohibits individual performance monitoring by design, deployers can present it to works councils as a non-monitoring tool.

Reva’s privacy guardrails are a key compliance asset. They strengthen the Article 6(3) exception argument, support GDPR compliance, and facilitate works council approval.

Compliance Timeline

Feb 2025: Prohibited practices & AI literacy
Verified: no prohibited practices (Art. 5). AI literacy measures in place (Art. 4). Already in force.

Aug 2025: GPAI model obligations
No direct impact: X-idra is not a GPAI model provider. Upstream providers (Alibaba, Meta) are responsible for model-level compliance.

Aug 2026: Full application — transparency & risk classification
Implement transparency measures (Art. 50). Document the risk classification rationale. Publish the intended purpose statement and usage restrictions.

Aug 2027: High-risk obligations for regulated products
Not applicable to Reva: this deadline covers AI embedded in regulated products such as medical devices and machinery (Art. 6(1) + Annex I).

Technical Guardrails

To maintain Reva’s non-high-risk classification and prevent feature drift into regulated territory, the following guardrails are enforced:

Enforced at the System Level

- System prompt rules prohibit individual performance monitoring and evaluation
- Activity summaries attribute actions to “a team member”, never to named individuals
- These guardrails are system-level defaults, not optional configuration

Feature Review Process

Before adding new capabilities, each feature is assessed for potential impact on risk classification:

- Would the feature monitor or evaluate individual performance or behaviour (Annex III 4(b))?
- Would it involve profiling of natural persons, which overrides the Article 6(3) exception?
- Would it make or influence employment-related decisions?
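Such a screening step could be encoded so that every proposal is checked against the same criteria discussed earlier (Annex III 4(b) monitoring, profiling, employment decisions). This is a hypothetical sketch; the field names and the process itself are assumptions, not an actual X-idra artifact.

```python
from dataclasses import dataclass

@dataclass
class FeatureProposal:
    name: str
    monitors_individual_performance: bool  # Annex III 4(b) trigger
    profiles_natural_persons: bool         # would override the Art. 6(3) exception
    influences_employment_decisions: bool  # employment-decision territory

def risk_flags(feature: FeatureProposal) -> list[str]:
    """Return the classification concerns a feature review must resolve."""
    flags = []
    if feature.monitors_individual_performance:
        flags.append("would monitor/evaluate individuals (Annex III 4(b))")
    if feature.profiles_natural_persons:
        flags.append("profiling of natural persons voids the Art. 6(3) exception")
    if feature.influences_employment_decisions:
        flags.append("influences employment decisions")
    return flags
```

A non-empty result would block the feature pending legal review, keeping the product from drifting into high-risk territory release by release.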

Deployer Agreements

Customer agreements include clauses that:

- Restrict use of Reva to its intended purpose of release management
- Prohibit repurposing Reva for monitoring or evaluating individual worker performance
- Require deployers to preserve the built-in privacy guardrails
