Contents
  1. Prerequisites
  2. Azure Bot Registration
  3. Download & Authenticate
  4. Generate Secrets
  5. Configure Environment
  6. Deploy
  7. Teams App Sideloading
  8. Verify Installation
  9. Optional Integrations
  10. Troubleshooting

1 Prerequisites

Hardware Requirements

Component | Minimum (Dev / Eval) | Recommended (Production)
CPU | 4 cores | 8+ cores
RAM | 16 GB | 32 GB
Storage | 50 GB SSD | 100 GB SSD
GPU (for local LLM) | NVIDIA GPU, 16 GB VRAM | NVIDIA GPU, 24+ GB VRAM

The GPU is required for running the local LLM (Ollama with qwen3:14b). Alternatively, you can use a remote Ollama instance or a cloud LLM provider (OpenAI, Anthropic).

Production Sizing Guide

LLM inference is the primary bottleneck. A single request takes ~20 seconds (4 agent steps, 1 tool call). Ollama serializes GPU requests, so concurrent users experience linear latency increase.

Team Size | LLM Backend | GPU | Notes
1–3 users | Ollama | 16 GB VRAM (e.g. RTX 5070 Ti) | Sufficient for evaluation and small teams. Sequential processing.
5–20 users | vLLM | 24+ GB VRAM (e.g. RTX 5090) | Batched inference enables concurrent requests. Significant throughput improvement over Ollama.
20+ users | vLLM or cloud LLM | 48+ GB VRAM or multi-GPU | Consider cloud LLM providers (OpenAI, Anthropic) for large teams to avoid GPU infrastructure complexity.

Why vLLM for production? Ollama processes requests sequentially — with 5 concurrent users, response times exceed 2 minutes and requests start timing out. vLLM uses continuous batching and PagedAttention to serve multiple requests simultaneously, but requires at least 24 GB VRAM for the KV cache headroom needed by qwen3:14b.
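A back-of-the-envelope model of the sizing figures above, assuming ~20 s per request as stated in the sizing guide. This is an illustrative sketch, not a benchmark:

```python
# Worst-case wait for the last user when the GPU serializes requests
# (Ollama's behavior). Assumes ~20 s per request, per the sizing guide.

PER_REQUEST_S = 20.0

def sequential_wait(n_users: int, per_request_s: float = PER_REQUEST_S) -> float:
    """Time until the last queued user gets a response."""
    return n_users * per_request_s

print(sequential_wait(1))  # 20.0
print(sequential_wait(5))  # 100.0 -> well past typical bot timeouts
```

With batched inference (vLLM), concurrent requests share GPU time instead of queueing, which is why the 5-user case stops timing out.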

To enable vLLM, set these environment variables:

REVA_VLLM_URL=http://gpu-server:8100/v1
REVA_VLLM_API_KEY=any-string

The Ollama router model (llama3.2:3b) must remain on a separate Ollama instance to avoid GPU contention. See src/reva/vllm_client.py for details.

Recommended Models

Reva requires models with strong tool-calling (function calling) capabilities and multilingual support. Not all models perform equally well — the following recommendations are based on extensive testing with Reva's agent loop:

Model | Size | Rating | Notes
qwen3:14b | 14B | Best choice | Excellent tool calling, reliable output format, strong multilingual (DE/EN/FR/ES/NL). Default agent model.
qwen3:32b | 32B | Excellent | Even better quality, but requires 24+ GB VRAM and is slower. Good for vLLM deployments.
gemma3:27b | 27B | Good | Good tool calling, but occasional formatting inconsistencies in structured output.
llama3.3:70b | 70B | Good | Strong general performance, but requires significant GPU resources. Better for cloud/multi-GPU setups.
mistral-small3.2 | 24B | Moderate | Usable, but less reliable tool calling than Qwen3.
qwen2.5-coder:32b | 32B | Moderate | Good quality, but too slow for interactive use on a single GPU.

Cloud LLM models: When using OpenAI or Anthropic, models like gpt-4o or claude-sonnet-4-5-20250514 deliver excellent results. Cloud models eliminate GPU requirements entirely and scale to any team size.

The router model (llama3.2:3b) classifies incoming messages and does not require tool-calling capabilities. It runs on Ollama regardless of the agent model backend.

Software Requirements

 | Development | Test / Acceptance / Production
Runtime | Docker Engine 24+ & Docker Compose v2 | k3s (lightweight Kubernetes)
Build | Docker (builds locally) | Docker (for image build) + kubectl
LLM | Ollama (local or remote) — or a cloud LLM API key (both environments)
Registry | Access to ghcr.io/x-idra-systems-gmbh/reva (credentials provided by X-idra; both environments)

Network Requirements

2 Azure Bot Registration

Reva communicates with Microsoft Teams through the Azure Bot Framework. You need to register a bot in your Azure tenant.

2.1 Create an App Registration

  1. Go to Azure Portal → Microsoft Entra ID → App registrations → New registration
  2. Name: Reva
  3. Supported account types: Accounts in this organizational directory only (Single Tenant)
  4. Click Register
  5. Go to Certificates & secrets → New client secret
  6. Copy the secret value immediately — it is shown only once

Record these three values — you will need them later:
Application (client) ID, Directory (tenant) ID, and the Client secret.

2.2 Create an Azure Bot Resource

  1. In Azure Portal, search for Azure Bot → Create
  2. Bot handle: Reva
  3. Pricing tier: F0 (free — sufficient for Teams)
  4. App type: Single Tenant
  5. Use existing app registration → enter the App ID from step 2.1
  6. Data residency: Local (EU: westeurope recommended)

2.3 Configure Messaging Endpoint

In the Azure Bot resource, go to Configuration and set the messaging endpoint:

https://your-domain.example.com/api/messages

2.4 Enable Teams Channel

  1. In the Azure Bot resource, go to Channels
  2. Click Microsoft Teams (Commercial) → Apply

3 Download & Authenticate

3.1 Extract the Deployment Package

Download the deployment package provided by X-idra and extract it:

tar xzf reva-1.0.4.tar.gz
cd reva-1.0.4

3.2 Authenticate with the Container Registry

Log in to the GitHub Container Registry using the credentials provided by X-idra:

echo "$GHCR_TOKEN" | docker login ghcr.io -u "$GHCR_USER" --password-stdin

3.3 Pull the Reva Image

docker compose pull

Pin the version in your .env file: REVA_VERSION=1.0.4. The docker-compose.yml uses this variable to select the image tag.

4 Generate Secrets

All sensitive credentials are stored as files in secrets/ (gitignored). Both Docker Compose and Kubernetes deployments use these same files.

./bin/generate-secrets.sh

The script auto-generates a PostgreSQL admin password, SSL certificates (self-signed CA + server cert), a database URL secret file, and a webhook secret, then prompts for:

Secret File | Description | Source
reva_microsoft_app_password | Azure App client secret | Step 2.1 above
reva_release_password | Digital.ai Release password | Release admin (basic auth)
reva_release_token | Digital.ai Release API token | Release admin (token auth)
reva_ldap_bind_password | LDAP bind password | Your LDAP administrator
reva_jira_token | Jira Cloud API token | Atlassian API tokens

You only need to provide the secrets relevant to your setup. Skip any that don't apply.

5 Configure Environment

cp .env.example .env

Required Settings

Variable | Description | Example
OLLAMA_URL | Ollama API endpoint | http://localhost:11434
REVA_MICROSOFT_APP_ID | Azure App ID (from step 2.1) | d57d6dd5-4399-...
REVA_MICROSOFT_APP_TENANT_ID | Azure Tenant ID (from step 2.1) | 847e08d2-a338-...
REVA_RELEASE_BASE_URL | Digital.ai Release API URL | http://release:5516
REVA_RELEASE_AUTH_TYPE | basic or token | token

LLM Model Setup

Reva uses two models: a small router model for message classification and a larger agent model for tool-calling:

ollama pull llama3.2:3b     # Router (fast, small)
ollama pull qwen3:14b       # Agent (tool-calling, multilingual)
ollama pull nomic-embed-text  # Embeddings (for cross-session memory)

Do not set AGENT_MODEL in your .env file. The agent model is configured per role in config/agent_roles.yaml. Setting AGENT_MODEL would override the router model and degrade performance.

Cloud LLM Providers (Alternative)

Instead of Ollama, you can use a cloud LLM provider:

REVA_OPENAI_API_KEY=sk-...          # OpenAI
REVA_ANTHROPIC_API_KEY=sk-ant-...   # Anthropic Claude

6 Deploy

Choose your deployment target:

Docker Compose (Development & Evaluation)

The fastest way to get Reva running. MCP servers are spawned as Docker containers via the Docker socket.

docker compose up -d


This starts three services:

Docker Compose
├─ reva (Python app) → :3978 → :8000
│  └─ spawns MCP containers via Docker socket
│     ├─ release-mcp (stdio)
│     └─ jira-mcp (stdio)
├─ postgres (pgvector:pg16) → :5432 (SSL)
│  ├─ admin: postgres (superuser)
│  └─ app: reva (restricted)
└─ redis (redis:7-alpine) → :6379

Wait for all services to become healthy:

docker compose ps

Expected output — all services show Up (healthy):

NAME            STATUS              PORTS
reva            Up (healthy)        0.0.0.0:3978→8000/tcp
reva-postgres   Up (healthy)        5432/tcp
reva-redis      Up (healthy)        6379/tcp

Docker Compose mounts the Docker socket (/var/run/docker.sock) to spawn MCP server containers. This approach is simple but not suitable for production Kubernetes environments.

Reverse Proxy (TLS)

Place a reverse proxy in front of Reva to terminate TLS. Example HAProxy backend:

backend reva
    server reva 127.0.0.1:3978 check
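A fuller sketch with TLS termination at the proxy. The frontend name and certificate path are assumptions; adapt them to your HAProxy layout:

```
frontend reva_https
    bind :443 ssl crt /etc/haproxy/certs/reva.pem
    mode http
    default_backend reva

backend reva
    mode http
    server reva 127.0.0.1:3978 check
```

The bundled PEM must contain the certificate chain and private key concatenated, as HAProxy expects for `crt`.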

Database Backup

# Manual backup
./bin/backup-db.sh

# Automated (add to crontab)
0 2 * * * /path/to/reva/bin/backup-db.sh

Backups are saved to backups/ with 30-day retention.

Kubernetes (Test / Acceptance / Production)

Production-grade deployment on k3s. MCP servers run as sidecar containers in the same pod using SSE transport — no Docker socket required.

6.1 Install k3s

# Install k3s (single-node, includes Traefik ingress)
curl -sfL https://get.k3s.io | sh -

# Set up kubeconfig
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config

# Verify
kubectl get nodes

k3s adds only ~512 MB RAM overhead and includes Traefik as the default ingress controller. No additional load balancer setup required.

6.2 Deploy with the Deploy Script

The deploy script handles everything: image build, secret import, manifest application, and health check.

# Full deploy (build image + create secrets + apply manifests)
./bin/k8s-deploy.sh

# Or step by step:
./bin/k8s-deploy.sh --build-only    # Build image and import to k3s
./bin/k8s-deploy.sh --secrets       # Create/update Kubernetes secrets
./bin/k8s-deploy.sh --apply-only    # Apply manifests only (no build)

Architecture

Kubernetes (k3s)
├─ Namespace: reva
│  ├─ Deployment: reva (1 pod, 3 containers)
│  │  ├─ reva (main app) → :8000
│  │  ├─ release-mcp (sidecar) → :8080 (SSE, localhost)
│  │  └─ jira-mcp (sidecar) → :8081 (SSE, localhost)
│  ├─ StatefulSet: postgres (10 Gi PVC, SSL)
│  │  ├─ admin: postgres (superuser)
│  │  └─ app: reva (restricted)
│  ├─ Deployment: redis
│  ├─ Ingress: Traefik → reva:8000 (TLS)
│  ├─ NetworkPolicies (default deny + allow rules)
│  └─ CronJob: db-backup (daily 02:00 UTC)
└─ TLS: terminated at Traefik (websecure entrypoint)

Security hardening: Both Docker Compose and Kubernetes deployments include PostgreSQL user restriction (app user has no superuser privileges), PostgreSQL SSL encryption, and secrets management via Docker secrets / K8s Secrets. See docs/security-hardening.md for the full hardening guide.

Key difference from Docker Compose: MCP servers run as sidecar containers in the same pod, communicating via SSE on localhost instead of Docker stdio. The k8s-specific MCP config (config/mcp_servers.k8s.yaml) is automatically mounted.

6.3 Verify Deployment

# Check all pods are running
kubectl get pods -n reva

# Expected: 3/3 READY for reva, 1/1 for postgres and redis
NAME                     READY   STATUS    RESTARTS   AGE
reva-xxxxx-yyyyy         3/3     Running   0          2m
postgres-0               1/1     Running   0          2m
redis-xxxxx-yyyyy        1/1     Running   0          2m

6.4 Operations

# View logs
kubectl logs -n reva -l app=reva -c reva -f           # App
kubectl logs -n reva -l app=reva -c release-mcp -f    # Release MCP
kubectl logs -n reva -l app=reva -c jira-mcp -f       # Jira MCP

# Restart
kubectl rollout restart deployment/reva -n reva

# Manual database backup
kubectl create job --from=cronjob/db-backup db-backup-manual -n reva

# Rollback to previous version
kubectl rollout undo deployment/reva -n reva

6.5 Ingress & TLS

TLS is enabled by default via the Traefik websecure entrypoint. The k8s/ingress.yaml routes traffic through HTTPS. For custom hostnames, edit k8s/ingress.yaml. For production certs, use cert-manager or provide your own TLS secret.

# Generate self-signed TLS certificate
./bin/generate-k8s-tls.sh

# Or use cert-manager for production
# traefik.ingress.kubernetes.io/router.entrypoints: websecure
# cert-manager.io/cluster-issuer: letsencrypt-prod

6.6 ConfigMap Customization

Non-secret environment variables are stored in k8s/configmap.yaml. After editing, apply changes:

kubectl apply -k k8s/
kubectl rollout restart deployment/reva -n reva

7 Teams App Sideloading

7.1 Prepare the App Package

Run the manifest configurator script. It prompts for your Azure App ID and bot endpoint domain, then creates the Teams app package:

./bin/configure-manifest.sh

You only need to provide two values: the Azure App ID from step 2.1 and the bot endpoint domain.

The developer section (company name, URLs) stays as X-idra Systems GmbH.

Manual alternative: Edit appPackage/manifest.json directly — replace {{AZURE_APP_ID}} and {{BOT_DOMAIN}}, then zip: cd appPackage && zip ../reva-teams-app.zip manifest.json color.png outline.png

7.2 Enable Custom App Uploads

In the Teams Admin Center → Teams apps → Setup policies → enable Upload custom apps.

7.3 Sideload the App

  1. Open Microsoft Teams → Apps → Manage your apps → Upload an app
  2. Select Upload a custom app → choose reva-teams-app.zip
  3. Click Add

8 Verify Installation

Health Check

# Docker Compose
curl http://localhost:3978/api/health

# Kubernetes
kubectl exec -n reva deploy/reva -c reva -- \
  python -c "import urllib.request; print(urllib.request.urlopen('http://localhost:8000/api/health').read().decode())"

Expected response:

{
  "status": "ok",
  "adapter_initialized": true,
  "db": true,
  "mcp": {
    "release": { "connected": true, "tools": 38 },
    "jira": { "connected": true, "tools": 29 }
  }
}
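The payload can also be checked programmatically, e.g. in a deployment script. A small sketch whose readiness criteria mirror the fields shown above:

```python
def is_healthy(body: dict) -> bool:
    """True when the /api/health payload reports full readiness."""
    mcp = body.get("mcp", {})
    return (
        body.get("status") == "ok"
        and body.get("adapter_initialized") is True
        and body.get("db") is True
        and all(server.get("connected") for server in mcp.values())
    )
```

Any MCP server with "connected": false makes the check fail, matching the 503 behavior described in the Troubleshooting section.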

Functional Test

In Microsoft Teams, send these messages to Reva:

  1. “Hello” — Reva should greet you
  2. “List all active releases” — Reva should query Release and show results
  3. “Show my dashboard” — personal release overview

9 Optional Integrations

Jira Cloud

REVA_JIRA_URL=https://yourorg.atlassian.net
REVA_JIRA_USERNAME=your-email@company.com

Ensure the reva_jira_token secret file contains a valid Atlassian API token.

LDAP Group Resolution

REVA_LDAP_URL=ldap://ldap.company.com:389
REVA_LDAP_BIND_DN=cn=reva,ou=services,dc=company,dc=com
REVA_LDAP_GROUP_BASE=ou=groups,dc=company,dc=com

Release Webhook Notifications

Copy plugins/xlr-reva-notify-plugin-1.0.4.jar to your Digital.ai Release server's plugin directory and restart Release. Then point the webhook to:

https://your-domain.example.com/api/notify

Autonomous Monitoring

REVA_MONITOR_ENABLED=true
REVA_MONITOR_INTERVAL=300
REVA_MONITOR_OVERDUE_THRESHOLD_HOURS=24
REVA_MONITOR_STALE_APPROVAL_HOURS=48

User Authorization

REVA_AUTH_ENABLED=true
REVA_AUTH_USER_MAP=TeamsName1=releaseUser1,TeamsName2=releaseUser2
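The map format is comma-separated TeamsName=releaseUser pairs. An illustrative parser for that format; Reva's actual parsing may differ:

```python
def parse_user_map(raw: str) -> dict[str, str]:
    """Parse 'Name1=user1,Name2=user2' into a Teams-to-Release user dict."""
    pairs = (p.split("=", 1) for p in raw.split(",") if "=" in p)
    return {k.strip(): v.strip() for k, v in pairs}

parse_user_map("TeamsName1=releaseUser1,TeamsName2=releaseUser2")
```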

Cross-Session Memory

Reva can remember user preferences and context across conversations. Memories are stored per user with semantic deduplication and automatic cleanup.

MEMORY_ENABLED=true
MEMORY_EXTRACTION_ENABLED=true

Requires the nomic-embed-text embedding model and pgvector (included in the default PostgreSQL image).

GDPR: Users can type show memories to see what Reva remembers, and forget memories to delete all stored memories. Memories are soft-deleted with an audit trail.

10 Troubleshooting

Cannot pull image

If docker compose pull fails with an authentication error, re-run the docker login command from section 3.2 and verify the ghcr.io credentials provided by X-idra.

Health check returns 503

MCP servers failed to connect. Check credentials, network access, and logs:

docker compose logs reva | grep -i "mcp\|error"
kubectl logs -n reva -l app=reva -c reva | grep -i "mcp\|error"
kubectl logs -n reva -l app=reva -c release-mcp
kubectl logs -n reva -l app=reva -c jira-mcp

Teams messages not arriving

Verify that the messaging endpoint (section 2.3) is reachable from the internet over HTTPS, and that the Microsoft Teams channel is enabled on the Azure Bot resource (section 2.4).

MCP containers not starting (Docker Compose)

docker ps                    # Check Docker socket access
docker pull xebialabsearlyaccess/dai-release-mcp:25.3.1-beta.212
docker pull ghcr.io/sooperset/mcp-atlassian:0.21.0

MCP sidecars not ready (Kubernetes)

# Check sidecar status
kubectl describe pod -n reva -l app=reva

# Verify ConfigMaps exist
kubectl get configmap -n reva

LLM responses are slow

See the Production Sizing Guide in section 1: Ollama serializes GPU requests, so concurrent users queue behind each other. For more than a few users, switch to vLLM or a cloud LLM provider.

Getting Help

Contact us at info@x-idra.de.

What’s Next

Installation complete? Next, explore the optional integrations in section 9 and review docs/security-hardening.md before going to production.
