Core Dependencies
| Component | Supported | Tested With | Notes |
|---|---|---|---|
| Python | 3.11 | 3.11 | Bundled in Docker image |
| PostgreSQL | 15, 16 | 16 | pgvector extension required |
| Redis | 7.x | 7 (Alpine) | Used for caching |
| Docker Engine | 24+ | 27.x | Docker Compose v2 required |
| Docker Compose | v2.20+ | v2.32 | v1 not supported |
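PostgreSQL must have the pgvector extension available, as noted in the table above. A minimal pre-flight check, sketched with psycopg 3; the host, database name, and credentials are placeholders:

```python
# Pre-flight check: confirm the pgvector extension can be enabled.
# Assumes psycopg 3; connection details below are placeholders.
import psycopg

conn = psycopg.connect("host=localhost dbname=reva user=reva password=changeme")
with conn:
    # Fails if the pgvector packages are not installed on the server.
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    row = conn.execute(
        "SELECT extversion FROM pg_extension WHERE extname = 'vector';"
    ).fetchone()
    print(f"pgvector version: {row[0] if row else 'not installed'}")
```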
Container Orchestration
| Platform | Supported | Tested With | Notes |
|---|---|---|---|
| Docker Compose | v2.20+ | v2.32 | Dev / eval deployments |
| k3s | 1.28 – 1.32 | 1.32 | Production recommended |
| Kubernetes | 1.28+ | 1.32 (k3s) | Helm chart or Kustomize |
| Helm | 3.12+ | 3.16 | Chart version 0.1.0 |
Digital.ai Release
| Component | Supported Versions | Notes |
|---|---|---|
| Release | 23.3, 24.1, 24.3, 25.1 | MCP server connects via REST API |
| Release MCP Server | Bundled | Runs as sidecar container |
Jira (Optional)
| Component | Supported | Notes |
|---|---|---|
| Jira Cloud | Current | API token authentication |
| Jira Data Center | 9.x+ | API token or basic auth |
| Jira MCP Server | Bundled | Runs as sidecar container |
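For Jira Cloud, the API token is combined with the account email as HTTP basic credentials. A minimal connectivity sketch using the requests library; the site URL, email, and token are placeholders:

```python
# Verify Jira Cloud credentials (email + API token as basic auth).
# The site URL, email, and token below are placeholders.
import requests

JIRA_BASE = "https://your-site.atlassian.net"
resp = requests.get(
    f"{JIRA_BASE}/rest/api/3/myself",
    auth=("you@example.com", "your-api-token"),
    timeout=10,
)
resp.raise_for_status()
print("Authenticated as:", resp.json().get("displayName"))
```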
LLM / AI
Local Models (Ollama)
| Component | Supported | Recommended | Notes |
|---|---|---|---|
| Ollama | 0.5+ | Latest | Hosts local LLM models |
| Router model | llama3.2:3b | llama3.2:3b | Fast intent classification |
| Agent model | qwen3:14b | qwen3:14b | Tool-calling, format compliance |
| Embedding model | nomic-embed-text | nomic-embed-text | Cross-session memory |
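The embedding model is served by Ollama alongside the chat models. A quick smoke test that nomic-embed-text is pulled and responding, sketched with the requests library against Ollama's default port (11434); the host is a placeholder:

```python
# Smoke test: request an embedding from Ollama with nomic-embed-text.
# Assumes Ollama is reachable on its default port; adjust the host as needed.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "hello from Reva"},
    timeout=30,
)
resp.raise_for_status()
embedding = resp.json()["embedding"]
print(f"Got embedding of dimension {len(embedding)}")
```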
Cloud LLM Alternatives
| Provider | Support Level | Config Variable |
|---|---|---|
| Anthropic Claude | Supported | REVA_ANTHROPIC_API_KEY |
| OpenAI | Supported | REVA_OPENAI_API_KEY |
| vLLM | Experimental | REVA_VLLM_URL |
When using a cloud LLM for the agent model, a local GPU large enough to host qwen3:14b is no longer required. Ollama itself is still needed, because the router model (llama3.2:3b) runs locally.
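A minimal sketch of checking which backend the environment points at, using the configuration variables from the table above; the precedence order shown here is an assumption for illustration, not Reva's documented behavior:

```python
# Illustrative only: report which LLM backend the environment selects.
# Variable names come from the table above; the precedence order is an assumption.
import os

if os.getenv("REVA_ANTHROPIC_API_KEY"):
    backend = "Anthropic Claude (cloud)"
elif os.getenv("REVA_OPENAI_API_KEY"):
    backend = "OpenAI (cloud)"
elif os.getenv("REVA_VLLM_URL"):
    backend = "vLLM (experimental)"
else:
    backend = "local Ollama (qwen3:14b)"

print(f"Agent model backend: {backend}")
print("Router model: llama3.2:3b via Ollama (always local)")
```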
GPU Requirements
| GPU | VRAM | Suitability | Notes |
|---|---|---|---|
| NVIDIA RTX 5070 Ti 16GB | 16 GB | MINIMUM | Single-user only, ~22s response time |
| NVIDIA RTX 5080 16GB | 16 GB | RECOMMENDED | Faster inference, single-user or light concurrency |
| NVIDIA RTX 5090 | 32 GB | RECOMMENDED | Headroom for concurrent users |
| NVIDIA A10 | 24 GB | RECOMMENDED | Data center, good throughput |
| NVIDIA L40S | 48 GB | OPTIMAL | Multi-user, parallel inference viable |
Important: Do NOT set OLLAMA_NUM_PARALLEL=2 on 16 GB GPUs — it causes 5× slowdown due to KV cache overflow.
Driver Requirements
| Component | Minimum Version |
|---|---|
| NVIDIA Driver | 535+ |
| CUDA | 12.1+ |
| NVIDIA Container Toolkit | Latest |
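To confirm the host meets the driver baseline before deploying, nvidia-smi can be queried directly; a small sketch using Python's subprocess module:

```python
# Check that the NVIDIA driver on the host meets the 535+ baseline.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,driver_version,memory.total",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
).stdout.strip()

for line in out.splitlines():
    name, driver, vram = [field.strip() for field in line.split(",")]
    major = int(driver.split(".")[0])
    status = "OK" if major >= 535 else "TOO OLD (need 535+)"
    print(f"{name}: driver {driver} [{status}], VRAM {vram}")
```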
Microsoft Teams / Azure
| Component | Supported | Notes |
|---|---|---|
| Microsoft Teams | Current (desktop, web, mobile) | Bot Framework v4 |
| Azure Bot Registration | Single-tenant, multi-tenant | REVA_MICROSOFT_APP_TYPE |
| Azure AD | Current | JWT validation for incoming activities |
LDAP (Optional)
| Component | Supported | Notes |
|---|---|---|
| Active Directory | 2016+ | LDAP / LDAPS |
| OpenLDAP | 2.x | For group/role resolution |
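Group and role resolution is done through standard LDAP searches. A minimal sketch using the ldap3 library over LDAPS; the server, bind DN, password, and search base are placeholders:

```python
# Resolve a user's group memberships over LDAPS (Active Directory-style attributes).
# Server, bind credentials, and search base below are placeholders.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("ldaps://dc.example.com", use_ssl=True, get_info=ALL)
conn = Connection(server, user="CN=svc-reva,OU=Service,DC=example,DC=com",
                  password="changeme", auto_bind=True)

conn.search(
    search_base="DC=example,DC=com",
    search_filter="(sAMAccountName=jdoe)",
    search_scope=SUBTREE,
    attributes=["memberOf"],
)
for entry in conn.entries:
    print(entry.memberOf)
```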
Operating System (Host)
| OS | Tested With | Notes |
|---|---|---|
| Ubuntu | 22.04, 24.04 | Recommended for production |
| Debian | 12 | Supported |
| RHEL / Rocky | 8, 9 | Supported |
Reva runs in containers, so the host OS matters only for Docker/k3s and NVIDIA driver support.