Contents
  1. Overview
  2. Pre-Upgrade Checklist
  3. Docker Compose Upgrade
  4. Kubernetes Kustomize Upgrade
  5. Kubernetes Helm Upgrade
  6. Rollback Procedure
  7. Database Migrations
  8. MCP Server Updates
  9. Ollama Model Updates
  10. Version Pinning
  11. Troubleshooting

Golden rule: Always back up your database before upgrading. Database migrations run automatically at startup and some may not be reversible.

Overview

Version Scheme

Reva follows Semantic Versioning (MAJOR.MINOR.PATCH): major releases may contain breaking changes, minor releases add backwards-compatible functionality, and patch releases contain backwards-compatible fixes.

Component Versions

A Reva deployment consists of several independently versioned components:

Component            Version Source                                 Example
Reva application     REVA_VERSION in .env / image tag               1.0.4
Release MCP sidecar  Image tag in docker-compose or K8s manifest    25.3.0-beta.926
Jira MCP sidecar     Image tag in docker-compose or K8s manifest    0.21.0
PostgreSQL           Base image tag                                 pg16
Redis                Base image tag                                 7-alpine
Ollama models        Model tag on Ollama server                     qwen3:14b

Upgrade Philosophy

  1. Read the CHANGELOG before every upgrade
  2. Back up the database
  3. Verify the current deployment is healthy
  4. Upgrade one component at a time when possible
  5. Verify health after each change
  6. Know how to roll back before you start

Pre-Upgrade Checklist

Run through this checklist before every upgrade, regardless of deployment method.

Note the Current Version

# Docker Compose
docker inspect reva --format '{{.Config.Image}}'

# Kubernetes
kubectl get deployment reva -n reva -o jsonpath='{.spec.template.spec.containers[0].image}'

Verify Current Health

# Docker Compose
curl -s http://localhost:3978/api/health | python3 -m json.tool

# Kubernetes
kubectl exec -n reva deploy/reva -c reva -- \
  python -c "import urllib.request; print(urllib.request.urlopen('http://localhost:8000/api/health').read().decode())"

Expected output includes "status": "ok" and all MCP servers showing "connected": true.
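
If you want a scriptable gate rather than eyeballing JSON, a minimal check against the documented "status" field works (a sketch; adjust the JSON path if your payload nests it differently):

curl -s http://localhost:3978/api/health | python3 -c \
  'import json,sys; d=json.load(sys.stdin); assert d.get("status") == "ok", d; print("healthy")'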

Back Up the Database

# Docker Compose
./bin/backup-db.sh

Creates a compressed backup at backups/reva_YYYY-MM-DD_HHMMSS.sql.gz with 30-day retention.
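
Before relying on that file, you can sanity-check the newest archive for corruption:

# Test the integrity of the most recent backup
gunzip -t "$(ls -t backups/reva_*.sql.gz | head -1)" && echo "backup archive OK"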

# Kubernetes: trigger a manual backup
kubectl create job --from=cronjob/db-backup manual-backup-pre-upgrade -n reva

# Wait for completion
kubectl wait --for=condition=complete job/manual-backup-pre-upgrade -n reva --timeout=120s

# Verify success
kubectl logs job/manual-backup-pre-upgrade -n reva

Read the CHANGELOG

cat CHANGELOG.md

Look for breaking changes, new or renamed configuration options, database migration notes, and updated MCP sidecar or Ollama model requirements between your current version and the target.

Docker Compose Upgrade

1 Read the CHANGELOG

cat CHANGELOG.md

Pay attention to any breaking changes between your current version and the target version.

2 Back up the database

./bin/backup-db.sh

# Verify the backup was created
ls -lh backups/reva_*.sql.gz | tail -1

3 Update the version

Edit REVA_VERSION in .env:

# Example: upgrade from 1.0.4 to 1.0.5
sed -i.bak 's/^REVA_VERSION=.*/REVA_VERSION=1.0.5/' .env

The docker-compose.yml references this variable:

image: ghcr.io/x-idra-systems-gmbh/reva:${REVA_VERSION:-latest}
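
To confirm the substitution took effect before pulling, you can render the effective configuration:

# Show the image tags Compose will actually use
docker compose config | grep 'image:'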

4 Pull new images

docker compose pull

This pulls the new Reva image from ghcr.io. If you also updated MCP sidecar versions in docker-compose.yml, those images will be pulled too.

5 Restart services

docker compose up -d

Docker Compose will recreate only the containers whose images changed. The depends_on configuration ensures PostgreSQL and Redis are healthy before Reva starts.
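
To confirm which containers were recreated and that Reva is running the new image:

docker compose ps
docker inspect reva --format '{{.Config.Image}}'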

6 Verify health

curl -s http://localhost:3978/api/health | python3 -m json.tool

Confirm that "status" is "ok" and that both MCP servers report "connected": true.

7 Check logs for migration output

# Check for Alembic migration output
docker compose logs reva | grep -i -E "(alembic|migration|upgrade)"

# Check for any errors
docker compose logs reva | grep -i error | tail -20

# Follow logs to watch for issues
docker compose logs -f reva

Kubernetes Kustomize Upgrade

1 Read the CHANGELOG

cat CHANGELOG.md

2 Back up the database

kubectl create job --from=cronjob/db-backup manual-backup-pre-upgrade -n reva
kubectl wait --for=condition=complete job/manual-backup-pre-upgrade -n reva --timeout=120s
kubectl logs job/manual-backup-pre-upgrade -n reva

3 Update image tags

Edit k8s/reva-deployment.yaml to set the new image tag for the Reva container.

If building locally for k3s:

# Build the new image
docker build -t reva:<new-version> .

# Import into k3s
docker save reva:<new-version> | sudo k3s ctr images import -

If pulling from ghcr.io:

docker pull ghcr.io/x-idra-systems-gmbh/reva:<new-version>
docker tag ghcr.io/x-idra-systems-gmbh/reva:<new-version> reva:<new-version>
docker save reva:<new-version> | sudo k3s ctr images import -
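
Either way, confirm k3s can see the imported image before applying manifests:

sudo k3s ctr images ls | grep reva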

4 Apply manifests

kubectl apply -k k8s/

5 Watch the rollout

kubectl rollout status deployment/reva -n reva --timeout=120s

This blocks until the new pod is ready or the timeout is reached.

6 Verify health

# Port-forward and check
kubectl port-forward -n reva svc/reva 8000:8000 &
sleep 2  # give the port-forward a moment to establish
curl -s http://localhost:8000/api/health | python3 -m json.tool
kill %1

# Or exec into the pod
kubectl exec -n reva deploy/reva -c reva -- \
  python -c "import urllib.request; print(urllib.request.urlopen('http://localhost:8000/api/health').read().decode())"

7 Check logs

kubectl logs -n reva deploy/reva -c reva | grep -i -E "(alembic|migration|upgrade)"
kubectl logs -n reva deploy/reva -c reva | grep -i error | tail -20

Kubernetes Helm Upgrade

1 Read the CHANGELOG

cat CHANGELOG.md

2 Back up the database

kubectl create job --from=cronjob/db-backup manual-backup-pre-upgrade -n reva
kubectl wait --for=condition=complete job/manual-backup-pre-upgrade -n reva --timeout=120s
kubectl logs job/manual-backup-pre-upgrade -n reva

3 Update the version

Option A — edit values.yaml:

# values.yaml
reva:
  image:
    tag: "1.0.5"

Option B — pass the version on the command line:

helm upgrade reva helm/reva/ -n reva --set reva.image.tag=1.0.5
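
With either option, you can render the templates first and confirm the tag change took effect (assumes the chart emits standard image: lines):

helm upgrade reva helm/reva/ -n reva -f values.yaml --dry-run | grep 'image:'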

4 Run the upgrade

# With a values file
helm upgrade reva helm/reva/ -n reva -f values.yaml

# Or with --set
helm upgrade reva helm/reva/ -n reva --set reva.image.tag=1.0.5

5 Verify the rollout

# Check Helm release status
helm status reva -n reva

# Watch pod rollout
kubectl rollout status deployment/reva -n reva --timeout=120s

# Verify health
kubectl exec -n reva deploy/reva -c reva -- \
  python -c "import urllib.request; print(urllib.request.urlopen('http://localhost:8000/api/health').read().decode())"

6 Check logs

kubectl logs -n reva deploy/reva -c reva | grep -i -E "(alembic|migration|upgrade)"
kubectl logs -n reva deploy/reva -c reva | grep -i error | tail -20

Rollback Procedure

Docker Compose

# Revert to the previous version
sed -i.bak 's/^REVA_VERSION=.*/REVA_VERSION=1.0.4/' .env

# Restart with the old image
docker compose up -d

# Verify
curl -s http://localhost:3978/api/health | python3 -m json.tool

Kubernetes (Kustomize)

# Roll back to the previous revision
kubectl rollout undo deployment/reva -n reva

# Verify
kubectl rollout status deployment/reva -n reva --timeout=120s

Then update k8s/reva-deployment.yaml to match the reverted image tag so that the next kubectl apply -k k8s/ does not re-apply the broken version.
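
To see which revisions are available, or to jump to a specific one rather than just the previous:

kubectl rollout history deployment/reva -n reva
kubectl rollout undo deployment/reva -n reva --to-revision=<N>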

Helm

# List release history
helm history reva -n reva

# Roll back to the previous revision
helm rollback reva -n reva

# Or roll back to a specific revision
helm rollback reva 3 -n reva

# Verify
helm status reva -n reva
kubectl rollout status deployment/reva -n reva --timeout=120s

Database Rollback

If a migration corrupted data or schema, restore from the pre-upgrade backup.

Docker Compose

# Stop Reva (keep Postgres running)
docker compose stop reva

# Restore from backup
gunzip -c backups/reva_2026-03-15_020000.sql.gz | \
  docker exec -i reva-postgres psql -U postgres -d reva

# Revert REVA_VERSION in .env to the previous version
sed -i.bak 's/^REVA_VERSION=.*/REVA_VERSION=1.0.4/' .env

# Restart with the old version
docker compose up -d

Kubernetes

# Scale down Reva to stop writes
kubectl scale deployment/reva -n reva --replicas=0

# Restore from backup using a temporary pod
# (the pod must have the backup storage mounted at /backups, e.g. via
#  --overrides; adjust to wherever your backup CronJob writes)
kubectl run pg-restore -n reva --rm -it \
  --image=pgvector/pgvector:pg16 \
  --env="PGPASSWORD=<admin-password>" \
  -- sh -c 'gunzip -c /backups/reva_<TIMESTAMP>.sql.gz | psql -h postgres -U postgres -d reva'

# Revert the image tag in your manifests, then re-apply
kubectl apply -k k8s/
# Or: helm rollback reva -n reva

# Scale back up
kubectl scale deployment/reva -n reva --replicas=1
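
Before scaling back up, you can spot-check that the restore produced the expected tables, reusing the temporary-pod pattern above:

kubectl run pg-check -n reva --rm -it \
  --image=pgvector/pgvector:pg16 \
  --env="PGPASSWORD=<admin-password>" \
  -- psql -h postgres -U postgres -d reva -c '\dt'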

Warning: Database rollbacks restore data to the backup point. Any data written between the backup and the rollback will be lost.

Database Migrations

Reva uses two migration mechanisms that both run automatically at startup — no manual intervention is needed for normal upgrades.

Alembic Migrations (Renfield Platform)

The Renfield platform manages its own schema via Alembic. Migrations run automatically when the application starts. You will see output like:

INFO  [alembic.runtime.migration] Running upgrade abc123 -> def456, add memory table

SQLAlchemy metadata.create_all() (Reva Tables)

Reva-specific tables are created via SQLAlchemy's metadata.create_all(), which is idempotent. It creates new tables if they do not exist and leaves existing tables untouched.

What To Do If a Migration Fails

1 Check the logs for the specific error:

# Docker Compose
docker compose logs reva | grep -i -E "(alembic|error|traceback)" | tail -40

# Kubernetes
kubectl logs -n reva deploy/reva -c reva | grep -i -E "(alembic|error|traceback)" | tail -40

2 Common causes: lost database connectivity during startup, insufficient database privileges, or manual schema changes that conflict with the migration (a schema-revision check is shown after this list).

3 Recovery: If a migration fails, the application will not start and the health check will fail. Roll back to the previous version (see Rollback Procedure) and report the migration error.
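
Alembic records the applied revision in its default alembic_version table, so you can check where the schema currently stands (Compose container name as used elsewhere in this guide):

docker exec -i reva-postgres psql -U postgres -d reva \
  -c 'SELECT version_num FROM alembic_version;'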

MCP Server Updates

The Release MCP and Jira MCP sidecars are independently versioned. You can update them without changing the Reva application version.

For Docker Compose deployments, MCP containers run via Docker-in-Docker (configured in config/mcp_servers.yaml). Update the image tags there:

vi config/mcp_servers.yaml
# Update the image tag for release-mcp or jira-mcp

Then restart Reva so it recreates the MCP containers:

docker compose restart reva
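
To confirm the sidecars came back on the new tags (reva-release-mcp and reva-jira-mcp both match the name filter):

docker ps --filter name=mcp --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'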

For Kustomize deployments, edit k8s/reva-deployment.yaml and update the sidecar image tags:

# Release MCP sidecar
- name: release-mcp
  image: xebialabsearlyaccess/dai-release-mcp:25.3.0-beta.NEW_TAG

# Jira MCP sidecar
- name: jira-mcp
  image: ghcr.io/sooperset/mcp-atlassian:0.NEW_TAG

Apply the change:

kubectl apply -k k8s/
kubectl rollout status deployment/reva -n reva --timeout=120s

For Helm deployments, update the sidecar image tags in values.yaml:

mcp:
  release:
    image:
      tag: "25.3.0-beta.NEW_TAG"
  jira:
    image:
      tag: "0.NEW_TAG"

Then upgrade:

helm upgrade reva helm/reva/ -n reva -f values.yaml

Verify MCP Connectivity

After updating MCP sidecars, confirm they reconnect:

# Docker Compose
curl -s http://localhost:3978/api/health | python3 -m json.tool

# Kubernetes
kubectl exec -n reva deploy/reva -c reva -- \
  python -c "import urllib.request; print(urllib.request.urlopen('http://localhost:8000/api/health').read().decode())"

Both MCP servers should show "connected": true.

Ollama Model Updates

Reva uses Ollama for LLM inference. The agent model (qwen3:14b) and router model (llama3.2:3b) run on the Ollama server, which is external to the Reva deployment.

Pull a New Model Version

# On the Ollama server
ollama pull qwen3:14b

# Verify the model is available
ollama list | grep qwen3

Verify the Model Works

ollama run qwen3:14b "Say hello in one sentence" --verbose

Update the Model Reference

If switching to a different model entirely (not just updating the same tag), update the agent role configuration:

vi config/agent_roles.yaml
# Update the model field for each role

Then restart Reva:

# Docker Compose
docker compose restart reva

# Kubernetes
kubectl rollout restart deployment/reva -n reva

Important: Do not set AGENT_MODEL in .env — it overrides the router model. Agent models are configured per role in config/agent_roles.yaml.
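
A quick guard against this misconfiguration:

# Warn if AGENT_MODEL is set in .env
grep -n '^AGENT_MODEL=' .env && echo 'remove this line' || echo 'OK: AGENT_MODEL not set'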

Version Pinning

Pin Reva

Always pin REVA_VERSION in .env to a specific version. Never use latest in production:

# .env
REVA_VERSION=1.0.4

Pin MCP Sidecars

In k8s/reva-deployment.yaml or values.yaml, use explicit image tags:

# Good
image: xebialabsearlyaccess/dai-release-mcp:25.3.0-beta.926
image: ghcr.io/sooperset/mcp-atlassian:0.21.0

# Bad -- do not use in production
image: xebialabsearlyaccess/dai-release-mcp:latest

Pin Infrastructure

Pin PostgreSQL and Redis versions:

# docker-compose.yml / K8s manifests
image: pgvector/pgvector:pg16    # not pgvector/pgvector:latest
image: redis:7-alpine            # not redis:latest

Record Deployed Versions

After each upgrade, record the deployed versions for audit and rollback:

echo "=== Reva ===" && docker inspect reva --format '{{.Config.Image}}'
echo "=== Postgres ===" && docker inspect reva-postgres --format '{{.Config.Image}}'
echo "=== Redis ===" && docker inspect reva-redis --format '{{.Config.Image}}'
echo "=== Ollama models ===" && ollama list

Keeping a version inventory makes rollbacks straightforward and helps diagnose environment drift between staging and production.
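
One lightweight way to keep that inventory is appending a dated snapshot to a flat file (versions.log is an arbitrary path; assumes the Ollama CLI is reachable from this host):

{
  date -u +'== %Y-%m-%dT%H:%M:%SZ =='
  docker inspect reva reva-postgres reva-redis --format '{{.Name}} {{.Config.Image}}'
  ollama list
} >> versions.log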

Troubleshooting

Health Check Failing After Upgrade

Symptom: GET /api/health returns 503 or times out.

# Check what's failing
curl -s http://localhost:3978/api/health | python3 -m json.tool

# Check container status
docker compose ps

# Check logs for startup errors
docker compose logs reva --tail 50

Common causes include a failed database migration (check the logs for Alembic errors), PostgreSQL or Redis not yet healthy, or missing or invalid secrets.

MCP Server Not Connecting

Symptom: Health check shows "connected": false for one or both MCP servers.

# Docker Compose
docker logs reva-release-mcp 2>&1 | tail -20
docker logs reva-jira-mcp 2>&1 | tail -20

# Kubernetes
kubectl logs -n reva deploy/reva -c release-mcp --tail 20
kubectl logs -n reva deploy/reva -c jira-mcp --tail 20

Common causes include a wrong or not-yet-published sidecar image tag, expired Release or Jira credentials, or a sidecar that is still starting up.

Alembic Migration Error

Symptom: Reva fails to start with an Alembic error in the logs.

docker compose logs reva | grep -A5 "alembic"

Common causes include a schema left partially migrated by an earlier failed upgrade, or manual schema changes that conflict with the migration.

Resolution:

  1. Roll back to the previous Reva version
  2. Restore the database from backup if needed
  3. Report the migration error with logs attached

Container Fails to Pull Image

Symptom: docker compose pull fails with authentication error.

# Verify ghcr.io login
docker login ghcr.io

# Check that REVA_VERSION matches a published tag
docker manifest inspect ghcr.io/x-idra-systems-gmbh/reva:1.0.5

If your GHCR token has expired, generate a new Personal Access Token with read:packages scope and log in again:

echo $GHCR_TOKEN | docker login ghcr.io -u <username> --password-stdin

Pod Stuck in CrashLoopBackOff (Kubernetes)

Symptom: The Reva pod keeps restarting.

# Check pod events
kubectl describe pod -n reva -l app.kubernetes.io/name=reva

# Check logs from the crashing container
kubectl logs -n reva -l app.kubernetes.io/name=reva -c reva --previous

Common causes include a failed database migration, an image tag that was never imported into k3s, or missing secrets (see below).

Secrets Not Available After Upgrade

Symptom: Reva logs show authentication errors to Release, Jira, or the database.

# Docker Compose: verify secrets exist
ls -la secrets/

# Regenerate if needed
./bin/generate-secrets.sh

# Kubernetes: verify the secret exists
kubectl get secret reva-secrets -n reva -o jsonpath='{.data}' | python3 -m json.tool

# Update values.yaml with new secret values, then upgrade
helm upgrade reva helm/reva/ -n reva -f values.yaml
