Deployment & Operations Guide
Operational procedures for the nuuvi platform. For an architecture overview, see ARCHITECTURE.md.
Production Stack
| Component | Technology | Image/Version |
|---|---|---|
| Application | FrankenPHP | dunglas/frankenphp:1-php8.4-alpine |
| Reverse proxy | Caddy + Cloudflare DNS plugin | Custom build (infrastructure/prod/caddy/Dockerfile) |
| Database | PostgreSQL 16 Alpine | postgres:16-alpine |
| Cache/Queue | Redis 7 Alpine | redis:7-alpine (password-protected) |
| Identity | Zitadel | ghcr.io/zitadel/zitadel:latest |
| Async workers | Symfony Messenger | Same FrankenPHP image |
| Monitoring | Uptime Kuma | louislam/uptime-kuma:1 |
Docker Build
Multi-stage Dockerfile at project root:
Stage 1: composer-deps
Base: dunglas/frankenphp:1-php8.4-alpine
Extensions: pdo_pgsql, redis, intl, gd, zip, sodium, opcache
Composer install: --no-dev --no-scripts --no-autoloader
Stage 2: assets
Full source copy
Optimized autoloader: composer dump-autoload --optimize --classmap-authoritative
AssetMapper compile: php bin/console asset-map:compile --env=prod
Stage 3: production
PHP production ini: infrastructure/docker/php/prod.ini
Caddy config: infrastructure/docker/Caddyfile.frankenphp
Cache warmup: php bin/console cache:warmup --env=prod
Healthcheck: curl -f http://localhost:8080/healthz
Ports: 8080 (HTTP), 443 (HTTPS)
CMD: frankenphp run --config /etc/caddy/Caddyfile
Build locally: make docker-build
CI builds: pushed to registry.gitlab.com/nuuvi/church-app:{tag}
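Assembled from the stage list above, the Dockerfile looks roughly like this. This is a sketch, not the real file: the composer binary copy, the use of install-php-extensions, and the /app paths are assumptions.

```dockerfile
# Stage 1: production PHP dependencies only
FROM dunglas/frankenphp:1-php8.4-alpine AS composer-deps
# install-php-extensions ships with the FrankenPHP images
RUN install-php-extensions pdo_pgsql redis intl gd zip sodium opcache
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --no-autoloader

# Stage 2: full source, optimized autoloader, compiled assets
FROM composer-deps AS assets
COPY . .
RUN composer dump-autoload --optimize --classmap-authoritative \
 && php bin/console asset-map:compile --env=prod

# Stage 3: final production image
FROM dunglas/frankenphp:1-php8.4-alpine AS production
COPY infrastructure/docker/php/prod.ini /usr/local/etc/php/conf.d/prod.ini
COPY infrastructure/docker/Caddyfile.frankenphp /etc/caddy/Caddyfile
COPY --from=assets /app /app
WORKDIR /app
RUN php bin/console cache:warmup --env=prod
HEALTHCHECK CMD curl -f http://localhost:8080/healthz || exit 1
EXPOSE 8080 443
CMD ["frankenphp", "run", "--config", "/etc/caddy/Caddyfile"]
```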
PHP Production Config (infrastructure/docker/php/prod.ini)
- OpCache: 256MB memory, 20k max accelerated files, timestamp validation disabled
- Memory limit: 256MB
- Upload: 20MB max file size, 25MB post size
- Realpath cache: 4MB, 10-minute TTL
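The settings above would correspond to an ini fragment along these lines (a sketch reconstructed from the list; the exact directive set in prod.ini may differ):

```ini
; infrastructure/docker/php/prod.ini (sketch)
memory_limit = 256M
upload_max_filesize = 20M
post_max_size = 25M

opcache.memory_consumption = 256
opcache.max_accelerated_files = 20000
opcache.validate_timestamps = 0

realpath_cache_size = 4M
realpath_cache_ttl = 600
```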
Caddy Configuration
Production (infrastructure/prod/Caddyfile)
Wildcard TLS via Cloudflare DNS-01 challenge:
- nuuvi.app → app:8080
- *.nuuvi.app (wildcard certificate):
  - auth.nuuvi.app → zitadel:8080
  - status.nuuvi.app → uptime-kuma:3001
  - *.nuuvi.app → app:8080 (org slug subdomains)
Custom Caddy build (infrastructure/prod/caddy/Dockerfile) adds github.com/caddy-dns/cloudflare plugin via xcaddy.
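Put together, the routing maps to a Caddyfile roughly like the following sketch. The real file lives at infrastructure/prod/Caddyfile; the host-matcher layout shown here is Caddy's documented pattern for serving multiple subdomains off one wildcard certificate, not necessarily the file's exact structure.

```caddyfile
nuuvi.app {
	tls {
		dns cloudflare {env.CLOUDFLARE_DNS_API_TOKEN}
	}
	reverse_proxy app:8080
}

*.nuuvi.app {
	tls {
		dns cloudflare {env.CLOUDFLARE_DNS_API_TOKEN}
	}
	@auth host auth.nuuvi.app
	handle @auth {
		reverse_proxy zitadel:8080
	}
	@status host status.nuuvi.app
	handle @status {
		reverse_proxy uptime-kuma:3001
	}
	# org slug subdomains fall through to the app
	handle {
		reverse_proxy app:8080
	}
}
```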
Local Dev (infrastructure/docker/Caddyfile.frankenphp)
- Single-file PHP server on :8080, auto-HTTPS disabled
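A minimal FrankenPHP Caddyfile matching that description might look like this (a sketch; the document root /app/public is an assumption):

```caddyfile
{
	frankenphp
	auto_https off
}

:8080 {
	root * /app/public
	encode zstd gzip
	php_server
}
```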
Service Topology (docker-compose.prod.yml)
Networks:
- frontend: Caddy, app, zitadel, uptime-kuma
- backend: app, worker, postgres, redis, zitadel

Volumes:
- caddy_data, caddy_config, zitadel_machinekey
- ${DATA_DIR}/postgres, ${DATA_DIR}/redis, ${DATA_DIR}/uptime-kuma
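As a sketch, that topology translates into compose definitions along these lines (services heavily abbreviated; images come from the stack table above, and mount paths inside containers are assumptions):

```yaml
networks:
  frontend: {}
  backend: {}

volumes:
  caddy_data: {}
  caddy_config: {}
  zitadel_machinekey: {}

services:
  caddy:
    networks: [frontend]
    volumes: [caddy_data:/data, caddy_config:/config]
  app:
    image: ${APP_IMAGE}:${APP_VERSION}
    networks: [frontend, backend]
  worker:
    image: ${APP_IMAGE}:${APP_VERSION}
    networks: [backend]
  postgres:
    image: postgres:16-alpine
    networks: [backend]
    volumes:
      - ${DATA_DIR}/postgres:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
    command: ["redis-server", "--requirepass", "${REDIS_PASSWORD}"]
    networks: [backend]
  zitadel:
    image: ghcr.io/zitadel/zitadel:latest
    networks: [frontend, backend]
    volumes: [zitadel_machinekey:/machinekey]
  uptime-kuma:
    image: louislam/uptime-kuma:1
    networks: [frontend]
    volumes:
      - ${DATA_DIR}/uptime-kuma:/app/data
```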
Environment Variables
Set in .env.prod (generated by cloud-init, mode 0600):
```shell
APP_ENV=prod
APP_SECRET=<random>
APP_DOMAIN=nuuvi.app
APP_VERSION=<git-tag-or-sha>
APP_IMAGE=registry.gitlab.com/nuuvi/church-app
DB_PASSWORD=<random>
REDIS_PASSWORD=<random>
ZITADEL_MASTERKEY=<32-char>
ZITADEL_DB_PASSWORD=<random>
ZITADEL_DOMAIN=auth.nuuvi.app
CLOUDFLARE_DNS_API_TOKEN=<token>
S3_ACCESS_KEY=<key>
S3_SECRET_KEY=<secret>
S3_ENDPOINT=https://<account>.r2.cloudflarestorage.com
S3_BUCKET=nuuvi-media
MAILER_DSN=smtp://<user>:<pass>@smtp.mailgun.org:587
DATA_DIR=/mnt/data
```
CI/CD Pipeline (.gitlab-ci.yml)
Stages
| Stage | Jobs | Trigger |
|---|---|---|
| lint | phpstan (level 8), deptrac, frontend-lint (ESLint + tsc) | Every push/MR |
| test | phpunit (PostgreSQL 16 + Redis 7 services, real DB with extensions) | Every push/MR |
| build | docker-build (build + push to GitLab Container Registry) | develop branch OR v*.*.* tag |
| deploy-staging | SSH deploy to staging server | Auto on develop merge |
| deploy-production | SSH deploy to production server | Manual on semver tag |
| smoke | Health check (/healthz) + API check (/api/doc.json) | After deploy |
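A smoke job of the kind the table describes might be defined like this (a sketch; the curl image and the tag rule shown here are assumptions, not the real job definition):

```yaml
smoke:
  stage: smoke
  image: curlimages/curl:latest
  script:
    - curl -fsS "https://${APP_DOMAIN}/healthz"
    - curl -fsS "https://${APP_DOMAIN}/api/doc.json"
  rules:
    - if: '$CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+$/'
```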
CI Variables Required
| Variable | Purpose |
|---|---|
| SSH_PRIVATE_KEY | Deploy user SSH key |
| SSH_KNOWN_HOSTS | Server host keys |
| SSH_HOST_STAGING | Staging server IP/hostname |
| SSH_HOST_PROD | Production server IP/hostname |
| CI_REGISTRY_* | GitLab Container Registry credentials (auto-provided) |
Deploy Procedure
```shell
# What the CI deploy job does:
ssh deploy@${SERVER} <<EOF
cd /opt/nuuvi
export APP_VERSION=${TAG}
docker login -u ${CI_REGISTRY_USER} -p ${CI_REGISTRY_PASSWORD} ${CI_REGISTRY}
docker pull ${CI_REGISTRY_IMAGE}:${TAG}
docker compose -f docker-compose.prod.yml up -d --no-deps app worker
docker compose -f docker-compose.prod.yml exec -T app php bin/console doctrine:migrations:migrate --no-interaction --allow-no-migration
docker compose -f docker-compose.prod.yml exec -T app php bin/console cache:clear --env=prod
docker compose -f docker-compose.prod.yml exec -T app php bin/console messenger:stop-workers
EOF
```
Manual Deploy
```shell
make deploy-test   # Deploy to staging
make deploy-prod   # Deploy to production
```
Uses infrastructure/tofu/scripts/deploy.sh.
Infrastructure Provisioning (OpenTofu)
Modules
| Module | Resources |
|---|---|
| hetzner-server | SSH key, firewall (22/80/443/ICMP), server (CPX22), volume (ext4, mounted at /mnt/data) |
| cloudflare-dns | A record (root domain) + wildcard *.domain, TTL 300, DNS-only (no proxy) |
Environments
| Environment | Config path | Server | Volume | Domain |
|---|---|---|---|---|
| test | infrastructure/tofu/environments/test/ | CPX22 @ nbg1 | 20 GB | test.nuuvi.app |
| production | infrastructure/tofu/environments/production/ | CPX22 @ nbg1 | 40 GB | nuuvi.app |
Provisioning a New Environment
```shell
cd infrastructure/tofu/environments/production
tofu init
tofu plan -var-file=terraform.tfvars    # or: make infra-plan-prod
tofu apply -var-file=terraform.tfvars   # or: make infra-apply-prod
```
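The per-environment terraform.tfvars supplies the module inputs; a sketch for production (the variable names here are illustrative assumptions, not the real schema):

```hcl
# environments/production/terraform.tfvars (sketch)
hcloud_token         = "..." # Hetzner Cloud API token
cloudflare_api_token = "..."
server_type          = "cpx22"
location             = "nbg1"
volume_size          = 40 # GB, mounted at /mnt/data
domain               = "nuuvi.app"
```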
Cloud-Init Bootstrap (infrastructure/tofu/scripts/cloud-init.yaml.tpl)
What runs on first boot:
- Creates deploy user with SSH key auth, sudo, no password
- Hardens SSH: disables password auth and root login
- Installs: Docker CE + compose plugin, fail2ban, ufw, htop, unattended-upgrades
- Configures firewall (ufw): allow 22, 80, 443
- Auto-detects Hetzner volume, formats ext4, mounts at /mnt/data
- Creates directories: /mnt/data/{postgres,redis,uptime-kuma}, /opt/nuuvi
- Generates .env.prod with all secrets from Terraform variables
- Optional: creates swap file
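A condensed sketch of what such a cloud-init template contains (the real template at infrastructure/tofu/scripts/cloud-init.yaml.tpl is more complete; the ${ssh_public_key} template variable and the sed-based SSH hardening are assumptions):

```yaml
#cloud-config
users:
  - name: deploy
    groups: [sudo, docker]
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ${ssh_public_key}

packages: [fail2ban, ufw, htop, unattended-upgrades]

runcmd:
  # harden SSH
  - sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
  - sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
  # firewall
  - ufw allow 22 && ufw allow 80 && ufw allow 443 && ufw --force enable
  # data directories on the mounted volume
  - mkdir -p /mnt/data/postgres /mnt/data/redis /mnt/data/uptime-kuma /opt/nuuvi
```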
Backup & Recovery
Backup (infrastructure/backup/backup-postgres.sh)
- Schedule: Daily at 02:00 UTC (systemd timer with 5min jitter)
- Databases: churchapp and zitadel (both via pg_dump --format=custom)
- Output: /opt/nuuvi/backups/{db}_{YYYY-MM-DD}_{HH-MM}.dump
- Retention: 7 daily, 4 weekly (Sundays), 3 monthly (1st of month)
- Remote sync: Optional rsync to Hetzner Storage Box via SSH
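The retention tiers can be illustrated with a small classifier of the kind such a script might use. This is a hypothetical sketch, not the real backup-postgres.sh logic, and it relies on GNU date:

```shell
#!/bin/sh
# Classify a backup date into its retention tier:
# monthly on the 1st of the month, weekly on Sundays, daily otherwise.
classify() {
  day=$(date -d "$1" +%d)   # day of month, 01-31
  dow=$(date -d "$1" +%u)   # ISO day of week, 1=Mon .. 7=Sun
  if [ "$day" = "01" ]; then
    echo monthly
  elif [ "$dow" = "7" ]; then
    echo weekly
  else
    echo daily
  fi
}

classify 2026-03-01   # monthly (1st of the month)
classify 2026-03-15   # weekly (a Sunday)
classify 2026-03-19   # daily
```

A real implementation would then delete dumps that exceed their tier's count (7 daily, 4 weekly, 3 monthly).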
Systemd Timer
```ini
# infrastructure/backup/nuuvi-backup.timer
OnCalendar=*-*-* 02:00:00
Persistent=true
RandomizedDelaySec=300
```
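The timer activates a matching service unit, which would look roughly like this (a sketch; the installed path of the backup script is an assumption):

```ini
# infrastructure/backup/nuuvi-backup.service (sketch)
[Unit]
Description=Nightly PostgreSQL backup

[Service]
Type=oneshot
ExecStart=/opt/nuuvi/backup-postgres.sh
```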
Restore (infrastructure/backup/restore-postgres.sh)
```shell
# Usage:
./restore-postgres.sh /opt/nuuvi/backups/churchapp_2026-03-19_02-00.dump
```
What it does:
1. Stops app and worker containers
2. Drops and recreates the database
3. Restores PostgreSQL extensions (ltree, pg_trgm, uuid-ossp)
4. Restores from dump file (pg_restore)
5. Starts containers
6. Runs migrations (churchapp only)
Health Checks
| Endpoint | Checks | Response | Use |
|---|---|---|---|
| /api/v1/health | None | 200 OK | Lightweight load-balancer check |
| /healthz | PostgreSQL SELECT 1 + Redis PING | 200 or 503 | Docker healthcheck, deep monitoring |
Both endpoints are public (they bypass the Symfony security firewall).
Domains
| Domain | Purpose | Hosting | Managed by |
|---|---|---|---|
| nuuvi.app | Platform root | Hetzner + Caddy | Cloudflare DNS |
| *.nuuvi.app | Org subdomains | Hetzner + Caddy | Cloudflare wildcard |
| auth.nuuvi.app | Zitadel OIDC | Caddy → Zitadel | Cloudflare DNS |
| status.nuuvi.app | Uptime Kuma | Caddy → Uptime Kuma | Cloudflare DNS |
| nuuvi.io | Marketing website | Cloudflare Pages | Cloudflare DNS |
Rollback
To roll back a deployment:
```shell
ssh deploy@${SERVER}
cd /opt/nuuvi

# Set the previous version
export APP_VERSION=v0.1.0   # or the previous SHA

# Pull and restart
docker pull registry.gitlab.com/nuuvi/church-app:${APP_VERSION}
docker compose -f docker-compose.prod.yml up -d --no-deps app worker
docker compose -f docker-compose.prod.yml exec -T app php bin/console cache:clear --env=prod
```
Note: If the rollback crosses a database migration, the schema may also need to be rolled back manually, e.g. with php bin/console doctrine:migrations:migrate prev --env=prod (review the migration's down() method before running it).