Church App — Technical Architecture Blueprint

Version 0.5 — Pre-MVP Architecture Decision Record
Last updated: March 2026


Table of Contents

  1. Architecture Principles
  2. AI-Agentic Development Model
  3. System Overview
  4. Mobile App
  5. Backend Services (Module System, DDD, Service Boundaries)
  6. Real-time & Chat
  7. Identity & Authentication
  8. Internationalization & AI Translation
  9. Data Layer
  10. Infrastructure & DevOps
  11. Security, Data Privacy & Compliance
  12. Architectural Decisions & Challenges
  13. MVP vs. Scale Phasing

1. Architecture Principles

Before any tech choice, these principles guide every decision:

  1. Member experience is the bottleneck, not admin throughput. If Sunday morning is slow, nothing else matters. Optimize the read path relentlessly.
  2. Event-driven from the start, not as a retrofit. Modules communicate through domain events. No module reads another module’s database. No shared tables. No CRUD integration. This is the lesson from 25 years of legacy systems: once you couple via database, you never decouple. Events are the integration layer from day one, even if the transport is in-process Symfony Messenger initially.
  3. Domain-Driven Design where it matters. Clean bounded contexts, proper aggregates, value objects, domain events. Not DDD-as-religion (no need for specification patterns in a CRUD settings screen), but DDD where the domain complexity justifies it: groups/events hierarchy, giving transactions, service planning.
  4. Self-contained systems (SCS). Each bounded context owns its data, its logic, and its API surface. One database for MVP, but strict schema separation and zero cross-module queries from day one. When extraction happens, it’s a deployment change, not a rewrite.
  5. API-first, app-second. The mobile app is a client. The admin UI is a client. The public API is a client. They all consume the same API layer. No backdoors.
  6. Progressive complexity. Start simple, add infrastructure only when pain is real. Kubernetes at MVP is a cost/complexity trap. Kafka at MVP is an ops burden with no payoff.
  7. Open source by default, pay only for genuine quality gaps. The test: “Would my users notice the difference?” If yes, pay. If no, self-host.
  8. Fully automated from day one. Infrastructure as Code, CI/CD pipelines, environment provisioning, database cloning — all automated.
  9. Bootstrapped economics. No investor money. The architecture must work at €30/month and scale to €3,000/month without a redesign.
  10. Modular and extensible, not monolithic mud. Every feature is a module with explicit boundaries enforced by tooling. Modules can be enabled/disabled per denomination or church.
  11. AI-agentic development is the default mode. Code is cheap. Architecture decisions, domain models, and guidelines are the expensive part. The founder is the master architect, reviewer, and product manager — not the primary implementer. AI agents write the code, following well-defined skill files, architecture decision records, and coding guidelines. This changes the cost calculus: “good architecture costs more developer time” is no longer true. Good architecture costs more thinking time, but the implementation is generated. Invest heavily in documentation, guidelines, and decision records — they are the codebase for the agents.
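
Principle 2 deserves a concrete picture. The sketch below shows what "events as the integration layer, even with an in-process transport" means in practice. It is illustrative TypeScript for brevity (the actual backend transport is Symfony Messenger); `DomainEvent`, `EventBus`, and the event name are assumptions, not the project's API:

```typescript
// A minimal event envelope: modules only ever exchange these.
type DomainEvent = {
  name: string;                       // e.g. "member.deleted"
  occurredAt: string;                 // ISO-8601 timestamp
  payload: Record<string, unknown>;
};

type Handler = (event: DomainEvent) => void;

// In-process dispatch today; the transport can later be swapped for
// Redis Streams or Redpanda without touching publishers or subscribers.
class EventBus {
  private handlers = new Map<string, Handler[]>();

  subscribe(name: string, handler: Handler): void {
    const list = this.handlers.get(name) ?? [];
    list.push(handler);
    this.handlers.set(name, list);
  }

  publish(event: DomainEvent): void {
    for (const handler of this.handlers.get(event.name) ?? []) {
      handler(event);
    }
  }
}
```

The point is the contract, not the implementation: because modules never see each other's tables, upgrading the transport later is an infrastructure change only.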

2. AI-Agentic Development Model

The founder is not a developer who sometimes thinks about architecture. The founder is an architect and product manager who directs AI agents.

The New Economics of Architecture

Traditional thinking: “Good architecture = more upfront effort = slower MVP.” AI-agentic thinking: “Good architecture = better prompt context = faster and more consistent agent output.”

Every hour spent writing a clear architecture decision record, a domain model specification, or a module guideline pays for itself 100x because:

  • An AI agent that generates a new module with clear boundaries, proper events, clean aggregates, and full tests does so in minutes — but only if it has the spec.
  • Without the spec, the agent generates plausible-looking code that subtly violates boundaries, mixes concerns, and creates the ball of mud you’re trying to avoid.
  • The spec IS the codebase. The generated code is a disposable artifact. The guidelines, decision records, and domain models are the durable assets.

Repository Documentation Structure

.ai/                                    # AI agent context — the most important directory
├── ARCHITECTURE.md                     # This document (or a condensed version for agent context)
├── DOMAIN_MODEL.md                     # Bounded contexts, aggregates, invariants, events
├── CODING_GUIDELINES.md                # PHP style, patterns, naming conventions, do/don't
├── MODULE_GUIDELINES.md                # How to create a new module, step-by-step
├── API_GUIDELINES.md                   # REST conventions, error formats, pagination, versioning
├── EVENT_CATALOG.md                    # All domain events, their schemas, who emits/consumes
├── TESTING_GUIDELINES.md               # Unit, integration, E2E patterns and expectations
├── DATABASE_CONVENTIONS.md             # Naming, indexing, JSONB usage, migration rules

├── decisions/                          # Architecture Decision Records (ADRs)
│   ├── 001-symfony-backend.md          # Why Symfony, what was considered
│   ├── 002-react-native-mobile.md      # Why RN over native
│   ├── 003-zitadel-auth.md             # Why Zitadel, not Keycloak/Ory
│   ├── 004-centrifugo-realtime.md      # Why Centrifugo for chat + live
│   ├── 005-deepl-translation.md        # Why DeepL over LibreTranslate
│   ├── 006-hetzner-hosting.md          # Why Hetzner, cost comparison
│   ├── 007-event-driven-first.md       # Why events from day one
│   ├── 008-deptrac-not-packages.md     # Why Deptrac at MVP, packages later
│   └── ...                             # One per significant decision

├── skills/                             # Agent skill files — how to do specific tasks
│   ├── create-module.md                # Step-by-step: scaffold a new module
│   ├── create-aggregate.md             # How to define a DDD aggregate
│   ├── create-domain-event.md          # Event class, schema, registration
│   ├── create-api-endpoint.md          # Controller, DTO, validation, OpenAPI
│   ├── create-admin-page.md            # Twig + Vue component pattern
│   ├── add-translation-support.md      # Make an entity translatable
│   ├── add-audit-logging.md            # (Automatic via pipeline, but manual overrides)
│   ├── write-tests.md                  # Unit, integration, fixture patterns
│   ├── create-migration.md             # Doctrine migration conventions
│   └── deploy.md                       # Deployment checklist and commands

├── templates/                          # Code templates agents use as starting points
│   ├── module/                         # Skeleton for a new module
│   │   ├── ModuleTemplate.php
│   │   ├── Entity/AggregateTemplate.php
│   │   ├── Event/DomainEventTemplate.php
│   │   ├── Service/ServiceTemplate.php
│   │   ├── Controller/ApiControllerTemplate.php
│   │   ├── Repository/RepositoryTemplate.php
│   │   ├── config/routes.yaml
│   │   └── config/services.yaml
│   ├── test/
│   │   ├── UnitTestTemplate.php
│   │   └── IntegrationTestTemplate.php
│   └── migration/
│       └── MigrationTemplate.php

└── prompts/                            # Reusable prompt fragments for common tasks
    ├── review-module.md                # "Review this module for boundary violations..."
    ├── review-event-design.md          # "Check event schema completeness..."
    └── review-aggregate.md             # "Validate aggregate boundaries..."

Agent Workflow

┌──────────────────────────────────────────────────────────────┐
│ FOUNDER (Master Architect / PM / Reviewer)                   │
│                                                              │
│ Creates:  Requirements, domain models, ADRs, guidelines      │
│ Reviews:  Generated code, PRs, architectural consistency     │
│ Decides:  Priorities, scope, trade-offs, module boundaries   │
│ Does NOT: Write boilerplate, implement CRUD, write migrations│
└───────────────────────────────┬──────────────────────────────┘
                                │ Writes specs + reviews output
                                ▼
┌──────────────────────────────────────────────────────────────┐
│ AI AGENTS (Implementation)                                   │
│                                                              │
│ Read:     .ai/ directory for context on every task           │
│ Generate: Module scaffolds, entities, events, controllers,   │
│           tests, migrations, admin pages, API docs           │
│ Follow:   CODING_GUIDELINES.md, MODULE_GUIDELINES.md, etc.   │
│ Validate: Run Deptrac, PHPStan, tests before proposing       │
│                                                              │
│ Tools:    Claude Code, Cursor, Copilot, Aider, etc.          │
└──────────────────────────────────────────────────────────────┘

What This Means for Architecture Decisions

The traditional “YAGNI” argument against good architecture assumes human labor costs. With AI agents:

| Traditional | AI-Agentic |
| --- | --- |
| “Don’t create an event class, just call the service directly” | Events cost 30 seconds of agent time. The decoupling pays for itself immediately. |
| “Skip the value object, just use a string” | Value objects cost 10 seconds. Type safety catches bugs the agent would otherwise create. |
| “Don’t write a contract interface, you only have one implementation” | The interface costs 15 seconds. It enables the module to be swapped or mocked in tests. |
| “Write the test later” | Tests written alongside the code cost the agent 60 seconds. Written later, they cost 10x because context is lost. |
| “Don’t document this ADR, just remember the decision” | The ADR takes 5 minutes to write. Without it, every future agent interaction re-derives the decision from scratch. |

The conclusion: With AI agents, the marginal cost of good architecture approaches zero. The cost of BAD architecture remains high (debugging, refactoring, untangling coupling). Therefore: invest heavily in structure from day one. Write the guidelines, define the boundaries, model the domains properly. The agents will follow them.


3. System Overview

┌─────────────────────────────────────────────────────────────────┐
│                        CLIENTS                                  │
│                                                                 │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌───────────────┐   │
│  │ iOS App  │  │ Android  │  │ Web/PWA  │  │ Admin Backend │   │
│  │ (Native) │  │ (Native) │  │ (React)  │  │ (Symfony+Vue) │   │
│  └────┬─────┘  └────┬─────┘  └────┬─────┘  └──────┬────────┘   │
│       │              │             │               │            │
└───────┼──────────────┼─────────────┼───────────────┼────────────┘
        │              │             │               │
        ▼              ▼             ▼               ▼
┌─────────────────────────────────────────────────────────────────┐
│                     API GATEWAY / CDN                            │
│              (Cloudflare / nginx + rate limiting)                │
└───────────────────────────┬─────────────────────────────────────┘

        ┌───────────────────┼───────────────────────┐
        ▼                   ▼                       ▼
┌──────────────┐  ┌─────────────────┐  ┌────────────────────┐
│  PUBLIC API  │  │   ADMIN API     │  │  WEBHOOK DISPATCH  │
│  (Symfony)   │  │   (Symfony)     │  │    (Symfony)       │
│  read-heavy  │  │  write-heavy    │  │   async workers    │
└──────┬───────┘  └────────┬────────┘  └─────────┬──────────┘
       │                   │                     │
       ▼                   ▼                     ▼
┌─────────────────────────────────────────────────────────────────┐
│                     EVENT BUS                                    │
│          (Phase 1: PostgreSQL LISTEN/NOTIFY + Redis Streams)     │
│          (Phase 2+: Redpanda when justified — NOT Kafka)         │
└───────────────────────────┬─────────────────────────────────────┘

        ┌───────────┬───────┼───────┬──────────────┐
        ▼           ▼       ▼       ▼              ▼
  ┌──────────┐ ┌────────┐ ┌─────┐ ┌──────┐ ┌───────────┐
  │ People   │ │ Groups │ │Event│ │Giving│ │  Sunday   │
  │ Service  │ │Service │ │Svc  │ │ Svc  │ │  Service  │
  └────┬─────┘ └───┬────┘ └──┬──┘ └──┬───┘ └─────┬─────┘
       │           │         │       │            │
       ▼           ▼         ▼       ▼            ▼
  ┌──────────┐ ┌────────┐ ┌─────┐ ┌──────┐ ┌───────────┐
  │  Own DB  │ │ Own DB │ │ DB  │ │  DB  │ │    DB     │
  │ (PgSQL)  │ │(PgSQL) │ │     │ │      │ │           │
  └──────────┘ └────────┘ └─────┘ └──────┘ └───────────┘

  ┌──────────┐ ┌──────────┐ ┌────────────┐ ┌───────────┐
  │   Chat   │ │   News   │ │Notification│ │  Search   │
  │ Service  │ │ Service  │ │  Service   │ │ Service   │
  └──────────┘ └──────────┘ └────────────┘ └───────────┘

  ┌─────────────────────────────────────────────────────────┐
  │         OBSERVE & PROTECT PIPELINE (event consumers)     │
  │  Audit Log · History Tracker · Moderation · Translation  │
  │  Consumes ALL events. Zero coupling to domain modules.   │
  └─────────────────────────────────────────────────────────┘

  ┌─────────────────────────────────────────────────────────┐
  │                  SHARED INFRASTRUCTURE                  │
  │  PostgreSQL · Redis · S3 · CDN · Firebase · Centrifugo  │
  │  Zitadel/Ory (IdP) · DeepL API · Tolgee                 │
  └─────────────────────────────────────────────────────────┘

4. Mobile App

The Big Decision: React Native vs. Native

| Dimension | React Native | Native (Swift + Kotlin) |
| --- | --- | --- |
| Code sharing | ~70-80% shared between iOS, Android, and Web (PWA) | 0% shared. Three codebases. |
| Development speed | 1 team builds 3 platforms | Need iOS dev + Android dev + web dev |
| Performance | Good for 95% of use cases. Hot path (chat, feeds, scrolling) is fine. Live interaction on Sunday could be tricky. | Best possible performance. Smooth 60fps everywhere. |
| Native APIs | Push, camera, calendar, deep links — all well-supported via Expo. | Full access to everything, always. |
| Developer availability | React/JS devs are abundant. | Swift/Kotlin devs are scarcer and more expensive. |
| PWA / Web app | Massive win. Same components render on web. | Separate React/Vue web app needed anyway. |
| Long-term maintenance | One dependency (React Native) to track. Breaking changes happen. | Apple/Google SDKs are the dependency — they break too, but docs are better. |
| AI-assisted development | Excellent. LLMs are very strong at React/TypeScript. | Good for Swift, decent for Kotlin. Less training data for Compose. |

Recommendation: React Native (Expo)

Why: You’re one person (for now). The ability to ship iOS, Android, and a web PWA from one codebase is not a nice-to-have — it’s a survival requirement. The performance ceiling of React Native is high enough for everything in this app. Chat, feeds, push, giving, livestream embedding — all well within RN’s capabilities.

The Sunday live experience is the only area where native might win. But even there, a WebSocket connection + a ScrollView is not pushing RN’s limits. If you ever hit a perf wall on one specific screen, you can write that single screen as a native module — Expo supports this.

The web/PWA angle is huge. You mentioned wanting code reuse between app and web. With React Native + React Native Web (built into Expo), the same component library renders on all three platforms. The admin backend is a separate Symfony+Vue app, but the member-facing web experience (for the elderly, for desktop users) comes almost free.

Architecture decision:

Mobile & Web Client Stack:
├── Expo (managed workflow)
├── React Native (iOS + Android)
├── React Native Web (PWA)
├── TypeScript (strict mode, no exceptions)
├── Custom design system (no Tailwind, no NativeBase — own components)
│   ├── Themed primitives: Text, Button, Card, Input, Avatar, Badge
│   ├── Layout: Stack, Grid, Screen, SafeArea
│   ├── Design tokens: colors, spacing, typography, radii, shadows
│   ├── Theme provider: denomination branding injected at runtime
│   └── Dark mode: token-level, not per-component
├── State management: Zustand (simple) or TanStack Query (server state)
├── Navigation: Expo Router (file-based routing)
├── Push: expo-notifications → Firebase (Android) + APNs (iOS)
└── Offline: optimistic UI + local SQLite cache for critical data

⚠️ Challenge: “We build our own component library”

Good instinct — skipping third-party CSS/UI frameworks makes sense for a cross-platform design system. But be disciplined:

  • Phase 1 (MVP): Build only the components you need. Don’t design a full system upfront. You need: Text, Button, Card, TextInput, Avatar, Icon, List, Badge, Modal, BottomSheet. That’s 10 components.
  • Phase 2: Extract patterns as you see them. If three screens use the same card layout, abstract it then.
  • Anti-pattern: Building a Storybook with 40 components before shipping a single feature. The design system grows with the product, not ahead of it.
  • Design tokens from day one: Colors, spacing scale, font sizes, border radii — define these as a theme object. Denomination branding swaps the token set, not the components. This is non-negotiable from the start.

5. Backend Services

Symfony — An Honest Assessment

Strengths for this project:

  • You know it deeply. Shipping speed matters more than tech purity.
  • Symfony’s ecosystem is mature: Doctrine ORM, Messenger (async), Security, Serializer, Validator — all battle-tested.
  • Symfony Messenger + Redis/RabbitMQ is a solid async processing foundation.
  • API Platform (Symfony bundle) can auto-generate REST/GraphQL APIs from entities with minimal code. Worth evaluating.
  • PHP 8.3+ with JIT, fibers, and typed properties is a different language than PHP 5.x. Performance is good.

Honest concerns:

  • Concurrency model. PHP is request-scoped. Each request boots the framework, handles the request, dies. This is fine for CRUD APIs but becomes a constraint for real-time features (WebSockets, SSE, long-polling). You’ll need a sidecar for real-time (separate Node.js/Go service, or use a managed platform).
  • Worker processes. Symfony Messenger workers are long-running PHP processes. They work, but they leak memory over time and need supervisor restarts. At scale (thousands of events/second), this becomes an operational headache compared to Go/Rust/Java workers.
  • Sunday peak load. If 50 churches hit the API simultaneously on Sunday morning with 500 members each, you’re looking at 25,000 concurrent users. PHP-FPM can handle this with enough workers, but you need to think about connection pooling (PgBouncer), opcache warming, and response caching aggressively.

Verdict: Symfony is a valid choice. The concurrency concern is real but solvable (sidecar for real-time, aggressive caching, read replicas). Don’t let anyone tell you PHP can’t scale — Wikipedia, Slack (originally), Etsy, and WordPress.com prove otherwise. But be honest about where PHP struggles and plan for sidecars from the architecture level.

Module System Architecture

The platform must not become a big ball of mud. Every feature — pinboard, giving, chat, Sunday experience — is a module that plugs into a core platform via well-defined interfaces. Modules can be enabled/disabled per denomination or per church. The core platform is thin: auth, user profile, settings, navigation, and the module loader. Everything else is a module.

What is a Module?

A module is a self-contained feature package that:

  1. Owns its domain. Its entities, its database tables (or schema), its business logic. No other module touches its data directly.
  2. Exposes a defined interface. Other modules and the core platform interact with it only through: events (emitted/consumed), a service interface (for synchronous queries the core platform needs), and API endpoints.
  3. Is independently deployable in concept. Even within the monolith, a module can be disabled without breaking other modules. No module depends on the concrete implementation of another module — only on interfaces and events.
  4. Is configurable per tenant. Each denomination or church can enable/disable the module, and configure its behavior through a module-specific settings schema.

Core Platform vs. Modules

┌─────────────────────────────────────────────────────────────┐
│ CORE PLATFORM (always active, not disableable)               │
│                                                              │
│ ├── Identity / Auth      JWT validation, session, current user│
│ ├── User Profile         Name, avatar, preferences, locale   │
│ ├── Settings Engine      Per-user, per-church, per-denom     │
│ │   └── Extensible: modules register their own settings      │
│ ├── Navigation           Dynamic: shows menu items for       │
│ │                        enabled modules only                │
│ ├── Notification Engine  Channels (push, email, in-app),     │
│ │   └── Modules register notification types + templates      │
│ ├── Permission Engine    RBAC framework, resource checks     │
│ │   └── Modules register their own permissions               │
│ ├── Event Bus            Publish/subscribe infrastructure     │
│ ├── Tenant Config        denomination_id, church_id,          │
│ │                        enabled_modules[], module_configs{}  │
│ └── Module Loader        Discovers, registers, initializes    │
│                          modules based on tenant config       │
└─────────────────────────────────────────────────────────────┘

          ┌───────────────────┼───────────────────┐
          ▼                   ▼                   ▼
   ┌─────────────┐   ┌─────────────┐   ┌─────────────┐
   │   MODULE:   │   │   MODULE:   │   │   MODULE:   │
   │   Groups    │   │   Pinboard  │   │   Giving    │  ...
   │             │   │             │   │             │
   │ Entities    │   │ Entities    │   │ Entities    │
   │ Events      │   │ Events      │   │ Events      │
   │ API routes  │   │ API routes  │   │ API routes  │
   │ Settings    │   │ Settings    │   │ Settings    │
   │ Permissions │   │ Permissions │   │ Permissions │
   │ Nav items   │   │ Nav items   │   │ Nav items   │
   │ Notif types │   │ Notif types │   │ Notif types │
   └─────────────┘   └─────────────┘   └─────────────┘

Module Contract (what every module must implement)

interface ModuleInterface
{
    /** Unique identifier, e.g. 'pinboard', 'giving', 'sunday' */
    public function getIdentifier(): string;
    
    /** Human-readable name for admin UI */
    public function getName(): string;
    
    /** Dependencies on other modules (e.g. Pinboard depends on Groups for scoping) */
    public function getDependencies(): array;  // ['groups'] or []
    
    /** Navigation items this module adds to the app and admin */
    public function getNavigationItems(): array;
    
    /** Permission definitions this module registers */
    public function getPermissions(): array;
    // e.g. ['pinboard.post.create', 'pinboard.post.moderate', 'pinboard.settings.edit']
    
    /** Notification types this module can send */
    public function getNotificationTypes(): array;
    // e.g. ['pinboard.new_post', 'pinboard.post_reply']
    
    /** Settings schema this module adds to the settings engine */
    public function getSettingsSchema(): array;
    // e.g. ['pinboard.categories' => ['type' => 'array', 'default' => [...]]]
    
    /** API route prefix, e.g. '/api/pinboard' */
    public function getRoutePrefix(): string;
    
    /** Events this module emits */
    public function getEmittedEvents(): array;
    
    /** Events this module listens to (from other modules) */
    public function getConsumedEvents(): array;
}

Example: Pinboard Module

Module: Pinboard
├── identifier: 'pinboard'
├── dependencies: []  (works standalone, optionally scoped to groups if Groups module active)

├── Entities (own schema/tables):
│   ├── pinboard_posts (id, church_id, author_id, category, title, body, 
│   │                    visibility, expires_at, status, translations JSONB)
│   ├── pinboard_categories (id, church_id, name, icon, sort_order)
│   └── pinboard_replies (id, post_id, author_id, body)

├── Events emitted:
│   ├── pinboard.post.created    → consumed by: Notification, Audit, Moderation, Translation
│   ├── pinboard.post.replied    → consumed by: Notification, Audit
│   └── pinboard.post.expired    → consumed by: Audit

├── Events consumed:
│   ├── member.deleted           → anonymize author on posts
│   └── group.deleted            → unlink posts from deleted group (if group-scoped)

├── Permissions registered:
│   ├── pinboard.post.create     (default: member+)
│   ├── pinboard.post.moderate   (default: church_admin+)
│   └── pinboard.settings.edit   (default: church_admin+)

├── Settings registered:
│   ├── pinboard.enabled                    (boolean, per church)
│   ├── pinboard.categories                  (array, per church)
│   ├── pinboard.default_expiry_days         (int, default: 30)
│   ├── pinboard.require_moderation_approval (boolean, default: false)
│   └── pinboard.allow_group_scoping         (boolean, default: true)

├── Notification types:
│   ├── pinboard.new_post        (configurable: push + in-app)
│   └── pinboard.post_reply      (configurable: push + in-app)

├── Navigation items:
│   ├── App: "Pinboard" tab (icon: pin, position: 4)
│   └── Admin: "Pinboard" menu item under "Content"

├── API routes: /api/pinboard/*
│   ├── GET  /posts              (list, filtered by church, category, group)
│   ├── POST /posts              (create)
│   ├── GET  /posts/{id}         (detail)
│   ├── POST /posts/{id}/replies (reply)
│   └── GET  /categories         (list categories for this church)

└── Integration with other modules:
    ├── If Groups module is active AND pinboard.allow_group_scoping is true:
    │   → posts can be scoped to a group (group_id FK, nullable)
    │   → group members see group-scoped posts in their group view
    │   → this is done via an interface, NOT by importing Groups entities
    ├── If Chat module is active:
    │   → "Reply via chat" button on pinboard posts (deep-link to DM)
    └── If Translation module is active:
        → post content auto-translated (handled by Translation pipeline consumer,
           Pinboard module doesn't know or care about translation)

Module Feature Flags (per denomination / per church)

-- Tenant configuration stores which modules are active
CREATE TABLE tenant_module_config (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    denomination_id UUID NOT NULL,
    church_id UUID,                          -- NULL = denomination-wide default
    module_identifier VARCHAR(50) NOT NULL,  -- 'pinboard', 'giving', 'sunday', 'chat'
    enabled BOOLEAN NOT NULL DEFAULT true,
    config JSONB DEFAULT '{}',               -- module-specific settings overrides
    
    -- Precedence: church-level > denomination-level > platform default.
    -- NULLS NOT DISTINCT (PostgreSQL 15+) ensures duplicate denomination-wide
    -- rows (church_id IS NULL) are rejected as well.
    UNIQUE NULLS NOT DISTINCT (denomination_id, church_id, module_identifier)
);

-- Examples:
-- ICF Movement enables Giving for all churches:
--   denomination_id: icf, church_id: NULL, module: 'giving', enabled: true
-- ICF Zürich disables Pinboard (they don't want it):
--   denomination_id: icf, church_id: icf-zurich, module: 'pinboard', enabled: false
-- FEG enables Pinboard with custom categories:
--   denomination_id: feg, church_id: NULL, module: 'pinboard', enabled: true,
--   config: {"categories": ["Angebote", "Gesuche", "Gebetsanliegen", "Wohnungen"]}

Resolution logic in the Module Loader:

1. Load platform defaults (all modules enabled with default config)
2. Override with denomination-level config (church_id IS NULL)
3. Override with church-level config (church_id = current church)
4. Result: final set of enabled modules + merged configuration
5. App receives this via GET /api/config → shows/hides features dynamically
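
Steps 1-4 reduce to a precedence merge. A minimal sketch, assuming hypothetical types (the real loader reads tenant_module_config rows; field names here are illustrative):

```typescript
type ModuleConfig = { enabled: boolean; config: Record<string, unknown> };

// Later levels win: church > denomination > platform default.
function resolveModuleConfig(
  platformDefault: ModuleConfig,
  denominationLevel?: Partial<ModuleConfig>, // row with church_id IS NULL
  churchLevel?: Partial<ModuleConfig>,       // row for the current church
): ModuleConfig {
  return {
    enabled:
      churchLevel?.enabled ??
      denominationLevel?.enabled ??
      platformDefault.enabled,
    config: {
      ...platformDefault.config,
      ...denominationLevel?.config,
      ...churchLevel?.config,
    },
  };
}
```

Note the asymmetry: `enabled` is an override chain (first defined level wins), while `config` is a deep merge of settings keys across all levels.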

Mobile app behavior: The app’s navigation, screens, and features are driven by the module config received at login. If Pinboard is disabled for a church, the tab doesn’t appear, the API routes return 403, and the module’s notification types are suppressed. No dead UI, no hidden features leaking through.
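
On the client, that config-driven behavior is a simple filter. A sketch with assumed shapes (the tab list and `enabledModules` field are illustrative, not the actual /api/config contract):

```typescript
type TenantConfig = { enabledModules: string[] };

// The full tab registry ships with the app; visibility is data-driven.
const allTabs = [
  { module: "news", title: "News" },
  { module: "groups", title: "Groups" },
  { module: "pinboard", title: "Pinboard" },
  { module: "giving", title: "Giving" },
];

function visibleTabs(config: TenantConfig) {
  return allTabs.filter((tab) => config.enabledModules.includes(tab.module));
}
```

The server remains the authority: even if a client somehow renders a disabled module, its API routes still return 403.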

Module Categories

CORE (always active, cannot be disabled):
├── Identity / Auth
├── User Profile
├── Settings
├── Notifications
└── Navigation

STANDARD MODULES (enabled by default, can be disabled per church):
├── Groups            — group hierarchy, memberships, types
├── Events            — calendar, RSVP, types, group-linking
├── News              — multi-level feed, announcements, scheduled posts
├── People Directory  — searchable member list, privacy-controlled
└── Push Notifications

OPTIONAL MODULES (disabled by default, enabled per denomination/church):
├── Pinboard          — community marketplace / notice board
├── Giving            — donations, funds, campaigns, payment integration
├── Sunday            — service experience, sermon archive, live features
├── Chat              — group chat, 1:1, event chat (via Centrifugo)
└── Church Finder     — denomination-level location directory

PIPELINE MODULES (infrastructure, always active, not user-facing):
├── Audit Log
├── History Tracker
├── Content Moderation
├── Translation
└── Search Indexing

Replacing a Module

Because modules interact only through interfaces and events, replacing one is straightforward:

  1. New module implements the same ModuleInterface.
  2. New module emits the same event types (or a superset).
  3. New module consumes the same events it needs.
  4. Swap the module registration. Other modules don’t notice.

Example: if you later decide the built-in Chat module should be replaced with a Matrix-based implementation, the new module just needs to implement the same ChatModuleInterface, emit the same chat.message.sent events, and consume the same member.deleted events. The Audit, Moderation, and Notification consumers keep working unchanged.
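
The swap can be sketched as follows. This is illustrative TypeScript (the actual contract would be PHP); `ChatModule`, `Emit`, and the class names are assumptions standing in for ChatModuleInterface:

```typescript
// Both implementations satisfy one interface and emit the same event name,
// so Audit, Moderation, and Notification consumers never change.
interface ChatModule {
  sendMessage(roomId: string, authorId: string, body: string): void;
}

type Emit = (name: string, payload: Record<string, unknown>) => void;

class BuiltInChat implements ChatModule {
  constructor(private emit: Emit) {}
  sendMessage(roomId: string, authorId: string, body: string): void {
    // ...persist to the module's own store...
    this.emit("chat.message.sent", { roomId, authorId });
  }
}

class MatrixChat implements ChatModule {
  constructor(private emit: Emit) {}
  sendMessage(roomId: string, authorId: string, body: string): void {
    // ...bridge to a Matrix homeserver instead...
    this.emit("chat.message.sent", { roomId, authorId }); // same event contract
  }
}
```

Swapping is then a registration change: the module loader wires up `MatrixChat` instead of `BuiltInChat`, and the rest of the platform is none the wiser.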

Service Boundaries (Self-Contained Systems)

Each service owns its domain, its database, and its API surface. Communication between services is via events, not direct API calls.

┌─────────────────────────────────────────────────────────────┐
│ IDENTITY & ACCESS (delegated to external IdP)                │
│ Zitadel (primary) or Ory Kratos+Hydra (alternative)         │
│ Handles: login, registration, MFA, passkeys, social login,  │
│   sessions, OAuth2/OIDC, password recovery, account deletion │
│ DB: IdP's own PostgreSQL (separate from app databases)       │
│ Integration: JWT validation + webhook on user lifecycle      │
│ Events consumed by app: user.created, user.updated,          │
│   user.deleted, login.succeeded, login.failed                │
│ Multi-tenancy: Zitadel Organizations = denominations         │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│ PEOPLE & DIRECTORY                                          │
│ Member profiles, directory, privacy settings, tags           │
│ DB: PostgreSQL                                               │
│ Consumes: user.created (to create member profile)            │
│ Events: member.updated, member.joined_church                 │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│ GROUPS                                                       │
│ Group types, hierarchy, memberships, sub-groups, relations   │
│ DB: PostgreSQL (relational for hierarchy + JSONB for config) │
│ Events: group.created, membership.changed,                   │
│         group.hierarchy_changed                              │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│ EVENTS (calendar, not system events)                         │
│ Event types, scheduling, RSVP, hierarchy, group links        │
│ DB: PostgreSQL                                               │
│ Events: event.created, event.updated, rsvp.changed           │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│ NEWS & COMMUNICATION                                         │
│ Posts, announcements, pinboard, feeds, scheduled publishing  │
│ DB: PostgreSQL + JSONB for content blocks + translations      │
│ Events: post.published, announcement.created                 │
│ Translation: post.published → Translation Worker → DeepL API │
│   → stores translations in JSONB per configured language      │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│ GIVING                                                       │
│ Donations, funds, campaigns, payment processing, receipts    │
│ DB: PostgreSQL (financial data — strong consistency required) │
│ Events: donation.received, campaign.updated                  │
│ ⚠️ PCI compliance scope — isolate strictly                   │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│ SUNDAY / SERVICES                                            │
│ Service entries, series, speakers, media, live interactions   │
│ DB: PostgreSQL + S3 (media)                                  │
│ Events: service.live_started, service.ended, poll.created    │
│ ⚠️ Real-time component — needs WebSocket sidecar             │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│ CHAT                                                         │
│ 1:1 messages, group chats, event chats, media, threads       │
│ DB: own store (see Chat section below)                       │
│ ⚠️ Most demanding real-time requirements — separate system   │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│ NOTIFICATION                                                 │
│ Push (Firebase/APNs), email, in-app, digest, preferences     │
│ DB: PostgreSQL (preferences) + Redis (queues)                │
│ Consumes: ALL domain events → applies routing rules →        │
│           dispatches to channels                             │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│ SEARCH                                                       │
│ Full-text search across people, groups, events, posts        │
│ Phase 1: PostgreSQL full-text search (tsvector + pg_trgm)    │
│ Phase 2: Meilisearch (self-hosted, Rust, typo-tolerant)      │
│ NOT Elasticsearch/ELK (massively over-engineered for this)   │
│ NOT Algolia (managed = cost + another GDPR processor)        │
│ Consumes: entity events → indexes into search engine         │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│ WEBHOOK DISPATCH                                             │
│ External webhook delivery for API consumers                  │
│ Consumes: ALL domain events → filters by subscription →      │
│           delivers with retry, backoff, signing              │
└─────────────────────────────────────────────────────────────┘
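
Delivery signing can be sketched as an HMAC over the payload body. The "sha256=" prefix and the exact scheme are assumptions, not a finalized spec; the sketch uses Node's built-in crypto module:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sign an outgoing webhook body so consumers can verify authenticity.
// Each API consumer gets its own secret at subscription time.
function signWebhook(secret: string, body: string): string {
  return "sha256=" + createHmac("sha256", secret).update(body).digest("hex");
}

// Consumer-side verification, using a constant-time comparison to avoid
// leaking signature bytes through timing.
function verifyWebhook(secret: string, body: string, signature: string): boolean {
  const expected = Buffer.from(signWebhook(secret, body));
  const given = Buffer.from(signature);
  return expected.length === given.length && timingSafeEqual(expected, given);
}
```

The signature would travel in a header alongside the delivery, and retries reuse the same signed body.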

┌─────────────────────────────────────────────────────────────┐
│ MEDIA                                                        │
│ Image/video/audio upload, processing, CDN distribution       │
│ Storage: S3-compatible (Cloudflare R2 or MinIO)              │
│ CDN: Cloudflare                                              │
│ Processing: image resize, audio transcode (async workers)    │
└─────────────────────────────────────────────────────────────┘

Admin Backend

Admin Stack:
├── Symfony 7.x (PHP 8.3+)
├── Twig templates (server-rendered pages)
├── Vue 3 components (islands of interactivity)
│   ├── People list with filters and bulk actions
│   ├── News composer (rich text editor)
│   ├── Giving dashboard (charts)
│   ├── Real-time notification preview
│   └── Group hierarchy editor (drag & drop tree)
├── Stimulus / Turbo (Hotwire) for lightweight reactivity
├── Authentication: OIDC via Zitadel/Ory (SSO into admin, JWT or session)
└── Same API contracts as public API (admin just has elevated permissions)

This is a sound approach. Symfony + Twig for the shell, Vue for the interactive islands. Don’t over-SPA the admin — most admin screens are forms and tables. Server-rendering is faster to build and more reliable.


6. Real-time & Chat

The Chat Decision

This is the hardest architectural decision in the entire project. A full deep-dive analysis is available in chat-architecture-deep-dive.md. Here is the summary of evaluated options and the decision.

Options Evaluated

| Option | Time to MVP | Cost at 50K MAU | GDPR | Vendor Lock-in | Verdict |
|---|---|---|---|---|---|
| Stream (getstream.io) | 1-2 weeks | ~€2,000/mo | EU datacenter, DPA available | High (proprietary) | Best for speed. Worst for long-term cost. Sunday peak pricing punishes our usage pattern. |
| Matrix (Synapse) | 3-5 weeks | ~€350/mo (self-hosted) | Perfect (self-hosted) | Zero (open protocol) | Best GDPR + E2E encryption story. But multi-tenancy mismatch is a dealbreaker — Matrix is designed for federation, not multi-denomination SaaS. |
| Centrifugo + Custom | 6-10 weeks (Phase 2) | ~€120/mo | Perfect (self-hosted) | Zero (MIT license) | Best long-term fit. Full control, minimal cost, same infra for chat + Sunday live. |
| Full self-built | 12-24 weeks | ~€120/mo | Perfect | Zero | Not justified. Centrifugo handles the hard real-time part. |

Why Not Matrix?

Matrix was tempting for GDPR and E2E encryption. But the multi-tenancy model is fundamentally mismatched. Matrix was designed for open federation between servers, not for a multi-denomination SaaS where ICF data must be isolated from FEG data within one deployment. You’d need either separate Synapse instances per denomination (ops nightmare) or complex Space-based access controls. The protocol’s complexity (state events, power levels, room state resolution) is overkill for church group chat.

Why Not Stream Long-Term?

Stream charges for peak concurrent connections per billing cycle. Our usage pattern is uniquely punishing: every Sunday, hundreds of churches spike to maximum concurrent connections for 2 hours. Stream bills for that peak all month. At 50K MAU, estimated cost is €2,000/mo. The same load on Centrifugo (self-hosted) costs €120/mo in infrastructure.

5-year cost projection: Stream ~€250,000 vs. Centrifugo + Custom ~€15,000 (+ ~€10K one-time engineering investment).

⚡ Decision: Phased Approach

Phase 1 (MVP): Skip chat, or use Stream Maker plan behind an abstraction.

Chat is not the MVP differentiator — news feeds, groups, events, and giving are. Ship without chat, or with a minimal implementation using Stream’s free Maker plan ($100/mo credit for teams < 5 people, < $10K revenue). Abstract it behind a ChatProvider interface from day one:

interface ChatProvider {
  connect(userId: string, token: string): Promise<void>;
  sendMessage(channelId: string, message: MessageInput): Promise<Message>;
  subscribeToChannel(channelId: string, onMessage: (msg: Message) => void): Unsubscribe;
  getHistory(channelId: string, before?: string, limit?: number): Promise<Message[]>;
  markRead(channelId: string, messageId: string): Promise<void>;
  disconnect(): Promise<void>;
}
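
Because all chat traffic flows through this interface, the Phase 2 migration is an implementation swap. A minimal in-memory provider (the supporting types are assumptions, since only the interface is defined above) shows the contract is implementable without Stream and doubles as a test double:

```typescript
// Supporting types are illustrative; the interface is repeated so the
// sketch is self-contained.
type Unsubscribe = () => void;
interface MessageInput { text: string }
interface Message { id: string; channelId: string; text: string }

interface ChatProvider {
  connect(userId: string, token: string): Promise<void>;
  sendMessage(channelId: string, message: MessageInput): Promise<Message>;
  subscribeToChannel(channelId: string, onMessage: (msg: Message) => void): Unsubscribe;
  getHistory(channelId: string, before?: string, limit?: number): Promise<Message[]>;
  markRead(channelId: string, messageId: string): Promise<void>;
  disconnect(): Promise<void>;
}

// In-memory stub: useful in tests, and a template for the Phase 2
// Centrifugo-backed implementation.
class InMemoryChatProvider implements ChatProvider {
  private history = new Map<string, Message[]>();
  private listeners = new Map<string, Set<(m: Message) => void>>();
  private nextId = 1;

  async connect(_userId: string, _token: string): Promise<void> {}

  async sendMessage(channelId: string, input: MessageInput): Promise<Message> {
    const msg: Message = { id: String(this.nextId++), channelId, text: input.text };
    const list = this.history.get(channelId) ?? [];
    list.push(msg);
    this.history.set(channelId, list);
    for (const fn of this.listeners.get(channelId) ?? []) fn(msg);
    return msg;
  }

  subscribeToChannel(channelId: string, onMessage: (m: Message) => void): Unsubscribe {
    const set = this.listeners.get(channelId) ?? new Set();
    set.add(onMessage);
    this.listeners.set(channelId, set);
    return () => set.delete(onMessage);
  }

  async getHistory(channelId: string, _before?: string, limit = 50): Promise<Message[]> {
    return (this.history.get(channelId) ?? []).slice(-limit);
  }

  async markRead(_channelId: string, _messageId: string): Promise<void> {}
  async disconnect(): Promise<void> {}
}
```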

Phase 2 (Growth): Build Centrifugo + Custom chat. Migrate off Stream.

Centrifugo handles the real-time transport (WebSocket connections, pub/sub, presence). Symfony Chat module handles business logic (message storage, permissions, offline push fallback). Estimated effort: ~15-28 days.

┌──────────┐  WebSocket  ┌──────────────┐  HTTP API  ┌──────────────┐
│  Mobile   │ ──────────▶│  Centrifugo   │◀──publish──│  Symfony      │
│  App      │◀────────── │  (Go, self-   │            │  Chat Module  │
│           │  real-time  │   hosted)     │            │               │
│           │  messages   └──────────────┘            │  PostgreSQL   │
│           │                                          │  Redis        │
│           │ ──HTTP────────────────────────────────▶  │  S3 (media)   │
│           │  send msg, history, read receipts         └──────────────┘
└──────────┘

Message flow: Client → Symfony API (validate, store) → Centrifugo publish → WebSocket to online recipients. Offline recipients get push via Notification service.
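
The publish step from Symfony to Centrifugo is a plain HTTP call. The sketch below (TypeScript for brevity; the real caller is the Symfony Chat module) builds the request. The /api/publish path and X-API-Key header are assumptions based on Centrifugo's server HTTP API and should be verified against the deployed version:

```typescript
// Describes the server-to-Centrifugo publish request without performing it,
// so the shape can be unit-tested. Endpoint path and auth header are
// assumptions; check the Centrifugo docs for the version you run.
interface PublishRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildPublishRequest(
  baseUrl: string,
  apiKey: string,
  channel: string,
  data: unknown,
): PublishRequest {
  return {
    url: `${baseUrl}/api/publish`,
    headers: {
      "Content-Type": "application/json",
      "X-API-Key": apiKey, // assumed auth scheme
    },
    body: JSON.stringify({ channel, data }),
  };
}

// After validating and storing a message, the API would publish it, e.g.:
// const req = buildPublishRequest("http://centrifugo:8000", key, "chat:42", msg);
// await fetch(req.url, { method: "POST", headers: req.headers, body: req.body });
```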

Phase 3 (Maturity): E2E encryption, voice messages, AI moderation.

Evaluate Centrifugo PRO for built-in push notification integration. Consider MLS protocol for encrypted pastoral conversations.

Real-time Beyond Chat

Chat isn’t the only real-time feature. Centrifugo serves as the unified real-time layer for the entire platform:

Real-time Architecture (all via Centrifugo):
├── Chat: message delivery, typing indicators, presence
├── Sunday Live:
│   ├── Slide sync: admin publishes → Centrifugo channel → all connected apps
│   ├── Live polls: admin creates → members vote → results aggregated in real-time
│   └── Prayer wall: member submits → moderator approves → broadcast
├── News: real-time post updates when user has the app open
├── Notifications: in-app notification badges (complement to push)
└── Presence: who's online in a group/event (Redis-backed via Centrifugo)

Push Notifications (separate from Centrifugo):
├── Firebase Cloud Messaging (Android)
├── APNs (iOS)
└── Symfony Messenger workers consume domain events → dispatch push

7. Identity & Authentication

Decision: External Identity Provider, Not Self-Built

Authentication is a solved problem — and a dangerous one to get wrong. Building your own auth means handling password hashing, session management, token rotation, brute-force protection, WebAuthn ceremony flows, social login OAuth dance, account recovery, MFA enrollment, and GDPR-compliant account deletion. One mistake = data breach = trust destroyed.

Use a dedicated, battle-tested identity provider. The app and admin backend are OIDC clients. The identity provider handles the rest.

Options Evaluated

| Dimension | Zitadel | Ory (Kratos + Hydra) | Keycloak | FusionAuth |
|---|---|---|---|---|
| What it is | All-in-one cloud-native identity platform | Modular headless identity infrastructure | Enterprise IAM server (CNCF) | Developer-focused identity platform |
| Language | Go | Go | Java (Quarkus) | Java |
| Fits JVM-free stack | ✅ (Go) | ✅ (Go) | ❌ JVM required | ❌ Java required |
| Passkeys / FIDO2 | ✅ Built-in, free | ✅ Kratos | ✅ Supported | ✅ (v1.52+) |
| Social login | ✅ Provider templates | ✅ Via config | ✅ Extensive | ✅ |
| Passwordless (magic link, OTP) | ✅ | ✅ | ✅ | ✅ |
| Multi-tenancy | ✅ Native “Organizations” — maps to denomination → church | ⚠️ Possible but manual — each tenant needs config | ⚠️ “Realms” — heavy, one realm per tenant | ✅ Native tenants |
| Custom login UI / branding | ✅ Login Experience API or hosted login | ✅ Fully headless — you build all UI | ⚠️ FreeMarker theme templates | ✅ Themes per tenant |
| Self-hosted GDPR | ✅ Go binary + PostgreSQL | ✅ Go binaries + PostgreSQL | ✅ But JVM = heavier | ✅ But Java = heavier |
| Open source | Apache 2.0 | Apache 2.0 | Apache 2.0 | Community = limited features |
| Operational weight | Light (single Go service) | Light per component, but multiple services (Kratos + Hydra + Oathkeeper) | Heavy (JVM, needs tuning at scale) | Medium |
| Event-driven architecture | ✅ Event-sourced internally | ⚠️ Webhooks for lifecycle events | ⚠️ SPI extensions | ⚠️ Webhooks |
| React Native integration | Standard OIDC (any OIDC lib) | Standard OIDC + custom login UI | Standard OIDC | Official RN SDK |
| Maturity | Growing fast, younger ecosystem | Proven, used by Fortune 500 | Most mature, largest community | Mature, 10M+ downloads |
| Managed cloud option | ✅ Zitadel Cloud | ✅ Ory Network | ❌ (Red Hat SSO is commercial Keycloak) | ✅ FusionAuth Cloud |

Recommendation: Zitadel (primary) or Ory (alternative)

Zitadel is the primary recommendation for these reasons:

  • Multi-tenancy maps directly to your domain. Zitadel’s “Organization” concept models denomination → church naturally. Each denomination is an Organization, each church can be a sub-organization. Members, roles, and branding are scoped per organization. This is exactly your data model.
  • No JVM. Your entire stack is JVM-free: Symfony (PHP), Centrifugo (Go), Redpanda (C++), PostgreSQL, Redis. Zitadel (Go) fits. Keycloak or FusionAuth (Java) would be the only JVM process in your infrastructure — adding heap tuning, GC monitoring, and memory overhead.
  • Event-sourced internally. Zitadel stores all state changes as events — philosophically aligned with your event-driven architecture. You can react to auth events (user registered, login failed, password changed) in your own system.
  • Passkeys and modern auth included at no extra cost. No premium tier required for FIDO2, passwordless, or social login.
  • Branding per organization. Each denomination gets its own login experience with their colors and logo — matching the “one app per movement” design philosophy.
  • Self-hostable on Hetzner. Go binary + PostgreSQL (or CockroachDB). Runs alongside your existing infrastructure with minimal resources.

Ory is the strong alternative for teams that want maximum control:

  • Fully headless. Ory provides zero UI — you build every login screen, registration form, and recovery flow yourself. This is both a strength (pixel-perfect branding, full control) and a cost (more frontend work).
  • Modular architecture. Kratos (identity), Hydra (OAuth2/OIDC), Oathkeeper (API gateway auth), Keto (permissions). You pick what you need. This modularity is powerful but means managing multiple services.
  • Battle-tested at scale. Used by large enterprises with millions of identities. The Ory Network managed cloud is an option if you don’t want to self-host.
  • API-first, developer-friendly. Everything is configured via YAML/JSON and APIs — no admin UI to click through (unless you build one or use the Ory Console in cloud).

When to choose Ory over Zitadel:

  • You want 100% custom login UI with zero constraints (Zitadel’s hosted login is customizable, but not without limits).
  • You need fine-grained permission management beyond RBAC (Ory Keto provides Zanzibar-style relationship-based access control).
  • You’re already familiar with the Ory ecosystem.

When to choose Zitadel over Ory:

  • You want faster time to market (Zitadel’s hosted login = no login UI to build).
  • Native multi-tenancy matters (Zitadel Organizations vs. manual Ory tenant config).
  • You want one service to manage, not three.

Integration Architecture

┌──────────────────────────────────────────────────────────────┐
│ IDENTITY PROVIDER (Zitadel or Ory)                           │
│                                                              │
│  ┌─────────────┐  ┌──────────────┐  ┌─────────────────────┐ │
│  │  Login UI   │  │  User Store  │  │  OIDC / OAuth2      │ │
│  │  (hosted or │  │  (PostgreSQL)│  │  Token Issuer       │ │
│  │   custom)   │  │              │  │                     │ │
│  └──────┬──────┘  └──────────────┘  └──────────┬──────────┘ │
│         │                                       │            │
└─────────┼───────────────────────────────────────┼────────────┘
          │                                       │
          ▼                                       ▼
┌──────────────┐                        ┌──────────────────┐
│  Mobile App  │──── OIDC login ───────▶│  Zitadel/Ory     │
│  (Expo)      │◀─── JWT tokens ────────│  Token endpoint  │
│              │                        └──────────────────┘
│              │                                 │
│              │──── API calls ─────▶ ┌──────────▼──────────┐
│              │     (Bearer JWT)     │  Symfony API        │
│              │◀──── responses ──────│  (validates JWT,    │
└──────────────┘                      │   extracts claims:  │
                                      │   user_id, org_id,  │
┌──────────────┐                      │   roles, church_id) │
│  Admin UI    │──── session/JWT ────▶│                     │
│  (Symfony)   │◀──── responses ──────│                     │
└──────────────┘                      └─────────────────────┘

Key integration points:

  • Mobile app uses expo-auth-session or react-native-app-auth for OIDC login flow → receives JWT access token + refresh token.
  • JWT contains custom claims: org_id (denomination), church_id, roles[], locale.
  • Symfony validates JWT signature against Zitadel/Ory’s JWKS endpoint (cached).
  • No user passwords ever touch your Symfony backend. Auth is fully delegated.
  • User provisioning: when a user registers via Zitadel/Ory, a webhook fires → Symfony creates the member profile in your People module.
  • Admin backend can use either OIDC (redirect to Zitadel login) or session-based auth via Symfony Security with Zitadel as the identity backend.
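
On the Symfony side, once the JWT signature is verified against the JWKS endpoint, the custom claims still need validation before any tenant-scoped query runs. A pure claim-extraction sketch (claim names follow the list above; type and function names are illustrative):

```typescript
// Custom claims the app expects in an access token. Exact claim names
// are configured in the IdP; these mirror the integration notes above.
interface AppClaims {
  sub: string;        // user_id
  org_id: string;     // denomination
  church_id: string;
  roles: string[];
  locale?: string;
}

class ClaimError extends Error {}

// Validate tenant context from an already-verified JWT payload.
// (Signature verification against the IdP's JWKS endpoint happens first,
// e.g. via a JOSE library, and is out of scope here.)
function extractTenantContext(payload: Record<string, unknown>): AppClaims {
  const sub = payload["sub"];
  const orgId = payload["org_id"];
  const churchId = payload["church_id"];
  const roles = payload["roles"];
  const locale = payload["locale"];
  if (typeof sub !== "string" || sub === "") throw new ClaimError("missing sub");
  if (typeof orgId !== "string" || orgId === "") throw new ClaimError("missing org_id");
  if (typeof churchId !== "string" || churchId === "") throw new ClaimError("missing church_id");
  if (!Array.isArray(roles) || !roles.every((r) => typeof r === "string")) {
    throw new ClaimError("missing roles");
  }
  return {
    sub,
    org_id: orgId,
    church_id: churchId,
    roles: roles as string[],
    locale: typeof locale === "string" ? locale : undefined,
  };
}
```

Rejecting tokens without org_id or church_id at the edge keeps tenant isolation enforceable in one place.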

Phasing

Phase 1 (MVP): Deploy Zitadel (Docker Compose on Hetzner) or use Zitadel Cloud free tier (up to 25K MAU). Configure one Organization per pilot denomination. Enable email/password + social login (Google, Apple). Passkeys available but not mandatory.

Phase 2: Enable passkeys as primary login method. Add denomination-specific branding. Implement SSO for denominations that have existing IdPs (Zitadel supports SAML/OIDC federation).

Phase 3: Evaluate Ory Keto for fine-grained permissions if RBAC becomes limiting. Add device management (revoke sessions per device).


8. Internationalization & AI Translation

Two distinct problems, two distinct solutions.

8a. Static UI Translations (i18n)

Button labels, navigation, error messages, screen titles, form placeholders. ~500-2000 translation keys that change with each release.

Tech stack:

Static Translation Stack:
├── Mobile App (React Native)
│   ├── i18next + react-i18next (industry standard)
│   ├── Language bundles: JSON files per locale (de.json, en.json, fr.json, pt.json)
│   ├── Pluralization, interpolation, nested keys, namespacing
│   ├── Lazy loading: load only the active language bundle
│   ├── Fallback chain: user locale → church default → "de"
│   └── Over-the-air updates: push translation fixes without app release

├── Admin Backend (Symfony)
│   ├── Symfony Translation component (built-in)
│   ├── XLIFF or YAML translation files
│   └── Same locale keys as mobile where applicable

└── Translation Management Platform (for non-developer translators)
    ├── PRIMARY: Tolgee (open source, self-hostable)
    │   ├── Self-hosted on Hetzner → full GDPR sovereignty
    │   ├── In-context translation editor (translate directly in the UI)
    │   ├── Machine translation via DeepL as starting point
    │   ├── GitLab integration → auto-sync translation files to repo
    │   ├── Translation memory → reuse across keys
    │   └── Community translation → church volunteers can contribute

    ├── ALTERNATIVE: Crowdin
    │   ├── Managed SaaS (not self-hosted)
    │   ├── Best-in-class GitLab/GitHub integration
    │   ├── Over-the-air SDK for React Native (push updates without release)
    │   ├── Excellent for open-source projects (free for OSS)
    │   └── Machine translation pre-fill + human review workflow

    └── ALTERNATIVE: Lokalise
        ├── Managed SaaS
        ├── Strong developer workflow, Figma plugin
        └── More enterprise-focused pricing

Recommendation: Tolgee. Self-hosted, open source, fits your sovereignty story. Church volunteers can be given access to translate their language without touching code. DeepL pre-fills translations, humans review and approve. Translation files sync to your GitLab repo automatically.
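
The fallback chain above (user locale → church default → "de") reduces to a small pure function; names are illustrative:

```typescript
// Resolve which language bundle to load, following the fallback chain:
// user locale → church default → "de". If none of those bundles exist,
// fall back to the first available one.
function resolveLocale(
  userLocale: string | undefined,
  churchDefault: string | undefined,
  available: Set<string>,
): string {
  for (const candidate of [userLocale, churchDefault, "de"]) {
    if (candidate && available.has(candidate)) return candidate;
  }
  return available.values().next().value ?? "de";
}
```

In practice i18next's fallbackLng option implements the same chain; the function just makes the policy explicit and testable.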

Workflow:

  1. Developer adds a new translation key in code: t('events.rsvp.confirm_button')
  2. Key appears in Tolgee dashboard as “untranslated”
  3. DeepL auto-suggests translations for all configured languages
  4. Volunteer translator reviews, adjusts, approves
  5. Tolgee pushes updated JSON to GitLab → next build includes the translation
  6. For urgent fixes: over-the-air update via Tolgee CDN (no app release needed)

8b. Dynamic Content Translation (AI-Powered)

A pastor writes a news post in German. A Brazilian member sees it in Portuguese. A French-speaking member sees it in French. Automatically, in real-time, without the pastor doing anything.

This is the USP. No church app competitor offers this. For multilingual churches (ICF Zürich, international congregations, Heilsarmee with French/German/Italian) this is transformative.

Architecture:

Dynamic Content Translation Flow:

  ┌─────────────┐        ┌──────────────────────┐
  │ Admin/Pastor │        │ Symfony News Module   │
  │ writes post  │──POST─▶│                      │
  │ in German    │        │ 1. Detect language    │
  └─────────────┘        │    (source_lang: "de")│
                          │ 2. Store original     │
                          │ 3. Emit event:        │
                          │    post.published      │
                          └──────────┬───────────┘


                          ┌──────────────────────┐
                          │ Translation Worker    │
                          │ (Symfony Messenger)   │
                          │                       │
                          │ For each church lang  │
                          │ that ≠ source_lang:   │
                          │                       │
                          │ ┌──────────────────┐  │
                          │ │ DeepL API        │  │
                          │ │ (or LibreTransl.) │  │
                          │ │ de → en           │  │
                          │ │ de → pt           │  │
                          │ │ de → fr           │  │
                          │ └──────────────────┘  │
                          │                       │
                          │ Store translations    │
                          │ in JSONB column       │
                          └──────────┬───────────┘


                          ┌──────────────────────┐
                          │ API Response          │
                          │                       │
                          │ GET /api/posts/123    │
                          │ Accept-Language: pt   │
                          │                       │
                          │ → Returns Portuguese  │
                          │   version with badge: │
                          │   "auto-translated"   │
                          │   + "view original"   │
                          └──────────────────────┘

Data model for translatable content:

-- Every translatable entity gets these columns
ALTER TABLE news_posts ADD COLUMN source_language VARCHAR(5) NOT NULL DEFAULT 'de';
ALTER TABLE news_posts ADD COLUMN translations JSONB DEFAULT '{}';
ALTER TABLE news_posts ADD COLUMN translation_status JSONB DEFAULT '{}';

-- Example stored data:
-- source_language: "de"
-- content: "Herzlich willkommen zum Gottesdienst..."  (original)
-- translations: {
--   "en": "Welcome to the worship service...",
--   "pt": "Bem-vindos ao culto...",
--   "fr": "Bienvenue au culte..."
-- }
-- translation_status: {
--   "en": {"status": "auto", "translated_at": "2026-03-10T10:00:00Z", "engine": "deepl"},
--   "pt": {"status": "reviewed", "translated_at": "2026-03-10T10:05:00Z", "reviewed_by": "user-uuid"},
--   "fr": {"status": "auto", "translated_at": "2026-03-10T10:00:00Z", "engine": "deepl"}
-- }

-- Same pattern applies to:
-- event titles + descriptions
-- group names + descriptions  
-- pinboard posts
-- sermon summaries
-- announcements

Translation engine strategy:

Translation Engine (phased):
├── Phase 1 (MVP): DeepL API
│   ├── Best quality for European languages (DE, EN, FR, IT, PT, ES, NL)
│   ├── €20/month Pro plan = 500K characters (covers a LOT of church posts)
│   ├── API is simple: POST text + source_lang + target_lang → translated text
│   ├── Language detection API included
│   └── Formal/informal toggle (useful: "Sie" vs "du" in German)

├── Phase 2: Add caching + cost optimization
│   ├── Cache translations in PostgreSQL (never translate the same content twice)
│   ├── Translation memory: reuse phrases across posts
│   ├── Batch translations: queue during off-peak, translate overnight
│   └── Monitor DeepL costs → if > €100/month, evaluate alternatives

├── Phase 3: Hybrid engine
│   ├── DeepL for high-visibility content (announcements, sermon summaries)
│   ├── LibreTranslate (self-hosted, open source) for high-volume/lower-priority
│   │   ├── Runs on your Hetzner infra → zero per-character cost
│   │   ├── Quality is decent but noticeably below DeepL for DE↔PT, DE↔FR
│   │   └── Good enough for pinboard posts and chat message translation
│   └── LLM-based (Claude/GPT API) for nuanced content requiring context
│       └── Sermon summaries, theological content where tone matters

└── Future: On-device translation
    ├── Apple/Google on-device models improving rapidly
    ├── Translate at read-time on the device → zero API cost
    └── Quality approaching cloud APIs for common language pairs
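
The Phase 2 "never translate the same content twice" rule needs a stable cache key. A sketch using a content hash (the key shape is an assumption; any deterministic scheme works):

```typescript
import { createHash } from "node:crypto";

// Deterministic cache key for a (content, source, target) triple, so the
// same text never hits the DeepL API twice. The "tr:" prefix and truncated
// digest are illustrative choices.
function translationCacheKey(
  content: string,
  sourceLang: string,
  targetLang: string,
): string {
  const digest = createHash("sha256")
    .update(`${sourceLang}:${targetLang}:${content}`)
    .digest("hex");
  return `tr:${sourceLang}:${targetLang}:${digest.slice(0, 16)}`;
}
```

The key works equally as a PostgreSQL lookup column or a Redis key, and short chat phrases (which repeat often) benefit the most.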

API contract:

GET /api/posts/123
Headers:
  Accept-Language: pt
  
Response:
{
  "id": "123",
  "content": "Bem-vindos ao culto...",           // Portuguese (requested)
  "source_language": "de",
  "display_language": "pt",
  "translation_meta": {
    "status": "auto",                             // "auto" | "reviewed" | "original"
    "engine": "deepl",
    "translated_at": "2026-03-10T10:00:00Z"
  },
  "original_available": true                      // show "view original" link
}
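
The response shape above can be produced by a small resolver over the stored columns (source_language, translations, translation_status). Type and function names are illustrative:

```typescript
// Mirrors the translatable-entity columns described earlier.
interface TranslatablePost {
  id: string;
  content: string;                       // original text
  source_language: string;
  translations: Record<string, string>;  // JSONB "translations" column
  translation_status: Record<
    string,
    { status: "auto" | "reviewed"; engine?: string; translated_at?: string }
  >;
}

interface ResolvedPost {
  id: string;
  content: string;
  source_language: string;
  display_language: string;
  translation_meta: { status: "auto" | "reviewed" | "original"; engine?: string; translated_at?: string };
  original_available: boolean;
}

// Pick the content variant for the requested language. If no translation
// exists yet (worker still processing), the original is returned and the
// UI shows the "translation pending" indicator.
function resolvePost(post: TranslatablePost, requested: string): ResolvedPost {
  if (requested === post.source_language || !(requested in post.translations)) {
    return {
      id: post.id,
      content: post.content,
      source_language: post.source_language,
      display_language: post.source_language,
      translation_meta: { status: "original" },
      original_available: false,
    };
  }
  const meta = post.translation_status[requested] ?? { status: "auto" as const };
  return {
    id: post.id,
    content: post.translations[requested],
    source_language: post.source_language,
    display_language: requested,
    translation_meta: meta,
    original_available: true,
  };
}
```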

Member-facing UX:

  • Posts appear in the member’s preferred language automatically.
  • Subtle badge: “Auto-translated from German” with option to “View original.”
  • If a human translator has reviewed the translation: no badge (treated as official).
  • Members can toggle “always show original” in settings.
  • If no translation exists yet (translation worker still processing): show original with a “translation pending” indicator.

Admin/translator workflow:

  • Pastor publishes post → auto-translated within seconds (async worker).
  • Optional: church can enable “review required” mode → translations are generated but held until a volunteer reviews them.
  • Translation review UI in admin backend: side-by-side original + auto-translation, editable.
  • Reviewed translations replace auto-translations and lose the “auto-translated” badge.

Chat message translation (Phase 3 feature):

  • Different from post translation — must be near-instant (< 500ms perceived).
  • Translate on read, not on write (messages are sent in original language, stored once).
  • Use DeepL API with aggressive caching (short phrases repeat often).
  • Or: leverage on-device translation APIs (iOS 17+ and Android 14+ have built-in translation).
  • Display: original message + translated text below, with language indicator.

Cost projection (DeepL API):

| Scale | Posts/month | Avg. chars/post | Target languages | Total chars | DeepL cost |
|---|---|---|---|---|---|
| MVP (5 churches) | ~100 | ~500 | 2 | ~100K | Free tier / ~€5 |
| Growth (50 churches) | ~1,000 | ~500 | 3 | ~1.5M | ~€20-30/mo |
| Scale (200 churches) | ~5,000 | ~500 | 4 | ~10M | ~€80-120/mo |

At scale, this is remarkably affordable compared to the value it delivers.


9. Data Layer

PostgreSQL — The Right Choice, With Nuance

PostgreSQL is excellent for this project. The JSONB support means you don’t need a separate document store. But there are architectural considerations at scale:

Per-service databases. Each self-contained system gets its own PostgreSQL database (or schema). No cross-service joins. If the Groups service needs member names, it consumes member.updated events and maintains its own local projection.

Read replicas for Sunday peak. The News, Events, and Sunday services will have extreme read:write ratios (1000:1 on a Sunday). Deploy read replicas and route read traffic there. PostgreSQL streaming replication makes this straightforward.

PgBouncer is mandatory. PHP-FPM creates a new database connection per request (unless using persistent connections, which have their own issues). At 500 concurrent requests, that’s 500 database connections. PgBouncer pools connections and reduces the actual database connection count to ~50.

JSONB for flexibility, not for relationships. Use JSONB for: notification preferences, group configuration/settings, flexible form data, content blocks in news posts. Don’t use JSONB for: anything you need to JOIN on, foreign keys, or reporting queries. Keep the relational model clean.

Data Architecture:
├── PostgreSQL 16+
│   ├── Per-service databases (logical separation)
│   ├── PgBouncer (connection pooling — essential for PHP)
│   ├── Streaming replication (read replicas for peak load)
│   ├── JSONB columns for flexible/config data
│   ├── Row-Level Security (multi-tenancy — denomination → church isolation)
│   └── pg_cron for scheduled jobs (digest notifications, post expiry)

├── Redis 7+
│   ├── Caching (API response cache, session cache, feed cache)
│   ├── Queues (Symfony Messenger transport)
│   ├── Rate limiting (API rate limits per tenant)
│   ├── Pub/sub (lightweight real-time — typing indicators, presence)
│   ├── Redis Streams (Phase 1 event bus — before Kafka)
│   └── Ephemeral data (online status, draft autosave)

├── S3-compatible Object Storage
│   ├── Cloudflare R2 (zero egress fees — huge cost advantage over AWS S3)
│   ├── Media uploads (images, audio, video, documents)
│   ├── Sermon recordings and podcast files
│   ├── Giving receipts (PDF generation → S3)
│   └── Backup storage

├── CDN
│   ├── Cloudflare (you're already familiar with it)
│   ├── Static assets (app bundles, images, fonts)
│   ├── API response caching (Cache-Control headers on public endpoints)
│   ├── Media delivery (R2 → Cloudflare CDN, automatic)
│   └── DDoS protection + WAF

├── Search (phased approach — no ELK, no Algolia)
│   ├── Phase 1: PostgreSQL full-text search
│   │   ├── tsvector columns + GIN indexes on searchable entities
│   │   ├── pg_trgm extension for fuzzy/typo-tolerant matching
│   │   ├── Good enough for < 50K records across all entities
│   │   └── Zero additional infrastructure
│   ├── Phase 2: Meilisearch (when PostgreSQL FTS feels limiting)
│   │   ├── Self-hosted, Rust-based, single binary, < 500MB RAM
│   │   ├── Sub-50ms typo-tolerant search with relevance ranking
│   │   ├── GDPR-friendly (runs on your Hetzner server)
│   │   └── Alternatives considered and rejected:
│   │       ├── Elasticsearch/ELK — massively over-engineered, JVM overhead
│   │       ├── Algolia — excellent DX but managed = cost + GDPR processor
│   │       └── Typesense — viable alternative to Meilisearch, similar profile
│   └── Search targets:
│       ├── People search (name, role, group, church)
│       ├── Event search (title, description, date, type)
│       ├── News search (full-text across posts)
│       └── Group discovery (name, type, tags, location)

└── Backups
    ├── PostgreSQL: pg_basebackup + WAL archiving → S3
    ├── Point-in-time recovery (PITR) — essential for financial data (giving)
    └── Daily logical backups (pg_dump) for cross-region restore testing

Multi-Tenancy Model

This is architecturally critical. You have three levels: denomination → church → user.

Recommendation: Schema-per-denomination, Row-Level Security per church.

  • Each denomination gets its own PostgreSQL schema (or database, depending on scale). This provides hard isolation at the denomination level — ICF data never touches FEG data, even accidentally.
  • Within a schema, church-level isolation uses Row-Level Security (RLS) or application-level filtering on a church_id column.
  • This balances security (denomination isolation) with efficiency (no need for hundreds of separate database instances).

Tenant Hierarchy:
├── Denomination (schema-level isolation)
│   ├── ICF Movement → schema: icf
│   ├── FEG → schema: feg
│   └── Viva Church → schema: viva
│       ├── Church/Location (row-level isolation via church_id)
│       │   ├── Viva Zürich → church_id: 1
│       │   ├── Viva Basel → church_id: 2
│       │   └── Viva Bern → church_id: 3
│       │       ├── Groups, Events, People → all filtered by church_id
│       │       └── Cross-church content → denomination-level (schema root)
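The resolution from token claims to a (schema, church_id) scope can be sketched in a few lines — Python pseudocode standing in for the Symfony request listener; the claim names and the schema map are illustrative assumptions, not the actual Zitadel claim layout:

```python
from dataclasses import dataclass

# Illustrative only: real claim names depend on the Zitadel configuration.
SCHEMA_BY_DENOMINATION = {"icf": "icf", "feg": "feg", "viva": "viva"}

@dataclass(frozen=True)
class TenantScope:
    schema: str      # PostgreSQL schema → hard denomination isolation
    church_id: int   # row-level filter within that schema

def resolve_scope(claims: dict) -> TenantScope:
    """Derive the tenant scope for a request from verified JWT claims."""
    schema = SCHEMA_BY_DENOMINATION.get(claims["denomination"])
    if schema is None:
        raise PermissionError(f"unknown denomination: {claims['denomination']}")
    return TenantScope(schema=schema, church_id=int(claims["church_id"]))
```

Every query in the request then runs with `search_path` set to `scope.schema` and the RLS session variable set to `scope.church_id` — both isolation levels are enforced before any module code runs.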

10. Infrastructure & DevOps

⚠️ Challenge: Kubernetes at MVP

You said: “probably kubernetes for scaling with keda.”

Architect’s pushback: Kubernetes is the right answer for the wrong phase. At MVP, a single Hetzner server with Docker Compose runs everything for < €50/month. K8s adds €200-500/month in overhead before a single user connects.

Cloud Provider: Hetzner Cloud (Confirmed)

| Dimension | Hetzner Cloud | AWS / GCP | DigitalOcean | Scaleway |
|---|---|---|---|---|
| CPX41 (8 vCPU, 16GB) | ~€16/mo | ~€120/mo (comparable EC2) | ~€96/mo | ~€45/mo |
| Managed PostgreSQL | Not available (use dedicated server) | €50-200/mo (RDS) | €15-60/mo | €15-50/mo |
| Block Storage (100GB) | ~€5/mo | ~€10/mo (EBS) | ~€10/mo | ~€8/mo |
| Load Balancer | ~€6/mo | ~€18/mo (ALB) | ~€12/mo | ~€10/mo |
| Bandwidth | 20TB included free | $$$$ (egress fees) | 4-8TB included | Unlimited |
| EU-based / GDPR | ✅ Germany/Finland | ⚠️ EU regions available | ⚠️ EU regions available | ✅ France/Netherlands |
| Total for MVP setup | ~€30-50/mo | ~€150-300/mo | ~€80-120/mo | ~€60-100/mo |

Decision: Hetzner. 3-5x cheaper than AWS for equivalent specs. EU-based (GDPR native). 20TB bandwidth included (AWS egress fees alone would cost more than the entire Hetzner bill). You already know it. The only downside is no managed Kubernetes and no managed PostgreSQL — both acceptable because you don’t need managed K8s at MVP, and PostgreSQL on a dedicated server with automated backups is production-ready.

For Phase 3 (if K8s is needed): consider Hetzner’s k3s-compatible nodes, or migrate the compute layer to Scaleway Kapsule while keeping Hetzner for databases.

External Service Cost Audit

Every external service must justify its existence. Two categories: commoditized infrastructure (open-source alternatives are equally good → self-host) and quality-critical services (users notice the difference → pay for quality).

Commoditized infrastructure — self-host, €0:

| Service | Solution | Why not pay? |
|---|---|---|
| Auth / Identity | Zitadel (self-hosted) | Zitadel does everything Auth0/Clerk does. Passkeys, social login, multi-tenancy. No quality gap. |
| Translation mgmt | Tolgee (self-hosted) | Same workflow as Crowdin/Lokalise. In-context editor, GitLab sync, machine pre-fill. |
| Error tracking | Sentry OSS (self-hosted) | Identical product to Sentry Cloud. Same codebase, self-hosted. |
| Monitoring | Prometheus + Grafana + Uptime Kuma | Industry standard. Same dashboards Grafana Cloud sells. |
| Search (MVP) | PostgreSQL FTS (tsvector + pg_trgm) | Built into your existing database. No additional service. |
| Search (Phase 2) | Meilisearch (self-hosted) | As fast as Algolia, zero per-query cost. |
| Real-time | Centrifugo (self-hosted) | Proven at 1M+ connections. No per-connection pricing like Pusher/Ably. |
| Event bus | Redis Streams → Redpanda | Both self-hosted. No per-message pricing. |
| CDN + S3 | Cloudflare free plan + R2 | R2 has zero egress fees. Free tier is generous. |
| Push notifications | Firebase FCM + APNs | Free tier covers millions of messages. |
| CI/CD | GitLab CI/CD | Free tier (400 min/mo, or unlimited with self-hosted runners). |
| IaC | OpenTofu | Open-source Terraform fork, Linux Foundation backed. |
| Email (dev) | Mailpit | Open-source email capture for dev/staging. |

Quality-critical services — worth paying for:

| Service | Solution | Why pay? | Cost |
|---|---|---|---|
| Content translation | DeepL API | LibreTranslate quality is noticeably worse for European languages. Members reading a sermon summary in Portuguese will feel the difference. DeepL is excellent. | ~€20-30/mo |
| Payment processing | Stripe / TWINT | PCI compliance alone justifies not self-hosting payments. No alternative. | Transaction fees only |
| Transactional email | Mailgun (EU region) | EU-hosted infrastructure for GDPR. Proven deliverability. Symfony Mailer integration. You have working experience. | Free tier → ~€35/mo at scale |
| App Store distribution | Apple + Google | Non-negotiable. | ~€125/year |
| Domain + DNS | Cloudflare | ~€10/year for domain registration. | ~€10/year |

Infrastructure as Code: OpenTofu from Day One

Why OpenTofu over Terraform: Terraform switched to BSL license in 2023. OpenTofu is the community fork under Linux Foundation, fully open source (MPL-2.0), API-compatible. Same HCL syntax, same providers, same workflow. Use OpenTofu from the start — zero risk of license issues as you scale.

Why IaC from day one: As a solo founder, you cannot afford to have infrastructure knowledge only in your head. If you get sick for a week, a contributor or future hire must be able to tofu apply and have a working environment. IaC is not overhead — it’s your insurance policy.

infrastructure/
├── environments/
│   ├── prod/
│   │   ├── main.tf           # Production server definitions
│   │   ├── variables.tf      # Prod-specific variables (server sizes, IPs)
│   │   ├── terraform.tfvars  # Prod values (gitignored, stored in vault)
│   │   └── backend.tf        # Remote state (S3/R2 backend)
│   ├── staging/
│   │   ├── main.tf           # Staging (smaller servers, same topology)
│   │   ├── variables.tf
│   │   └── terraform.tfvars
│   └── dev/
│       └── main.tf           # Local Docker Compose (no cloud resources)

├── modules/
│   ├── hetzner-server/       # Reusable: server + firewall + DNS
│   ├── hetzner-network/      # VPC, subnets, firewall rules
│   ├── docker-stack/         # Docker Compose deployment via SSH
│   ├── cloudflare-dns/       # DNS records + R2 bucket
│   ├── backup/               # Automated backup configuration
│   └── monitoring/           # Prometheus + Grafana + Uptime Kuma + Sentry

├── scripts/
│   ├── db-clone.sh           # Clone prod DB → staging (sanitized)
│   ├── db-restore.sh         # Restore from backup
│   ├── setup-server.sh       # Bootstrap new server (Docker, firewall, SSH keys)
│   └── rotate-secrets.sh     # Secret rotation automation

└── Makefile                  # Single entry point for all operations
    # make env=prod plan
    # make env=prod apply
    # make env=staging clone-db
    # make env=staging deploy

Hetzner provider for OpenTofu:

# modules/hetzner-server/main.tf
terraform {
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "~> 1.45"
    }
  }
}

resource "hcloud_server" "app" {
  name        = "${var.environment}-app"
  server_type = var.server_type  # "cpx21" for staging, "cpx41" for prod
  image       = "ubuntu-24.04"
  location    = "nbg1"          # Nuremberg, Germany
  ssh_keys    = [hcloud_ssh_key.deploy.id]
  
  labels = {
    environment = var.environment
    role        = "app"
  }
}

resource "hcloud_server" "db" {
  name        = "${var.environment}-db"
  server_type = var.db_server_type  # "cpx11" for staging, "ccx23" for prod
  image       = "ubuntu-24.04"
  location    = "nbg1"
  ssh_keys    = [hcloud_ssh_key.deploy.id]

  labels = {
    environment = var.environment
    role        = "database"
  }
}

resource "hcloud_firewall" "db" {
  name = "${var.environment}-db-firewall"
  rule {
    direction = "in"
    protocol  = "tcp"
    port      = "5432"
    # hcloud expects CIDR notation — a bare IP address is rejected by the API
    source_ips = ["${hcloud_server.app.ipv4_address}/32"]  # Only app server can reach DB
  }
}

Environments

Three environments, same topology, different sizes.

┌─────────────────────────────────────────────────────────────┐
│ ENVIRONMENTS                                                │
│                                                             │
│ ┌─────────────┐  ┌──────────────┐  ┌─────────────────────┐  │
│ │ DEV         │  │ STAGING      │  │ PRODUCTION          │  │
│ │             │  │              │  │                     │  │
│ │ Docker      │  │ Hetzner      │  │ Hetzner             │  │
│ │ Compose     │  │ CPX11 (2v/2G)│  │ CPX41 (8v/16G)      │  │
│ │ on laptop   │  │ + CPX11 (DB) │  │ + CCX23 (DB)        │  │
│ │             │  │              │  │                     │  │
│ │ Hot reload  │  │ Prod clone   │  │ The real thing      │  │
│ │ Seed data   │  │ Sanitized DB │  │ Real data           │  │
│ │ Mailpit     │  │ Mailpit      │  │ Mailgun (EU)        │  │
│ │ Sentry mock │  │ Sentry OSS   │  │ Sentry OSS          │  │
│ │             │  │              │  │                     │  │
│ │ Cost: €0    │  │ Cost: ~€12/mo│  │ Cost: ~€35/mo       │  │
│ └─────────────┘  └──────────────┘  └─────────────────────┘  │
│                                                             │
│ All provisioned via OpenTofu. Same Docker Compose structure.│
│ Only difference: server sizes and data.                     │
└─────────────────────────────────────────────────────────────┘

Dev environment: Docker Compose on your laptop. All services (Symfony, PostgreSQL, Redis, Zitadel, Tolgee, Sentry, Mailpit for email capture) in containers. make dev-up starts everything. Hot-reload via Symfony’s built-in server + Expo dev server. Seed data via Doctrine Fixtures.

Staging environment: Identical topology to production, smaller servers. Deployed via the same pipeline as production. Used for: final testing before production deploy, demo to pilot churches, load testing with k6. Database is a sanitized clone of production (see DB cloning below).

Production environment: The real thing. Deployed only after staging passes all checks.

Database Cloning: Prod → Staging

One-command clone with PII sanitization. Essential for realistic testing without GDPR violations.

#!/bin/bash
# scripts/db-clone.sh — Clone production DB to staging (sanitized)

set -euo pipefail

PROD_HOST="${PROD_DB_HOST}"
STAGING_HOST="${STAGING_DB_HOST}"
DUMP_FILE="/tmp/prod-dump-$(date +%Y%m%d).sql.gz"

echo ">>> Dumping production database..."
ssh ${PROD_HOST} "pg_dump -Fc -h localhost -U app church_app" | gzip > ${DUMP_FILE}

echo ">>> Restoring to staging..."
gunzip -c ${DUMP_FILE} | ssh ${STAGING_HOST} "pg_restore -h localhost -U app -d church_app --clean --if-exists"

echo ">>> Sanitizing PII..."
ssh ${STAGING_HOST} psql -h localhost -U app church_app << 'SQL'
  -- Anonymize member data
  UPDATE people SET
    first_name = 'Member',
    last_name = 'User-' || id::text,
    email = 'user-' || id::text || '@staging.local',
    phone = NULL,
    address = NULL;
  
  -- Scramble giving data (keep amounts for testing, remove identity link)
  UPDATE donations SET
    donor_notes = NULL;
  
  -- Clear chat messages
  TRUNCATE chat_messages CASCADE;
  
  -- Clear sessions and tokens
  TRUNCATE user_sessions CASCADE;
  
  -- Reset admin passwords to staging default
  -- (auth is in Zitadel, so this is Zitadel's staging instance)
  
  VACUUM FULL;
SQL

echo ">>> Staging database ready (sanitized)."
rm ${DUMP_FILE}

Triggered via: make env=staging clone-db or scheduled weekly via cron.

Super Admin Dashboard

A private, internal-only admin panel for you as the platform operator — separate from the church admin backend.

Super Admin Dashboard (Symfony + Twig + Vue):
├── Route: /superadmin (IP-restricted + separate auth)
├── Authentication: Zitadel with dedicated "superadmin" role + IP allowlist

├── Denominations & Churches
│   ├── List all denominations, churches, member counts
│   ├── Create/edit/disable denominations
│   ├── Onboard new church (wizard: create org in Zitadel, seed config, invite admin)
│   ├── View church health: active users, engagement metrics, storage usage
│   └── Feature flags per denomination/church (toggle features for rollout)

├── Users & Identity
│   ├── Cross-denomination user search
│   ├── View user profile (across all modules)
│   ├── Impersonate user (for debugging — audit logged)
│   ├── Force password reset / disable account
│   └── GDPR: full data export, right-to-deletion execution

├── System Health
│   ├── Embedded Grafana dashboards (server metrics, API latency, error rates)
│   ├── Embedded Uptime Kuma (uptime status)
│   ├── Background job queue status (Symfony Messenger: pending, failed, processing)
│   ├── Redis stats (memory, connections, hit rate)
│   ├── PostgreSQL stats (connections, slow queries, table sizes)
│   └── Centrifugo stats (connections, channels, messages/sec) — Phase 2

├── Deployments & Infrastructure
│   ├── Current deployed version (git SHA, deploy timestamp)
│   ├── Recent deploys log
│   ├── Trigger staging deploy (button → GitLab CI/CD pipeline trigger)
│   ├── Database migration status
│   └── OpenTofu state summary (servers, IPs, resources)

├── Content & Moderation
│   ├── Flagged content queue (across all churches)
│   ├── Global announcements (push to all denominations)
│   └── Translation status dashboard (pending, auto, reviewed per language)

├── Revenue & Billing (when you monetize)
│   ├── Church subscription status
│   ├── Usage metrics per church (MAU, storage, API calls)
│   └── Invoice generation

└── AI Operations (Phase 2+)
    ├── Translation queue status (DeepL API usage, costs, failures)
    ├── Sermon transcription queue
    └── AI model cost tracking

Build approach: Start with 2-3 pages in MVP (denominations list, system health, user search). Grow it as the platform grows. This is your cockpit — invest in it proportionally.

CI/CD Pipeline

Pipeline (GitLab CI/CD — free for private repos, unlimited with self-hosted runners):

├── On push to any branch:
│   ├── PHP lint + PHPStan (level 8 — strict)
│   ├── PHP unit tests (PHPUnit)
│   ├── PHP integration tests (against PostgreSQL testcontainer)
│   ├── TypeScript lint + type check (mobile app)
│   ├── React Native Jest tests
│   ├── Security audit (composer audit + npm audit)
│   └── Docker build (verify it builds)

├── On PR merge to main:
│   ├── All above +
│   ├── E2E tests (Detox for mobile, or Maestro)
│   ├── API contract tests (OpenAPI spec validation)
│   ├── Database migration test (apply all migrations on fresh DB)
│   ├── Build Docker images → push to GitLab Container Registry (built-in)
│   ├── OpenTofu plan (dry run for staging)
│   ├── Deploy to staging (automatic)
│   └── Run smoke tests against staging

├── On manual approval (or git tag):
│   ├── OpenTofu plan (dry run for production)
│   ├── Deploy to production
│   ├── Run smoke tests against production
│   ├── Notify Sentry of new release
│   └── Database migration (with pre-deploy health check)

├── Weekly (scheduled):
│   ├── Clone prod DB → staging (sanitized)
│   ├── Dependency vulnerability scan (Renovate + composer audit — Dependabot is GitHub-only)
│   ├── Database backup verification (restore test on throwaway server)
│   └── Performance baseline test (k6 load test against staging)

└── Deployment mechanics:
    ├── Docker images built in CI → pushed to GitLab Container Registry (built-in)
    ├── Staging/prod: SSH into server → docker compose pull → docker compose up -d
    ├── Zero-downtime: health check → swap → drain old containers
    ├── Rollback: docker compose up -d --force-recreate with previous image tag
    └── All orchestrated via Makefile targets callable from GitLab CI/CD

GitLab CI/CD cost: The free tier includes 400 CI/CD minutes/month on shared runners. At ~5 min per pipeline run and ~10 runs/day, you would burn ~1,500 minutes/month — well past the free tier — so register a self-hosted runner on your Hetzner server (€0 extra, unlimited minutes).

App Store / Play Store publishing: Intentionally manual. A new denomination app is published maybe 2-3 times per year — automating this with Fastlane or CI-based store deployment would be over-engineering. A step-by-step checklist in internal docs (screenshots, provisioning profiles, store listing copy, review guidelines) is the right approach. OTA updates via Expo (for JS bundle changes) still work without store re-submission, so most updates never touch the store process anyway. → TODO: Write internal publishing checklist when first denomination is onboarded.

Phase 1: MVP Infrastructure (0-50 churches, < 5,000 MAU)

Infrastructure — Phase 1:
├── Hosting: Hetzner Cloud
│   ├── 1x App Server (CPX21: 3 vCPU, 4GB RAM, €7.50/mo)
│   │   ├── Docker Compose running:
│   │   │   ├── Symfony (PHP-FPM 8.3 + nginx)
│   │   │   ├── Redis 7
│   │   │   ├── Zitadel (Go binary, ~200MB RAM)
│   │   │   ├── Tolgee (translation management)
│   │   │   ├── Sentry (self-hosted, ~1GB RAM — or defer to Phase 2)
│   │   │   ├── Uptime Kuma (monitoring)
│   │   │   ├── Symfony Messenger workers (supervisord)
│   │   │   └── Prometheus + Grafana (monitoring)
│   │   └── Note: CPX21 may be tight with all services. Upgrade to CPX31 (€13/mo)
│   │         if memory pressure occurs. Still far cheaper than any alternative.
│   │
│   ├── 1x Database Server (CPX11: 2 vCPU, 2GB RAM, €4.50/mo)
│   │   ├── PostgreSQL 16 + PgBouncer
│   │   ├── Daily automated backups → Cloudflare R2
│   │   └── Firewall: only app server IP on port 5432
│   │
│   ├── 1x Staging Server (CPX11: 2 vCPU, 2GB RAM, €4.50/mo)
│   │   ├── Same Docker Compose stack, smaller
│   │   ├── Staging DB (sanitized prod clone)
│   │   └── Used for testing + demos
│   │
│   └── Cloudflare (free plan)
│       ├── DNS + CDN + DDoS protection
│       ├── R2: media storage (10GB free, then $0.015/GB/mo)
│       └── R2: database backups + OpenTofu state

├── Provisioning: OpenTofu
│   ├── All servers defined in HCL
│   ├── Firewall rules, SSH keys, DNS records
│   ├── State stored in Cloudflare R2
│   └── `make env=prod apply` creates everything from scratch

├── Deployment: GitLab CI/CD + Docker Compose
│   ├── Images built in CI → GitLab Container Registry (built-in)
│   ├── Deploy = SSH + docker compose pull + up -d
│   └── Rollback = deploy previous image tag

├── Monitoring (all self-hosted, €0):
│   ├── Uptime Kuma (uptime + status page)
│   ├── Prometheus + Grafana (metrics + dashboards)
│   ├── Sentry OSS (error tracking + performance)
│   └── Alerting: Grafana → Telegram/email (free) or PagerDuty free tier

├── Total monthly cost:
│   ├── App server:     €7.50 (CPX21) or €13 (CPX31)
│   ├── DB server:      €4.50
│   ├── Staging server:  €4.50
│   ├── Cloudflare:      €0 (free plan)
│   ├── Firebase:        €0 (free tier)
│   ├── GitLab CI/CD:    €0 (self-hosted runner)
│   ├── Domain:          ~€1/mo
│   ├── Apple Dev:       ~€8.50/mo (€99/year)
│   ├── Google Dev:      ~€2/mo (€25 one-time, amortized)
│   └── ─────────────────────────────────
│       TOTAL:          ~€30-35/month

That’s €30-35/month for a fully automated, multi-environment, monitored production platform. Try doing that on AWS.

Phase 2: Growth (50-200 churches, 5,000-50,000 MAU)

Infrastructure — Phase 2:
├── Hosting: Hetzner, separated services
│   ├── 1x App Server (CPX41: 8 vCPU, 16GB, ~€16/mo)
│   ├── 1x Worker Server (CPX21: 3 vCPU, 4GB, ~€7.50/mo)
│   │   └── Symfony Messenger consumers + Centrifugo
│   ├── 1x DB Primary (CCX13: 2 vCPU, 8GB dedicated, ~€14/mo)
│   ├── 1x DB Read Replica (CPX21: ~€7.50/mo)
│   ├── 1x Staging (CPX21: ~€7.50/mo)
│   └── Hetzner Load Balancer (~€6/mo)

├── New services (all self-hosted, €0):
│   ├── Centrifugo (on worker server — chat + live + presence)
│   ├── Meilisearch (on app server — if PostgreSQL FTS outgrown)
│   └── Redis: dedicated instance or Hetzner managed

├── External costs:
│   ├── DeepL API: ~€20-30/mo (content translation)
│   └── Cloudflare R2: ~€1-5/mo (beyond free tier)

├── Total: ~€80-120/month

Phase 3: Scale (200+ churches, 50,000+ MAU)

Infrastructure — Phase 3:
├── Stay on Hetzner as long as possible
│   ├── Hetzner dedicated servers for DB (better price/performance than cloud)
│   ├── k3s (lightweight Kubernetes) on Hetzner Cloud servers
│   │   └── k3s is free, runs on standard Hetzner VMs, no managed K8s needed
│   ├── KEDA for auto-scaling workers
│   ├── Redpanda (single binary, on dedicated server)
│   ├── PostgreSQL with streaming replication (multiple read replicas)
│   └── Meilisearch cluster

├── Only move to managed services when self-hosting becomes the bottleneck:
│   ├── Managed PostgreSQL (Supabase, Neon) — only if DBA tasks consume > 4h/week
│   ├── Managed Redpanda — only if event throughput requires dedicated ops
│   └── Grafana Cloud — only if self-hosted Grafana can't handle the metrics volume

├── Total: ~€300-800/month (still far below AWS equivalent)
└── Revenue at this scale: 200+ churches × €50-150/mo = €10K-30K MRR

11. Security, Data Privacy & Compliance

Data privacy is not a compliance checkbox — it’s a competitive moat. Swiss and German churches actively distrust US-hosted platforms. “All data on Swiss/EU servers, fully audited, GDPR + nDSG compliant” sells.

11a. Regulatory Landscape

| Regulation | Jurisdiction | Key Requirements | Impact on Architecture |
|---|---|---|---|
| GDPR (EU) | All EU/EEA | Consent, right to deletion, data portability, DPA, breach notification (72h), data minimization | Core compliance framework. Everything below serves GDPR. |
| nDSG / FADP (Switzerland) | Switzerland | Similar to GDPR but independent. Data export restrictions, information duty, DPA required, breach notification to FDPIC. | Swiss hosting preferred. Hetzner (DE/FI) acceptable under adequacy decision, but Swiss servers score extra trust points. |
| BDSG (Germany) | Germany | GDPR supplement. Stricter rules on employee data, church-specific data protection law (KDG for Catholic, DSG-EKD for Protestant). | Church-specific privacy law may apply for some denominations. The platform must support denomination-level privacy policies. |
| KDG / DSG-EKD | German churches | Catholic (KDG) and Protestant (DSG-EKD) have their own data protection regulations with their own supervisory authorities. | Some German church customers may require compliance with KDG/DSG-EKD instead of or in addition to GDPR. The platform's privacy architecture must be flexible enough to accommodate this. |

11b. Authentication

Authentication (delegated to Zitadel — see Section 6):
├── API: JWT (short-lived access tokens, 15min) + refresh tokens (7-30 days)
│   └── Issued by Zitadel, validated by Symfony against JWKS endpoint
├── Admin: OIDC login via Zitadel (SSO into admin backend)
├── Passkeys / FIDO2: phishing-resistant, handled entirely by IdP
├── Social login: Google, Apple — configured in IdP, zero code in Symfony
├── MFA: TOTP + passkeys (mandatory for denomination admins, optional for members)
└── Device management: revoke sessions per device (via IdP API)

11c. Authorization & Permissions

Authorization must be fine-grained, auditable, and denomination-scoped. A group leader in ICF Zürich must never accidentally see FEG Basel’s member data.

Permission Model:
├── Role Hierarchy (RBAC)
│   ├── superadmin          — platform operator (you). Access to everything.
│   ├── denomination_admin  — manages all churches in a denomination.
│   ├── church_admin        — manages one church location.
│   ├── group_leader        — manages their groups and sees group members.
│   ├── member              — regular church member. Sees their own data + public content.
│   └── guest               — unauthenticated or pre-registration. Sees public content only.

├── Resource-Level Permissions
│   ├── Every API endpoint checks: "Does this user's role + scope grant access to this resource?"
│   ├── Scope = denomination_id + church_id (extracted from JWT claims)
│   ├── A church_admin for church_id=5 cannot access church_id=6's data, even with valid JWT
│   ├── Group leaders see members of their groups only, not the full church directory
│   │   (unless church config allows open directory)
│   └── Members see other members based on privacy settings (see Privacy section below)

├── PostgreSQL Row-Level Security (defense-in-depth)
│   ├── RLS policies on all multi-tenant tables enforce church_id filtering at DB level
│   ├── Even if application code has a bug, the database won't return wrong-church data
│   ├── Example:
│   │   CREATE POLICY church_isolation ON people
│   │     USING (church_id = current_setting('app.current_church_id')::uuid);
│   └── app.current_church_id set per-request from JWT claims by Symfony middleware

├── API Tokens for Integrations
│   ├── Scoped per-denomination and per-permission set (read-only, specific modules)
│   ├── Revocable, audited, expiring
│   └── Example: ChurchTools adapter gets read/write on people + events, nothing else

└── Permission Configuration
    ├── Denomination admins can customize role permissions per denomination
    ├── Example: FEG allows group leaders to see full member profiles.
    │   EMK restricts group leaders to name + photo only.
    └── Stored as JSONB config per denomination, applied at authorization check time.
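A hedged sketch of the resource-level check described above — role names mirror the hierarchy, and the per-denomination JSONB overrides are reduced to a static rank table for illustration:

```python
# Illustrative pseudocode; real checks also consult per-denomination config.
ROLE_RANK = {
    "guest": 0, "member": 1, "group_leader": 2,
    "church_admin": 3, "denomination_admin": 4, "superadmin": 5,
}

def can_access(actor: dict, resource: dict, required_role: str) -> bool:
    """actor carries role + denomination_id + church_id from JWT claims."""
    if actor["role"] == "superadmin":
        return True                               # platform operator
    if actor["denomination_id"] != resource["denomination_id"]:
        return False                              # hard denomination isolation
    # denomination_admin spans churches; every role below is church-scoped
    if ROLE_RANK[actor["role"]] < ROLE_RANK["denomination_admin"]:
        if actor["church_id"] != resource["church_id"]:
            return False                          # wrong church, even with valid JWT
    return ROLE_RANK[actor["role"]] >= ROLE_RANK[required_role]
```

This is the application-level half; the RLS policies above are the database-level half of the same rule, so a bug in one layer is caught by the other.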

11d. The Observe & Protect Pipeline

Audit logging, change tracking, content moderation, and compliance monitoring are all the same architectural problem: every state change and every piece of user-generated content must flow through a single pipeline where multiple independent consumers process it for different purposes. Build this once at the infrastructure level — every module gets auditing, history, and moderation for free.

The core insight: Your event-driven architecture already requires that every domain action emits an event. The Observe & Protect Pipeline is simply a set of always-on consumers that listen to ALL events. No module needs to “call” the audit logger or “check” the moderation service. It happens automatically, at the infrastructure level, for everything.

┌─────────────────────────────────────────────────────────────────────┐
│                    THE OBSERVE & PROTECT PIPELINE                     │
│                                                                      │
│  Every domain event and every content submission flows through       │
│  this pipeline. Modules don't opt-in — it's infrastructure.         │
│                                                                      │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐  │
│  │  People  │ │  Groups  │ │   News   │ │   Chat   │ │  Giving  │  │
│  │  Module  │ │  Module  │ │  Module  │ │  Module  │ │  Module  │  │
│  └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘  │
│       │            │            │            │            │         │
│       ▼            ▼            ▼            ▼            ▼         │
│  ┌─────────────────────────────────────────────────────────────┐    │
│  │                      EVENT BUS                              │    │
│  │              (Redis Streams / Redpanda)                      │    │
│  │                                                             │    │
│  │  Events: member.updated, post.published, chat.message.sent, │    │
│  │  donation.received, group.membership.changed, ...           │    │
│  └──────┬──────────┬──────────┬──────────┬──────────┬──────┘    │
│         │          │          │          │          │            │
│         ▼          ▼          ▼          ▼          ▼            │
│   ┌──────────┐┌──────────┐┌──────────┐┌──────────┐┌──────────┐ │
│   │  AUDIT   ││ HISTORY  ││MODERATION││TRANSLATE ││ NOTIFY   │ │
│   │  LOG     ││ TRACKER  ││ ENGINE   ││ WORKER   ││ DISPATCH │ │
│   │          ││          ││          ││          ││          │ │
│   │ Who did  ││ What was ││ Is this  ││ Auto-    ││ Push,    │ │
│   │ what,    ││ the state││ content  ││ translate ││ email,   │ │
│   │ when,    ││ at each  ││ safe?    ││ to other ││ in-app   │ │
│   │ where?   ││ version? ││ Harmful? ││ languages││          │ │
│   │          ││          ││ Spam?    ││          ││          │ │
│   └──────────┘└──────────┘└──────────┘└──────────┘└──────────┘ │
│        │           │           │            │           │        │
│        ▼           ▼           ▼            ▼           ▼        │
│   audit_log   entity_     moderation_  translations  push/email │
│   (append     history     queue        JSONB column             │
│    only)      (snapshots) (flagged                              │
│                            content)                             │
└─────────────────────────────────────────────────────────────────────┘

Why this is powerful:

  1. Zero effort per module. When you add a new module (e.g., Pinboard), it emits events like everything else. Audit logging, change tracking, moderation, and translation happen automatically. No integration code needed.
  2. Consumers are independent. The Audit consumer can be down for maintenance — Moderation and Translation keep working. Consumers process at their own pace.
  3. One content pipeline for all user-generated content. Chat messages, news posts, pinboard items, prayer requests, event descriptions — all pass through the same moderation engine. One policy, consistently applied.
  4. Retroactive processing. If you add a new moderation rule, you can replay historical events through it (this is where Redpanda’s event replay becomes valuable in Phase 3).

11d-1. Audit Log Consumer

Consumes ALL events. Writes an immutable append-only record for every action.

CREATE TABLE audit_log (
    id BIGSERIAL PRIMARY KEY,
    timestamp TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    
    -- WHO
    actor_id UUID,
    actor_type VARCHAR(20) NOT NULL,     -- 'user', 'admin', 'system', 'api_token'
    actor_ip INET,
    
    -- WHAT
    action VARCHAR(50) NOT NULL,          -- from the domain event type
    
    -- WHERE
    resource_type VARCHAR(30) NOT NULL,
    resource_id UUID,
    church_id UUID NOT NULL,
    denomination_id UUID NOT NULL,
    
    -- CONTEXT
    details JSONB DEFAULT '{}',           -- old/new values, metadata
    
    -- IMMUTABILITY: NO UPDATE/DELETE permissions. Period.
);

CREATE INDEX idx_audit_church_time ON audit_log(church_id, timestamp DESC);
CREATE INDEX idx_audit_actor ON audit_log(actor_id, timestamp DESC);
CREATE INDEX idx_audit_resource ON audit_log(resource_type, resource_id);
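The consumer's job is a mechanical mapping from event envelope to audit row. A sketch, assuming a hypothetical envelope shape (`type`, `church_id`, `payload`, …) — field names follow the table above:

```python
import json
from datetime import datetime, timezone

def to_audit_row(event: dict) -> dict:
    """Map a domain event to an audit_log row (envelope shape is assumed)."""
    return {
        "timestamp": event.get("occurred_at")
                     or datetime.now(timezone.utc).isoformat(),
        "actor_id": event.get("actor_id"),
        "actor_type": event.get("actor_type", "system"),
        "action": event["type"],                   # e.g. "member.updated"
        "resource_type": event["type"].split(".")[0],
        "resource_id": event.get("resource_id"),
        "church_id": event["church_id"],
        "denomination_id": event["denomination_id"],
        "details": json.dumps(event.get("payload", {})),
    }
```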

What gets audited (everything — because it’s automatic):

| Category | Example Events | Details |
|---|---|---|
| Auth | login.succeeded, login.failed, passkey.registered | IP, device, method, failure reason |
| Member data | member.viewed, member.updated, member.deleted | Fields accessed, old/new values |
| Groups | membership.added, role.changed | Who, by whom, old/new role |
| Content | post.published, post.edited, chat.message.sent | Content hash, language |
| Giving | donation.received, receipt.generated | Amount, fund (NOT card details) |
| Admin | permission.changed, settings.updated | Old/new config |
| Privacy | export.requested, deletion.executed | What was affected |
| Moderation | content.flagged, content.blocked, content.approved | Reason, confidence score |

11d-2. History Tracker Consumer

Consumes entity change events. Writes versioned snapshots for entities that benefit from full history.

CREATE TABLE entity_history (
    id BIGSERIAL PRIMARY KEY,
    entity_type VARCHAR(30) NOT NULL,
    entity_id UUID NOT NULL,
    version INT NOT NULL,
    changed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    changed_by UUID,
    change_type VARCHAR(10) NOT NULL,     -- 'created', 'updated', 'deleted'
    snapshot JSONB NOT NULL,               -- full state at this version
    diff JSONB,                            -- changed fields only
    
    UNIQUE(entity_type, entity_id, version)
);

Configurable per entity type — not everything needs full snapshots. Members, groups, church settings: yes. Individual chat messages: no (audit log is enough).

Church admin UX: “History” tab on member profiles and group pages showing a timeline of who changed what, when, with the ability to view any previous state.
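The `diff` column can be derived from consecutive snapshots with a symmetric field comparison — a minimal sketch:

```python
def snapshot_diff(old: dict, new: dict) -> dict:
    """Return only the fields whose value changed, as old/new pairs."""
    diff = {}
    for key in old.keys() | new.keys():     # union: catches added/removed fields
        if old.get(key) != new.get(key):
            diff[key] = {"old": old.get(key), "new": new.get(key)}
    return diff
```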

11d-3. Content Moderation Consumer

This is the new piece. Every user-generated content event is analyzed for safety. Chat messages, news posts, pinboard items, prayer requests, event descriptions — all go through the same engine.

Content Moderation Flow:

  User submits content


  ┌──────────────┐
  │ Domain Module │
  │ stores content│
  │ status:       │
  │ "published"   │──── For most content, publish immediately.
  │ or "pending"  │     Churches can enable "review required" per content type.
  └──────┬───────┘
         │ Emits event: content.submitted

  ┌──────────────────────────────────────────────────────────┐
  │              MODERATION ENGINE (async consumer)           │
  │                                                          │
  │  1. Keyword filter (fast, regex-based)                   │
  │     ├── Configurable blocklist per denomination           │
  │     ├── Built-in list: profanity, slurs, hate speech      │
  │     └── Result: pass / flag / block                       │
  │                                                          │
  │  2. AI classification (if keyword filter passes)          │
  │     ├── Phase 1: OpenAI Moderation API (free)            │
  │     │   └── Categories: hate, harassment, self-harm,      │
  │     │       sexual, violence — with confidence scores     │
  │     ├── Phase 2: Self-hosted model (e.g. detoxify,       │
  │     │   or fine-tuned classifier on your data)            │
  │     └── Result: safe / flagged / blocked + confidence     │
  │                                                          │
  │  3. Pattern detection (behavioral, not content-based)     │
  │     ├── Spam: same user posting identical content rapidly  │
  │     ├── Harassment: user sending many DMs to someone who  │
  │     │   hasn't responded (one-sided messaging pattern)     │
  │     ├── Grooming signals: adult → minor DM patterns       │
  │     │   (if age data available)                            │
  │     └── Bulk export: unusual data access patterns          │
  │                                                          │
  │  Decision matrix:                                         │
  │  ├── SAFE (all checks pass) → no action, content stays    │
  │  ├── FLAGGED (borderline) → content stays visible,        │
  │  │   queued for human review, church admin notified        │
  │  ├── BLOCKED (high confidence harmful) → content hidden,   │
  │  │   author notified, church admin notified                │
  │  └── All decisions audit-logged with reasoning             │
  └──────────────────────────────────────────────────────────┘

Moderation queue in admin backend:

Moderation Queue (church admin view):
├── Flagged content list (newest first)
│   ├── Content preview (text, image thumbnail)
│   ├── Author name + profile link
│   ├── Reason flagged (keyword match, AI classification, behavioral pattern)
│   ├── Confidence score
│   ├── Context (which group/chat/pinboard, thread context)
│   └── Actions: Approve / Remove / Warn author / Block author

├── Blocked content list (auto-blocked, for review)
│   └── Same as above — admin can override auto-block if false positive

├── Author history
│   ├── How many posts flagged/blocked for this user
│   ├── Previous moderation actions taken against them
│   └── Suggested action based on history (first offense → warn, repeat → restrict)

└── Moderation settings (per denomination / per church)
    ├── Content types requiring pre-approval: none / pinboard / chat / all
    ├── Keyword blocklist: default + custom additions
    ├── AI sensitivity level: low / medium / high
    ├── Auto-block threshold: confidence > 0.9 auto-block, 0.7-0.9 flag, < 0.7 pass
    └── Notification: who gets notified when content is flagged
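The auto-block threshold setting above reduces to a small pure function; a hypothetical sketch (class name and defaults are illustrative, not the actual implementation):

```php
// Hypothetical sketch: map a classifier confidence score to a decision
// using the per-church thresholds configured in the settings tree above.
final class ThresholdPolicy
{
    public function __construct(
        private float $blockAbove = 0.9,   // confidence > 0.9 → auto-block
        private float $flagAbove = 0.7,    // 0.7–0.9 → flag for human review
    ) {}

    public function decide(float $confidence): string
    {
        if ($confidence > $this->blockAbove) {
            return 'blocked';
        }
        if ($confidence >= $this->flagAbove) {
            return 'flagged';
        }
        return 'safe';
    }
}
```

Keeping the policy pure makes the per-church "AI sensitivity level" a matter of injecting different threshold values, and makes the decision matrix trivially unit-testable.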

Chat-specific moderation considerations:

Chat messages are high-volume and must be moderated without perceptible latency. The approach:

  1. Messages are published immediately (never hold a chat message for pre-moderation — it kills the UX).
  2. Moderation runs async (< 500ms behind). If a message is flagged, it’s hidden retroactively with a “[This message was removed by moderation]” placeholder.
  3. Behavioral patterns matter more than individual messages in chat. A single “damn” is fine. A user sending 50 aggressive messages to one person in an hour is harassment. The pattern detector watches for:
    • One-sided messaging (10 messages sent, 0 replies)
    • Escalating negativity in a conversation (sentiment analysis)
    • Rapid-fire messages to multiple different users (spam/harassment campaign)
    • Messages to minors from adults flagged for higher scrutiny (if birth date data available)
  4. Reporting by members. Any member can long-press a message → “Report” → select reason → goes to moderation queue. This is the most important moderation tool: community self-policing.
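The one-sided-messaging detector from point 3 works on message metadata alone; a sketch, assuming a hypothetical `chat_message(sender_id, recipient_id, created_at)` table rather than the actual schema:

```sql
-- Sketch: flag sender→recipient pairs with 10+ DMs in the last hour
-- and zero replies in the same window. Metadata only — message
-- content is never read by this check.
SELECT m.sender_id, m.recipient_id, COUNT(*) AS sent
FROM chat_message m
WHERE m.created_at > NOW() - INTERVAL '1 hour'
GROUP BY m.sender_id, m.recipient_id
HAVING COUNT(*) >= 10
   AND NOT EXISTS (
       SELECT 1
       FROM chat_message r
       WHERE r.sender_id = m.recipient_id
         AND r.recipient_id = m.sender_id
         AND r.created_at > NOW() - INTERVAL '1 hour'
   );
```

Run periodically (or on a message-count trigger), results feed straight into the moderation queue as behavioral flags.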

The cost question:

Moderation layer | Solution | Cost
Keyword filter | Self-built (regex + blocklist) | €0
AI text classification | OpenAI Moderation API | €0 (genuinely free, no rate limits worth worrying about at your scale)
Behavioral patterns | Self-built (SQL queries on message metadata) | €0
Image moderation | Phase 2: OpenAI Vision API or self-hosted NSFW classifier | €0-20/mo
Member reporting | Self-built (flag + queue) | €0

OpenAI’s Moderation endpoint is free, unlimited, and surprisingly good. It classifies text into: hate, hate/threatening, harassment, harassment/threatening, self-harm, sexual, sexual/minors, violence, violence/graphic. Each with a confidence score. It’s the rare case where a free external API genuinely has no viable self-hosted equivalent at the same quality — but because it only receives the text content (not user identity or metadata), the GDPR exposure is minimal. For absolute data sovereignty, swap to a self-hosted model (like detoxify or a fine-tuned BERT classifier) in Phase 3.
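A minimal call sketch using Symfony HttpClient (response fields abbreviated; verify exact shapes against the current OpenAI API reference before relying on them):

```php
use Symfony\Component\HttpClient\HttpClient;

// Sketch: classify one piece of text via the Moderation endpoint.
// Only the raw text is sent — no user identity, no metadata.
$response = HttpClient::create()->request('POST', 'https://api.openai.com/v1/moderations', [
    'auth_bearer' => $_ENV['OPENAI_API_KEY'],
    'json' => ['input' => $textContent],
]);

$result = $response->toArray()['results'][0];
// $result['flagged']         → bool (any category over OpenAI's threshold)
// $result['category_scores'] → per-category confidence, e.g.
//                              ['hate' => 0.0003, 'harassment' => 0.92, ...]
$maxScore = max($result['category_scores']);
```

The `$maxScore` value is what the decision matrix compares against the per-church auto-block threshold.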

10d-4. Implementing the Pipeline in Symfony

The beauty of this approach: it’s just Symfony Messenger consumers. No new infrastructure needed.

// Every domain event already flows through the event bus.
// These consumers are registered once and process everything automatically.

// 1. AUDIT CONSUMER — logs every event
#[AsMessageHandler]
class AuditLogConsumer
{
    public function __invoke(DomainEvent $event): void
    {
        // Every DomainEvent has: actor, action, resource, church_id, denomination_id
        // Write to audit_log table. That's it.
        $this->auditRepository->append(
            action: $event->getAction(),
            actorId: $event->getActorId(),
            resourceType: $event->getResourceType(),
            resourceId: $event->getResourceId(),
            churchId: $event->getChurchId(),
            details: $event->getDetails(),
        );
    }
}

// 2. HISTORY CONSUMER — snapshots entity changes
#[AsMessageHandler]
class EntityHistoryConsumer
{
    public function __invoke(EntityChangedEvent $event): void
    {
        if (!$this->isTrackedEntity($event->getResourceType())) return;
        
        $this->historyRepository->writeSnapshot(
            entityType: $event->getResourceType(),
            entityId: $event->getResourceId(),
            changedBy: $event->getActorId(),
            snapshot: $event->getNewState(),
            diff: $event->getChangedFields(),
        );
    }
}

// 3. MODERATION CONSUMER — checks user-generated content
#[AsMessageHandler]
class ContentModerationConsumer
{
    public function __invoke(ContentSubmittedEvent $event): void
    {
        $content = $event->getTextContent();
        $result = $this->moderationPipeline->analyze($content);
        
        // $result = { decision: 'safe'|'flagged'|'blocked', reasons: [...], confidence: 0.95 }
        
        if ($result->decision === 'blocked') {
            $this->contentService->hideContent($event->getResourceId());
            $this->moderationQueue->enqueue($event, $result);   // blocked items land in the review queue too
            $this->notifyChurchAdmin($event->getChurchId(), $event, $result);
        }
        
        if ($result->decision === 'flagged') {
            $this->moderationQueue->enqueue($event, $result);
            $this->notifyChurchAdmin($event->getChurchId(), $event, $result);
        }
        
        // Always audit-log the moderation decision
        // (happens automatically via the AuditLogConsumer since we emit an event)
        $this->eventBus->publish(new ContentModeratedEvent(
            resourceId: $event->getResourceId(),
            decision: $result->decision,
            reasons: $result->reasons,
        ));
    }
}

// 4. The Moderation Pipeline itself — composable checks
class ModerationPipeline
{
    private array $checks;
    
    public function __construct(
        private KeywordFilter $keywordFilter,
        private AIModerationCheck $aiCheck,          // OpenAI Moderation API
        private BehavioralPatternDetector $patterns,
    ) {
        $this->checks = [$keywordFilter, $aiCheck, $patterns];
    }
    
    public function analyze(string $content, ?ContentContext $context = null): ModerationResult
    {
        foreach ($this->checks as $check) {
            $result = $check->check($content, $context);
            if ($result->decision !== 'safe') {
                return $result;  // Short-circuit on first flag/block
            }
        }
        return ModerationResult::safe();
    }
}

The key architectural property: Adding a new consumer (say, an analytics consumer that tracks engagement metrics) requires writing one class, registering it as a Symfony Messenger handler, and deploying. Zero changes to any existing module. Zero changes to the event bus. The pipeline extends by addition, never by modification.

10e. Member Privacy Controls

Members control their own data visibility. This isn’t just GDPR compliance — it’s respect for the person.

Member Privacy Settings (per member, configurable in app profile):
├── Profile visibility
│   ├── Name: always visible to church members (required for community)
│   ├── Photo: visible to church / group only / hidden
│   ├── Email: visible to church / group leaders only / hidden
│   ├── Phone: visible to church / group leaders only / hidden
│   ├── Address: hidden by default, visible only if explicitly shared
│   └── Birthday: visible to church / group only / hidden

├── Directory discoverability
│   ├── Searchable in church directory: yes / no
│   ├── Visible in group member lists: yes (name only) / yes (full profile) / no
│   └── Contactable via in-app chat by non-group-members: yes / no

├── Notification preferences
│   ├── Per channel: push / email / in-app / off
│   ├── Per level: denomination / church / group
│   ├── Per type: announcements / events / chat / giving receipts
│   └── Digest mode: instant / daily digest / weekly digest

├── Data & consent
│   ├── View all stored personal data (GDPR Art. 15 — right of access)
│   ├── Export all personal data as JSON (GDPR Art. 20 — data portability)
│   ├── Request account deletion (GDPR Art. 17 — right to erasure)
│   │   └── Triggers: domain event → cascade across all modules → audit logged
│   ├── Consent tracking: what they consented to, when, which version of privacy policy
│   └── Withdraw consent: per-purpose granular withdrawal

└── Defaults
    ├── New members start with privacy-maximizing defaults
    ├── Church can configure default visibility per denomination
    └── Member can always override to be MORE private, never forced to be less

10f. Data Protection Architecture

Data Protection:
├── Encryption at rest
│   ├── PostgreSQL: disk-level encryption (Hetzner encrypted storage)
│   ├── Backups: encrypted before upload to R2 (GPG or age encryption)
│   └── S3/R2 media: server-side encryption enabled

├── Encryption in transit
│   ├── TLS 1.3 everywhere (Cloudflare edge + internal services)
│   ├── Database connections: SSL required (reject non-SSL connections)
│   └── Internal Docker network: encrypted if services span multiple hosts

├── PII handling
│   ├── PII fields explicitly marked in Doctrine entity annotations
│   ├── PII never logged in application logs (Symfony Monolog processor strips PII)
│   ├── PII never included in error reports (Sentry data scrubbing configured)
│   └── DB clone script sanitizes all PII before copying to staging

├── Data residency
│   ├── All infrastructure: Hetzner Germany (Nuremberg / Falkenstein)
│   ├── CDN edge: Cloudflare (data cached globally but origin stays in DE)
│   ├── No data leaves EU/CH — this is a contractual guarantee to customers
│   └── DeepL API: EU-hosted (DeepL is a German company, data processed in EU)

├── Data minimization
│   ├── Collect only what's needed. No tracking pixels, no analytics mining.
│   ├── Giving: store transaction reference, amount, fund. NOT card details (Stripe handles that).
│   ├── Chat: messages stored for history. Configurable retention per church.
│   └── Audit logs: retained 2 years (configurable per denomination), then purged.

├── Right to deletion (Art. 17 GDPR)
│   ├── Member requests deletion → domain event: member.deletion_requested
│   ├── All modules consume this event and delete/anonymize their data:
│   │   ├── People module: anonymize profile (name → "Deleted User", clear PII)
│   │   ├── Chat module: anonymize sender on messages (content optionally deleted)
│   │   ├── Giving module: anonymize donor identity (keep transaction for accounting)
│   │   ├── News module: anonymize author on posts
│   │   ├── Audit module: anonymize actor_id (keep audit record for compliance)
│   │   └── Search module: remove from search index
│   ├── IdP (Zitadel): delete user account via API
│   ├── Deletion confirmation sent to member
│   └── Entire process audit-logged (ironic but required)

├── Data portability (Art. 20 GDPR)
│   ├── Member requests export → generates JSON file containing:
│   │   ├── Profile data, group memberships, event registrations
│   │   ├── Chat messages sent by them
│   │   ├── Giving history (their donations)
│   │   ├── Sermon notes
│   │   └── Notification preferences
│   ├── File encrypted, download link sent via email (expires in 48h)
│   └── Audit-logged

├── Breach notification
│   ├── Automated breach detection: unusual access patterns, bulk data exports
│   ├── Incident response playbook documented
│   ├── GDPR: notify supervisory authority within 72 hours
│   ├── nDSG: notify FDPIC "as soon as possible"
│   └── Affected members notified with clear language about what happened

└── DPA (Data Processing Agreement)
    ├── Provided to every denomination as part of onboarding
    ├── Covers: data processing purposes, sub-processors (Hetzner, Cloudflare, DeepL, Stripe)
    ├── Updated when sub-processors change
    └── Template in admin backend, signable digitally
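Each module's side of the erasure cascade is an ordinary event consumer. A sketch for the People module — the handler, event, and repository names are hypothetical, not the actual implementation:

```php
#[AsMessageHandler]
final class MemberDeletionConsumer
{
    public function __construct(private MemberRepository $members) {}

    public function __invoke(MemberDeletionRequestedEvent $event): void
    {
        // Anonymize rather than hard-delete: group history and event
        // registrations keep a valid (but unidentifiable) reference,
        // matching the "name → 'Deleted User'" rule above.
        $this->members->anonymize(
            memberId: $event->getMemberId(),
            replacementName: 'Deleted User',
        );
        // The anonymization write emits its own domain event, which the
        // audit consumer logs — so the process is audit-logged for free.
    }
}
```

Every other module (Chat, Giving, News, Search) registers an equivalent handler for the same event, which is what makes the cascade a property of the event bus rather than a cross-module orchestration.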

10g. Giving / Payment Security

├── PCI DSS: use Stripe / TWINT provider's hosted fields — NEVER touch card data
├── Payment service is strictly isolated (own module, minimal API surface)
├── Financial audit trail: immutable append-only log of all transactions
├── Giving receipts: generated as PDF, stored encrypted in R2
└── Annual tax summaries: generated per member, downloadable in app

10h. API Security

├── Rate limiting: per-tenant, per-endpoint (Redis-based)
├── Input validation: Symfony Validator on every endpoint, no raw user input in queries
├── SQL injection: Doctrine parameterized queries (never raw SQL)
├── CORS: strict origin whitelist (app domains + admin domain only)
├── Webhook signing: HMAC-SHA256 signatures on all outgoing webhooks
├── API versioning: breaking changes behind version prefix (v1, v2)
└── Request logging: all API requests logged with actor, endpoint, response code (not body)
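The webhook-signing item above needs nothing beyond PHP's standard library; a sketch (the header name is illustrative):

```php
// Sender side: sign the raw JSON body with the per-consumer secret.
$signature = hash_hmac('sha256', $payloadJson, $webhookSecret);
// Sent as e.g.:  X-Webhook-Signature: sha256=<hex digest>

// Receiver side: recompute over the raw request body and compare in
// constant time — hash_equals() prevents timing attacks.
$expected = hash_hmac('sha256', $rawRequestBody, $webhookSecret);
$valid = hash_equals($expected, $receivedSignature);
```

The receiver must verify against the raw bytes, not a re-serialized copy of the parsed JSON, or signatures break on key-ordering and whitespace differences.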

10i. Operational Security

├── Secret management: environment variables via Docker secrets or .env (never in code, never in git)
├── SSH access: key-only, no password auth, fail2ban enabled
├── Dependency scanning: automated via GitLab Dependency Scanning + composer audit (in CI pipeline)
├── Container images: minimal base images, non-root user, read-only filesystem where possible
├── Penetration testing: annually (budget for it once you have revenue)
├── Incident response plan: documented in repo, with contact chain and escalation path
└── Backup testing: weekly automated restore test (restore backup to throwaway server, run health check)

10j. Privacy as a Feature (Marketing Angle)

This architecture enables a powerful trust story for sales conversations:

  • “Your data never leaves European servers.” Hetzner Germany, Cloudflare EU, DeepL EU. No AWS, no Google Cloud, no US-jurisdiction risk.
  • “Every access to member data is logged.” Show the audit log to a skeptical church council and watch trust build instantly.
  • “Members control their own privacy.” They choose what’s visible, who can contact them, and can delete their account with one tap.
  • “We comply with GDPR, nDSG, and church-specific data protection law.” Awareness of the KDG and DSG-EKD signals serious commitment.
  • “Full data export anytime.” No lock-in. Your data is yours. Export as JSON, take it anywhere.
  • “We provide a signed Data Processing Agreement.” Professional, compliant, ready for your church council’s legal review.

For Swiss and German churches, this section of the product pitch may close more deals than any feature demo.


12. Architectural Decisions & Challenges

Decision 1: Event-Driven — Yes, But Incrementally

You said: “event driven architecture with self contained systems, maybe kafka.”

Challenge: Event-driven architecture is the right target state. But Kafka on day one is an anti-pattern for a startup. Here’s why:

  • Kafka requires ZooKeeper (or KRaft), brokers, schema registry, and operational knowledge. It’s a platform, not a library.
  • At MVP scale (< 100 events/second), Redis Streams or even PostgreSQL LISTEN/NOTIFY provides the same decoupling benefits with zero operational overhead.
  • The architectural benefit of events (loose coupling between services) is independent of the transport. You can emit events to Redis Streams today and swap to Kafka later without changing your domain code — if you design the abstraction right.

Recommended approach:

// Domain event interface — transport-agnostic
interface DomainEventBus {
    public function publish(DomainEvent $event): void;
    public function subscribe(string $eventType, callable $handler): void;
}

// Phase 1: Redis Streams implementation
class RedisStreamEventBus implements DomainEventBus { ... }

// Phase 2+: Redpanda implementation (Kafka-protocol-compatible, drop-in replacement)
class RedpandaEventBus implements DomainEventBus { ... }
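For orientation, the Phase 1 class might look like this — a sketch using phpredis streams; the stream-key layout, JSON envelope, and `getType()` accessor are illustrative assumptions:

```php
// Sketch of the Phase 1 implementation backed by Redis Streams.
class RedisStreamEventBus implements DomainEventBus
{
    public function __construct(private \Redis $redis) {}

    public function publish(DomainEvent $event): void
    {
        // XADD appends to the stream; '*' lets Redis assign the entry ID.
        $this->redis->xAdd('events.' . $event->getType(), '*', [
            'payload' => json_encode($event),
        ]);
    }

    public function subscribe(string $eventType, callable $handler): void
    {
        // In practice this is a long-running worker doing XREADGROUP
        // against a consumer group, so the same stream can feed the
        // Audit, History, and Moderation consumers independently.
    }
}
```

Because the domain code only sees `DomainEventBus`, swapping this class for the Redpanda implementation later touches service wiring, not business logic.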

When to move beyond Redis Streams:

  • Sustained throughput > 10,000 events/minute
  • Need for event replay (reprocessing historical events)
  • Need for exactly-once delivery guarantees
  • Multiple consumer groups processing the same events differently

Decision: Redpanda over Kafka. Redpanda is NOT managed Kafka — it’s a completely separate product rewritten from scratch in C++ that speaks the Kafka protocol. Any Kafka client library (including PHP’s php-rdkafka) works unchanged. The key differences:

 | Kafka | Redpanda
Language | Java (JVM) | C++ (no JVM)
ZooKeeper | Required (or KRaft) | Not needed
Deployment | Multiple processes | Single binary
Operational complexity | High (JVM tuning, GC pauses) | Low (one binary, one config)
Management UI | Third-party (AKHQ, Kafka UI) | Built-in Redpanda Console
API compatibility | Is the Kafka API | Implements the Kafka wire protocol
License | Apache 2.0 | BSL 1.1 → Apache 2.0 after 4 years

Redpanda Console (formerly Kowl) is a built-in web UI for topic management, consumer group monitoring, message browsing with JavaScript filters, ACL management, and schema registry. It covers everything you’d need from a Kafka management UI.

The event bus abstraction ensures the swap from Redis Streams to Redpanda is a config change, not a rewrite.

Decision 2: DDD Modular Monolith, Deptrac Enforcement, Packages Later

Revised after critical analysis. The original plan had 15+ Composer packages at MVP. This was over-engineering for Phase 1. The revised approach:

Phase 1 (MVP): DDD-structured directory layout + Deptrac boundary enforcement in CI. One composer.json, one project. Boundaries are architectural (namespace rules, layer dependencies) not packaging. See Section 4 “Module System Architecture” for the full module contract, examples, and feature flag system.

Phase 2 (Growth): Extract proven, stable modules to Composer packages when independent deployment is justified.

Phase 3 (Scale): Extract services that need independent runtimes (Chat → own service, Moderation → own service). JSON-serializable events from day one make this a deployment operation, not a rewrite.

DDD is applied in every module. See Section 4 for the full directory structure. The key layers per module:

  • Domain/ — Pure PHP. Aggregates, value objects, domain events, repository interfaces. Zero framework dependencies. The Domain layer of the Groups module cannot import Doctrine, Symfony, or any infrastructure class. Deptrac enforces this.
  • Application/ — Use cases. Command/query handlers that orchestrate domain logic. May use other modules’ Contract interfaces. Never uses Infrastructure directly — depends on abstractions.
  • Infrastructure/ — Adapters. Doctrine repositories, external API clients, search indexers. Implements Domain interfaces.
  • Presentation/ — API controllers, HTTP DTOs. Thin layer that delegates to Application.
  • Contract/ — Public interfaces for other modules. The only part of a module that other modules may reference.

Deptrac runs in CI on every PR. Domain layers can only depend on Core/Contracts. Application layers can depend on their own Domain + other modules’ Contracts. Infrastructure implements Domain interfaces. No module can import another module’s Domain or Infrastructure. See the deptrac.yaml configuration in Section 4 for the full ruleset.

Events are JSON-serializable from day one. Every DomainEvent implements JsonSerializable with a version field for schema evolution. When a module is extracted to a service later, the event bus (Redpanda) carries JSON messages that any consumer in any language can read. This is the single most important decision for future extraction.
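A hypothetical event following this rule (the event name and fields are illustrative):

```php
final class GroupMemberJoinedEvent implements \JsonSerializable
{
    // Bump on schema changes; consumers branch on the version field.
    public const VERSION = 1;

    public function __construct(
        public readonly string $groupId,
        public readonly string $memberId,
        public readonly \DateTimeImmutable $occurredAt,
    ) {}

    public function jsonSerialize(): array
    {
        return [
            'event'       => 'group.member_joined',
            'version'     => self::VERSION,
            'group_id'    => $this->groupId,
            'member_id'   => $this->memberId,
            'occurred_at' => $this->occurredAt->format(DATE_ATOM),
        ];
    }
}
```

A Go or TypeScript consumer reading this payload off Redpanda needs only the JSON contract, never the PHP class.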

AI agents follow the .ai/ directory (see Section 2) for all code generation. Skill files define how to create a module, define an aggregate, write an event, scaffold an API endpoint. The guidelines are the codebase — the generated code is the artifact.

Decision 3: The Sunday Peak Problem

The scenario: 200 churches, average 300 members each. 10% use the app during Sunday service. That’s 6,000 concurrent users hitting the API within a 2-hour window, with spikes during worship (giving), sermon (notes, polls), and post-service (chat, pinboard).

The problem isn’t throughput — it’s latency. 6,000 users is modest for a well-configured PHP stack. The real issue is that all 6,000 users expect < 200ms response times while simultaneously streaming, chatting, polling, and giving.

Mitigation strategies:

  1. Aggressive response caching. News feeds, event lists, group data — these change infrequently. Cache at the CDN (Cloudflare) with 60-second TTL, and at Redis with 5-minute TTL. Sunday morning reads are 95% cache hits.
  2. Read/write separation. Reads go to PostgreSQL read replicas. Writes go to primary. Replication lag of < 1 second is acceptable for feeds.
  3. Precomputed feeds. Don’t compute “what should this member see?” on every request. When a post is published, push it into each relevant member’s feed (fan-out on write). The read path is a simple SELECT * FROM feed WHERE member_id = ? ORDER BY created_at DESC.
  4. WebSocket for live features, not polling. The live poll results, prayer wall, and slide sync must use WebSocket (Centrifugo), not HTTP polling. Polling at 6,000 users × every 2 seconds = 3,000 requests/second of pure waste.
  5. Giving is write-heavy but low-volume. Even 500 simultaneous donations is < 10 writes/second. PostgreSQL handles this trivially. The payment provider (Stripe, TWINT) is the bottleneck, not your database.
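Strategy 3 in SQL terms — a sketch; the `feed` and `group_membership` tables are illustrative, not the actual schema:

```sql
-- Sketch of fan-out on write (table layout illustrative).
CREATE TABLE feed (
    member_id  UUID NOT NULL,
    post_id    UUID NOT NULL,
    created_at TIMESTAMPTZ NOT NULL,
    PRIMARY KEY (member_id, post_id)
);

-- On publish: one row per audience member, done async via the event bus.
INSERT INTO feed (member_id, post_id, created_at)
SELECT gm.member_id, :post_id, NOW()
FROM group_membership gm
WHERE gm.group_id = :group_id;

-- The Sunday-morning read path is then a trivial index scan:
SELECT post_id
FROM feed
WHERE member_id = :member_id
ORDER BY created_at DESC
LIMIT 20;
```

The cost moves to publish time (rare, off-peak) and the read path (frequent, peak) becomes cacheable and index-only.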

Decision 4: Where PHP Ends and Sidecars Begin

Some components are better served by non-PHP technologies:

Component | Why not PHP | Recommended sidecar
WebSocket server | PHP lacks native long-lived connection support. Ratchet/Swoole exist but are niche. | Centrifugo (Go, open source, battle-tested)
Search engine | Full-text search in PostgreSQL works but degrades at scale. | Phase 1: PostgreSQL FTS (tsvector + pg_trgm). Phase 2: Meilisearch (Rust, fast, typo-tolerant). NOT ELK (overkill). NOT Algolia (managed = cost + GDPR processor).
Media processing | Image resizing and video transcoding are CPU-intensive and block PHP workers. | Async workers (FFmpeg via Symfony Messenger, or a separate Go service)
Chat (if self-built) | Real-time bidirectional messaging is not PHP’s strength. | Centrifugo or dedicated Node.js/Go service

13. MVP vs. Scale Phasing

What to build first (MVP — 3-4 months)

MVP Scope:
├── Mobile app (React Native / Expo)
│   ├── Auth via Zitadel (email + password, Google/Apple social login, passkeys ready)
│   ├── i18n via i18next (German + English at MVP, extensible)
│   ├── News feed (multi-level: church → group)
│   ├── Groups (list, detail, join, member list)
│   ├── Events (list, detail, RSVP)
│   ├── Push notifications (Firebase)
│   ├── Profile & settings (language preference)
│   └── Basic design system (10-12 components)

├── Identity Provider (Zitadel)
│   ├── Self-hosted on Hetzner (Docker) or Zitadel Cloud free tier
│   ├── One Organization per pilot denomination
│   ├── Email/password + Google + Apple login
│   ├── Passkeys enabled (optional for users)
│   ├── JWT tokens → validated by Symfony
│   └── Webhook on user.created → Symfony creates member profile

├── Backend (Symfony modular monolith)
│   ├── People module (profiles, directory) — consumes IdP webhooks
│   ├── Groups module (CRUD, hierarchy, memberships)
│   ├── Events module (CRUD, RSVP, group linking)
│   ├── News module (posts, announcements, feed)
│   ├── Notification module (push via Firebase)
│   └── REST API (OpenAPI documented, JWT-authenticated)

├── Admin backend (Symfony + Twig + Vue)
│   ├── Auth via Zitadel OIDC (SSO into admin)
│   ├── People list & management
│   ├── Group management
│   ├── Event management
│   ├── News composer & scheduler
│   └── Basic settings (branding, roles, languages)

├── Translation Management
│   ├── Static: i18next bundles (DE + EN), managed via Tolgee (self-hosted)
│   └── Dynamic: NOT in MVP (posts in church default language only)

├── Infrastructure (all automated via OpenTofu, ~€30-35/month total)
│   ├── Hetzner: App server (CPX21, €7.50) + DB server (CPX11, €4.50) + Staging (CPX11, €4.50)
│   ├── All services via Docker Compose: Symfony, Redis, Zitadel, Tolgee, Sentry OSS, Uptime Kuma
│   ├── Cloudflare free plan (CDN + R2 + DNS)
│   ├── GitLab CI/CD (self-hosted runner, free)
│   ├── OpenTofu for all infrastructure provisioning
│   ├── Automated DB backup → R2
│   ├── Prod → Staging DB clone script (sanitized)
│   └── Super admin dashboard (basic: denominations, health, user search)

└── NOT in MVP:
    ├── ❌ Dynamic content translation (DeepL — add in Phase 2)
    ├── ❌ Chat (skip entirely, or Stream Maker plan behind ChatProvider abstraction)
    ├── ❌ Giving (complex — payment provider integration, PCI scope)
    ├── ❌ Sunday live features (livestream embed is fine, live interaction is not)
    ├── ❌ Pinboard (can be added as a post type in News)
    ├── ❌ Webhooks (API is enough for MVP)
    ├── ❌ Meilisearch (PostgreSQL full-text search via tsvector + pg_trgm is fine for < 50k records)
    ├── ❌ Centrifugo (no real-time features yet beyond push notifications)
    ├── ❌ ChMS adapters
    ├── ❌ Kubernetes
    └── ❌ Redpanda (Redis Streams for event bus if needed, PostgreSQL LISTEN/NOTIFY otherwise)

Phase 2 (months 4-8): Core differentiators

├── Giving (TWINT, Stripe, QR code)
├── Dynamic content translation (DeepL API — auto-translate posts, events, announcements)
│   └── JSONB translations column on all content entities
├── Chat (Centrifugo + Symfony Chat module — migrate off Stream if used at MVP)
│   └── ~15-28 days effort. Abstract behind ChatProvider interface.
├── Sunday (sermon archive, notes, basic live features via Centrifugo)
├── Pinboard (as distinct feature)
├── Webhooks (outgoing, for API consumers)
├── Multi-denomination support (schema separation)
├── Centrifugo deployed (unified real-time layer for chat + Sunday + presence)
└── Second server + read replica

Phase 3 (months 8-14): Scale & differentiation

├── Sunday live interaction (polls, prayer wall, slide sync via Centrifugo)
├── AI features:
│   ├── Sermon transcription (Whisper, self-hosted) → auto-translated summaries
│   ├── Chat message translation (DeepL, near-instant, cached)
│   ├── LibreTranslate (self-hosted) for high-volume / lower-priority content
│   └── Smart content suggestions, engagement insights
├── Auth: Passkeys as primary login, SSO federation for denominations with existing IdP
├── ChMS adapters (ChurchTools first)
├── Meilisearch (if PostgreSQL FTS outgrown — likely at this scale)
├── Redis Streams → Redpanda migration (if event throughput justifies it)
│   └── NOT Kafka. Redpanda: same API, single C++ binary, no JVM, built-in Console UI.
├── Chat: E2E encryption for pastoral conversations, voice messages, AI moderation
├── Kubernetes (if load justifies it)
├── Additional languages via Tolgee community translation
├── Onboarding journeys
└── Church finder (denomination-level)

Cost Projection

Phase | Monthly infra cost | Monthly external services | Total | What you get
MVP | ~€17 (3 Hetzner servers) | ~€10 (domain, App Store fees) | ~€30-35/mo | Full prod + staging, monitoring, auth, CI/CD
Growth | ~€60 (5 servers + LB) | ~€25 (DeepL, R2 overage) | ~€80-120/mo | Chat, live features, translation, read replicas
Scale | ~€250 (dedicated servers, k3s) | ~€100 (DeepL, increased storage) | ~€300-800/mo | Full platform, 200+ churches

For comparison, the same setup on AWS would cost:

  • MVP: ~€200-400/month (EC2 + RDS + ALB + NAT Gateway + egress fees)
  • Growth: ~€800-1,500/month
  • Scale: ~€3,000-8,000/month

Hetzner saves you 5-10x at every phase. The savings compound: €150/month saved at MVP = €1,800/year = runway to hire a part-time contributor or attend a church tech conference.

Revenue needed to be profitable at each phase:

  • MVP: 1 paying church at €50/month covers infrastructure
  • Growth: 3 paying churches cover infrastructure
  • Scale: 10 paying churches cover infrastructure, everything above is margin

Key Risks & Mitigations

Risk | Likelihood | Impact | Mitigation
Over-engineering at MVP | High (your instinct for distributed systems is strong) | Delays launch by months | Strict MVP scope. Monolith first. No Redpanda, no Meilisearch, no K8s at MVP.
Sunday peak performance | Medium | App is slow when it matters most | Caching strategy + load testing with k6 before any Sunday pilot
Chat vendor lock-in (if using Stream at MVP) | Medium | Expensive to migrate later | Abstract chat behind ChatProvider interface from day one. Plan Centrifugo migration for Phase 2.
Hetzner server failure | Low | Downtime | Automated backups to R2 (off-server). OpenTofu can recreate entire infra from scratch in < 30 minutes.
Solo founder burnout | High | Project stalls | Ship MVP fast, get 5 pilot churches, then decide if it’s worth scaling. Automate everything.
Running out of money before revenue | Medium | Project dies | €30/mo infra = 3 years of runway on €1,000 savings. That’s the power of Hetzner + open source.
Self-hosted service maintenance overhead | Medium | Ops consumes dev time | Docker Compose + automated updates. Monitor via Uptime Kuma. Most self-hosted services (Zitadel, Sentry, Tolgee) have automated Docker upgrade paths.
GDPR incident | Low but catastrophic | Data breach kills trust | Encryption, RLS, audit logs, sanitized staging DB, incident response plan from day one
React Native perf ceiling | Low | One specific screen feels janky | Write that screen as a native module
Symfony Messenger at scale | Medium | Workers consume too much memory | Supervisor auto-restart, memory limits, monitor with Prometheus

Documents in This Architecture Series

  1. Product Vision (product-vision.md) — features, USPs, market positioning
  2. Architecture Blueprint (this document) — tech stack, infrastructure, decisions, DDD approach, AI-agentic development model
  3. Chat Deep Dive (chat-architecture-deep-dive.md) — Stream vs. Matrix vs. Centrifugo analysis, data model, phased migration plan
  4. Competitive Analysis (church-app-competitive-analysis.jsx) — interactive comparison of ChurchTools, Communiapp, Donkey Mobile
  5. Domain Model (next) — bounded contexts, aggregates, event catalog, entity relationships

This is a living document. Revisit after MVP launch and after each phase transition. Status: Architecture Blueprint v0.5 — Pre-MVP