Ouranos Lab

Red Panda Approved™ Infrastructure as Code

10 Incus containers named after moons of Uranus, provisioned with Terraform and configured with Ansible. Accessible at ouranos.helu.ca

Project Overview

Ouranos is an infrastructure-as-code project that provisions and manages a complete development sandbox environment. All infrastructure and configuration are tracked in Git for reproducible deployments.

DNS Domain: Incus resolves containers via the .incus suffix (e.g., oberon.incus). IPv4 addresses are dynamically assigned — always use DNS names, never hardcode IPs.
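
Since addresses change, anything that talks to a host should derive its endpoint from the DNS name. A tiny illustrative helper (not part of the repo; the function name is hypothetical) makes the pattern concrete:

```shell
#!/usr/bin/env sh
# Hypothetical helper (not part of the repo): derive service endpoints from
# the stable .incus DNS names instead of dynamically assigned IPv4 addresses.
endpoint() {
  printf 'http://%s.incus:%s\n' "$1" "$2"
}

endpoint portia 5432    # PostgreSQL on portia
endpoint sycorax 25540  # Arke LLM proxy on sycorax
```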

Terraform

Provisions the Uranian host containers with:

  • 10 specialised Incus containers (LXC)
  • DNS-resolved networking (.incus domain)
  • Security policies and nested Docker support
  • Port proxy devices and resource dependencies
  • Incus S3 buckets for object storage (Casdoor, LobeChat)
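
A single container resource in the Incus Terraform provider looks roughly like this (a sketch only: the image, port numbers, and device names are illustrative, not copied from the repo's *.tf files):

```hcl
# Sketch of one provisioned container (illustrative, not the repo's actual code).
resource "incus_instance" "oberon" {
  name  = "oberon"
  image = "images:debian/12"

  config = {
    "security.nesting" = "true" # required for nested Docker
  }

  # Proxy device exposing a container port on the Incus host
  device {
    name = "openwebui"
    type = "proxy"
    properties = {
      listen  = "tcp:0.0.0.0:22088"
      connect = "tcp:127.0.0.1:8080"
    }
  }
}
```
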

Ansible

Deploys and configures all services:

  • Docker engine on nested-capable hosts
  • Databases: PostgreSQL (Portia), Neo4j (Ariel)
  • Observability: Prometheus, Loki, Grafana (Prospero)
  • Application runtimes and LLM proxies
  • HAProxy TLS termination and Casdoor SSO (Titania)
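
The inventory groups hosts by role under their .incus names; a minimal sketch follows (the group names and file layout are assumptions, not the repo's actual inventory):

```yaml
# Hypothetical inventory sketch. Hosts are addressed by DNS name, never by IP.
all:
  children:
    databases:
      hosts:
        portia.incus:
        ariel.incus:
    observability:
      hosts:
        prospero.incus:
    proxy_sso:
      hosts:
        titania.incus:
```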

Uranian Host Architecture

Hosts Summary

| Name | Role | Key Services | Nesting |
|------|------|--------------|---------|
| ariel | graph_database | Neo4j 5.26.0 | Yes |
| caliban | agent_automation | Agent S MCP Server, Kernos, MATE Desktop, GPU | Yes |
| miranda | mcp_docker_host | MCPO, Grafana MCP, Gitea MCP, Neo4j MCP, Argos MCP | Yes |
| oberon | container_orchestration | MCP Switchboard, RabbitMQ, Open WebUI, SearXNG, Home Assistant, smtp4dev | Yes |
| portia | database | PostgreSQL 16 | No |
| prospero | observability | Prometheus, Loki, Grafana, PgAdmin, AlertManager | Yes |
| puck | application_runtime | JupyterLab, Gitea Runner, Django apps (6×) | Yes |
| rosalind | collaboration | Gitea, LobeChat, Nextcloud, AnythingLLM | Yes |
| sycorax | language_models | Arke LLM Proxy | Yes |
| titania | proxy_sso | HAProxy, Casdoor SSO, certbot | Yes |

oberon — Container Orchestration

King of the Fairies orchestrating containers and managing MCP infrastructure.

  • Docker engine
  • MCP Switchboard (port 22785) — Django app routing MCP tool calls
  • RabbitMQ message queue
  • Open WebUI LLM interface (port 22088, PostgreSQL backend on Portia)
  • SearXNG privacy search (port 22073, behind OAuth2-Proxy)
  • Home Assistant (port 8123)
  • smtp4dev SMTP test server (port 22025)

portia — Relational Database

Intelligent and resourceful — the reliability of relational databases.

  • PostgreSQL 16 (port 5432)
  • Databases: arke, anythingllm, gitea, hass, lobechat, mcp_switchboard, nextcloud, openwebui, spelunker

ariel — Graph Database

Air spirit — ethereal, interconnected nature mirroring graph relationships.

  • Neo4j 5.26.0 (Docker)
  • HTTP API: port 25554
  • Bolt: port 7687

puck — Application Runtime

Shape-shifting trickster embodying Python's versatility.

  • Docker engine
  • JupyterLab (port 22071 via OAuth2-Proxy)
  • Gitea Runner CI/CD agent
  • Django apps: Angelia (22281), Athena (22481), Kairos (22581), Icarlos (22681), Spelunker (22881), Peitho (22981)

prospero — Observability Stack

Master magician observing all events.

  • PPLG stack via Docker Compose: Prometheus, Loki, Grafana, PgAdmin
  • Internal HAProxy with OAuth2-Proxy for all dashboards
  • AlertManager with Pushover notifications
  • Prometheus node-exporter metrics from all hosts
  • Loki log aggregation via Alloy (all hosts)
  • Grafana with Casdoor SSO integration

miranda — MCP Docker Host

Curious bridge between worlds — hosting MCP server containers.

  • Docker engine (API on port 2375 for MCP Switchboard)
  • MCPO OpenAI-compatible MCP proxy
  • Grafana MCP Server — Grafana API integration (port 25533)
  • Gitea MCP Server (port 25535)
  • Neo4j MCP Server
  • Argos MCP Server — web search via SearXNG (port 25534)

sycorax — Language Models

Original magical power wielding language magic.

  • Arke LLM API Proxy (port 25540)
  • Multi-provider support (OpenAI, Anthropic, etc.)
  • Session management with Memcached
  • Database backend on Portia

caliban — Agent Automation

Autonomous computer agent learning through environmental interaction.

  • Docker engine
  • Agent S MCP Server (MATE desktop, AT-SPI automation)
  • Kernos MCP Shell Server (port 22021)
  • GPU passthrough for vision tasks
  • RDP access (port 25521)

rosalind — Collaboration Services

Witty and resourceful, hosting the PHP, Go, and Node.js runtimes.

  • Gitea self-hosted Git (port 22082, SSH on 22022)
  • LobeChat AI chat interface (port 22081)
  • Nextcloud file sharing and collaboration (port 22083)
  • AnythingLLM document AI workspace (port 22084)
  • Nextcloud data on dedicated Incus storage volume

titania — Proxy & SSO Services

Queen of the Fairies managing access control and authentication.

  • HAProxy 3.x with TLS termination (port 443)
  • Let's Encrypt wildcard certificate via certbot DNS-01 (Namecheap)
  • HTTP to HTTPS redirect (port 80)
  • Gitea SSH proxy (port 22022)
  • Casdoor SSO (port 22081, local PostgreSQL)
  • Prometheus metrics at :8404/metrics

External Access via HAProxy

Titania provides TLS termination and reverse proxy for all services. Base domain: ouranos.helu.ca — HTTPS port 443, HTTP port 80 (redirects to HTTPS). Certificate: Let's Encrypt wildcard via certbot DNS-01 (Namecheap).
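
In haproxy.cfg terms, the routing looks roughly like this (an illustrative fragment, not the repo's actual config; backend names and file paths are assumptions):

```
# Illustrative haproxy.cfg fragment, not the repo's actual configuration.
frontend https_in
    bind :443 ssl crt /etc/haproxy/certs/ouranos.helu.ca.pem
    use_backend be_gitea   if { hdr(host) -i gitea.ouranos.helu.ca }
    use_backend be_grafana if { hdr(host) -i grafana.ouranos.helu.ca }
    default_backend be_angelia

backend be_gitea
    server rosalind rosalind.incus:22082

backend be_grafana
    # Prospero terminates its own TLS, so HAProxy re-encrypts to port 443
    server prospero prospero.incus:443 ssl verify none

backend be_angelia
    server puck puck.incus:22281
```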

Route Table
| Subdomain | Backend | Service |
|-----------|---------|---------|
| ouranos.helu.ca (root) | puck.incus:22281 | Angelia (Django) |
| alertmanager.ouranos.helu.ca | prospero.incus:443 (SSL) | AlertManager |
| angelia.ouranos.helu.ca | puck.incus:22281 | Angelia (Django) |
| anythingllm.ouranos.helu.ca | rosalind.incus:22084 | AnythingLLM |
| arke.ouranos.helu.ca | sycorax.incus:25540 | Arke LLM Proxy |
| athena.ouranos.helu.ca | puck.incus:22481 | Athena (Django) |
| gitea.ouranos.helu.ca | rosalind.incus:22082 | Gitea |
| grafana.ouranos.helu.ca | prospero.incus:443 (SSL) | Grafana |
| hass.ouranos.helu.ca | oberon.incus:8123 | Home Assistant |
| id.ouranos.helu.ca | titania.incus:22081 | Casdoor SSO |
| icarlos.ouranos.helu.ca | puck.incus:22681 | Icarlos (Django) |
| jupyterlab.ouranos.helu.ca | puck.incus:22071 | JupyterLab (OAuth2-Proxy) |
| kairos.ouranos.helu.ca | puck.incus:22581 | Kairos (Django) |
| lobechat.ouranos.helu.ca | rosalind.incus:22081 | LobeChat |
| loki.ouranos.helu.ca | prospero.incus:443 (SSL) | Loki |
| mcp-switchboard.ouranos.helu.ca | oberon.incus:22785 | MCP Switchboard |
| nextcloud.ouranos.helu.ca | rosalind.incus:22083 | Nextcloud |
| openwebui.ouranos.helu.ca | oberon.incus:22088 | Open WebUI |
| peitho.ouranos.helu.ca | puck.incus:22981 | Peitho (Django) |
| pgadmin.ouranos.helu.ca | prospero.incus:443 (SSL) | PgAdmin 4 |
| prometheus.ouranos.helu.ca | prospero.incus:443 (SSL) | Prometheus |
| searxng.ouranos.helu.ca | oberon.incus:22073 | SearXNG (OAuth2-Proxy) |
| smtp4dev.ouranos.helu.ca | oberon.incus:22085 | smtp4dev |
| spelunker.ouranos.helu.ca | puck.incus:22881 | Spelunker (Django) |

Infrastructure Management

Quick Start
# Provision containers
cd terraform
terraform init
terraform plan
terraform apply

# Start all containers
cd ../ansible
source ~/env/agathos/bin/activate
ansible-playbook sandbox_up.yml

# Deploy all services
ansible-playbook site.yml

# Stop all containers
ansible-playbook sandbox_down.yml

Vault Management
# Edit secrets
ansible-vault edit \
  inventory/group_vars/all/vault.yml

# View secrets
ansible-vault view \
  inventory/group_vars/all/vault.yml

# Encrypt a new file
ansible-vault encrypt new_secrets.yml

Terraform Workflow
  1. Define — Containers, networks, and resources in *.tf files
  2. Plan — Review changes with terraform plan
  3. Apply — Provision with terraform apply
  4. Verify — Check outputs and container status

Ansible Workflow
  1. Bootstrap — Update packages, install essentials (apt_update.yml)
  2. Agents — Deploy Alloy and Node Exporter on all hosts
  3. Services — Configure databases, Docker, applications, observability
  4. Verify — Check service health and connectivity

S3 Storage Provisioning

Terraform provisions Incus S3 buckets for services requiring object storage:

| Service | Host | Purpose |
|---------|------|---------|
| Casdoor | Titania | User avatars and SSO resource storage |
| LobeChat | Rosalind | File uploads and attachments |

S3 credentials are stored as sensitive Terraform outputs and in Ansible Vault with the vault_*_s3_* prefix.
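
In provider terms this looks roughly like the following sketch (the resource and attribute names are assumptions based on the Incus Terraform provider, not copied from the repo):

```hcl
# Sketch only: names are assumptions, not the repo's actual *.tf code.
resource "incus_storage_bucket" "lobechat" {
  name = "lobechat"
  pool = "default" # storage pool backing the bucket; the pool name is illustrative
}

# Access keys would be created alongside the bucket and exported as
# sensitive outputs, mirroring the vault_*_s3_* entries in Ansible Vault.
```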

Ansible Automation

| Playbook | Host(s) | Purpose |
|----------|---------|---------|
| apt_update.yml | All | Update packages and install essentials |
| alloy/deploy.yml | All | Grafana Alloy log/metrics collection |
| prometheus/node_deploy.yml | All | Node Exporter metrics |
| docker/deploy.yml | Oberon, Ariel, Miranda, Puck, Rosalind, Sycorax, Caliban, Titania | Docker engine |
| smtp4dev/deploy.yml | Oberon | SMTP test server |
| pplg/deploy.yml | Prospero | Full observability stack + internal HAProxy + OAuth2-Proxy |
| postgresql/deploy.yml | Portia | PostgreSQL with all databases |
| postgresql_ssl/deploy.yml | Titania | Dedicated PostgreSQL for Casdoor |
| neo4j/deploy.yml | Ariel | Neo4j graph database |
| searxng/deploy.yml | Oberon | SearXNG privacy search |
| haproxy/deploy.yml | Titania | HAProxy TLS termination and routing |
| casdoor/deploy.yml | Titania | Casdoor SSO |
| mcpo/deploy.yml | Miranda | MCPO MCP proxy |
| openwebui/deploy.yml | Oberon | Open WebUI LLM interface |
| hass/deploy.yml | Oberon | Home Assistant |
| gitea/deploy.yml | Rosalind | Gitea self-hosted Git |
| nextcloud/deploy.yml | Rosalind | Nextcloud collaboration |

| Playbook | Host | Service |
|----------|------|---------|
| anythingllm/deploy.yml | Rosalind | AnythingLLM document AI |
| arke/deploy.yml | Sycorax | Arke LLM proxy |
| argos/deploy.yml | Miranda | Argos MCP web search server |
| caliban/deploy.yml | Caliban | Agent S MCP Server |
| certbot/deploy.yml | Titania | Let's Encrypt certificate renewal |
| gitea_mcp/deploy.yml | Miranda | Gitea MCP Server |
| gitea_runner/deploy.yml | Puck | Gitea CI/CD runner |
| grafana_mcp/deploy.yml | Miranda | Grafana MCP Server |
| jupyterlab/deploy.yml | Puck | JupyterLab + OAuth2-Proxy |
| kernos/deploy.yml | Caliban | Kernos MCP shell server |
| lobechat/deploy.yml | Rosalind | LobeChat AI chat |
| neo4j_mcp/deploy.yml | Miranda | Neo4j MCP Server |
| rabbitmq/deploy.yml | Oberon | RabbitMQ message queue |

| Playbook | Purpose |
|----------|---------|
| sandbox_up.yml | Start all Uranian host containers |
| site.yml | Full deployment orchestration |
| apt_update.yml | Update packages on all hosts |
| sandbox_down.yml | Gracefully stop all containers |

Data Flow Architecture

Observability Pipeline
flowchart LR
    subgraph hosts["All Hosts"]
        alloy["Alloy\n(syslog + journal)"]
        node_exp["Node Exporter\n(metrics)"]
    end
    subgraph prospero["Prospero"]
        loki["Loki\n(logs)"]
        prom["Prometheus\n(metrics)"]
        grafana["Grafana\n(dashboards)"]
        alert["AlertManager"]
    end
    pushover["Pushover\n(notifications)"]
    alloy -->|"HTTP push"| loki
    node_exp -->|"scrape 15s"| prom
    loki --> grafana
    prom --> grafana
    grafana --> alert
    alert -->|"webhook"| pushover
Service Integration Points
| Consumer | Provider | Connection |
|----------|----------|------------|
| All LLM apps | Arke (Sycorax) | http://sycorax.incus:25540 |
| Open WebUI, Arke, Gitea, Nextcloud, LobeChat | PostgreSQL (Portia) | portia.incus:5432 |
| Neo4j MCP | Neo4j (Ariel) | ariel.incus:7687 (Bolt) |
| MCP Switchboard | Docker API (Miranda) | tcp://miranda.incus:2375 |
| MCP Switchboard, Kairos, Spelunker | RabbitMQ (Oberon) | oberon.incus:5672 |
| All apps (SMTP) | smtp4dev (Oberon) | oberon.incus:22025 |
| All hosts (logs) | Loki (Prospero) | http://prospero.incus:3100 |
| All hosts (metrics) | Prometheus (Prospero) | http://prospero.incus:9090 |
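
Applied in an application's configuration, these integration points look like the following hypothetical .env fragment (the variable names vary per app and are illustrative, as are the credentials):

```
# Hypothetical .env fragment; endpoints come from the integration table above,
# variable names and credentials are illustrative.
DATABASE_URL=postgresql://openwebui:change-me@portia.incus:5432/openwebui
OPENAI_API_BASE_URL=http://sycorax.incus:25540/v1
AMQP_URL=amqp://guest:guest@oberon.incus:5672/
SMTP_HOST=oberon.incus
SMTP_PORT=22025
```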

Important Notes

Alloy Host Variables Required

Every host with alloy in its services list must define alloy_log_level in inventory/host_vars/<host>.incus.yml. The playbook will fail with an undefined variable error if this is missing.
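
A minimal host_vars sketch satisfying this requirement (the value shown is illustrative):

```yaml
# inventory/host_vars/oberon.incus.yml (sketch; the log level is illustrative)
alloy_log_level: info
```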

Alloy Syslog Listeners Required for Docker Services

Any Docker Compose service using the syslog logging driver must have a corresponding loki.source.syslog listener in the host's Alloy config template (ansible/alloy/<hostname>/config.alloy.j2). Missing listeners cause Docker containers to fail on start because the syslog driver cannot connect to its configured port.
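
A sketch of such a listener in Alloy's configuration syntax (the component label, port, and write target are assumptions, not copied from the repo's templates):

```
// Illustrative config.alloy.j2 fragment; label, port, and write target
// are assumptions.
loki.source.syslog "docker_services" {
  listener {
    address  = "0.0.0.0:1514"
    protocol = "tcp"
  }
  forward_to = [loki.write.default.receiver]
}
```

The matching Compose service would point its syslog logging driver at the same address, e.g. syslog-address: "tcp://127.0.0.1:1514".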

Local Terraform State

This project uses local Terraform state (no remote backend). Do not run terraform apply from multiple machines simultaneously.

Nested Docker

Docker runs nested inside the Incus containers, which requires security.nesting = true and an AppArmor override (lxc.apparmor.profile=unconfined) on all Docker-enabled hosts.
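
The relevant instance configuration, as it would appear in incus config show output (a sketch; the AppArmor override is passed through the raw.lxc key):

```yaml
# Sketch of the two instance config keys nested Docker needs.
config:
  security.nesting: "true"
  raw.lxc: lxc.apparmor.profile=unconfined
```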

Deployment Order

Prospero (observability) must be fully deployed before other hosts, as Alloy on every host pushes logs and metrics to prospero.incus. Run pplg/deploy.yml before site.yml on a fresh environment.
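
Under that constraint, the ordering in site.yml can be sketched as follows (a hypothetical fragment; the playbook paths come from the tables above, but how the repo actually expresses the ordering is an assumption):

```yaml
# Hypothetical site.yml fragment: observability first, so Alloy and
# Node Exporter on every host have somewhere to ship data.
- import_playbook: pplg/deploy.yml             # Prospero: Loki, Prometheus, Grafana
- import_playbook: alloy/deploy.yml            # all hosts: logs to Loki
- import_playbook: prometheus/node_deploy.yml  # all hosts: metrics for Prometheus
```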