
sso.base23.de - Base23 SSO for all services

Authentik-based SSO for our services.

Table of Contents

Prerequisites

Server Setup

apt update \
  && apt upgrade -y \
  && for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do apt remove -y $pkg; done \
  && apt install ca-certificates curl \
  && install -m 0755 -d /etc/apt/keyrings \
  && curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc \
  && chmod a+r /etc/apt/keyrings/docker.asc \
  && echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null \
  && apt update \
  && apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin \
  && mkdir -p /var/lib/apps \
  && ln -s /var/lib/apps /root/apps \
  && apt install -y git vim \
  && TEMP_DIR=$(mktemp -d) \
  && curl -fsSL https://github.com/go-acme/lego/releases/download/v4.20.2/lego_v4.20.2_linux_amd64.tar.gz -o ${TEMP_DIR}/lego_v4.20.2_linux_amd64.tar.gz \
  && tar xzvf ${TEMP_DIR}/lego_v4.20.2_linux_amd64.tar.gz --directory=${TEMP_DIR} \
  && install -m 755 -o root -g root "${TEMP_DIR}/lego" "/usr/local/bin" \
  && rm -rf ${TEMP_DIR} \
  && unset TEMP_DIR
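Once the chain finishes, a quick sanity check (a sketch; the binary names are taken from the install steps above) confirms everything landed on PATH:

```shell
# Report any expected tool that did not make it onto PATH.
missing=0
for bin in docker git vim lego; do
  command -v "$bin" >/dev/null 2>&1 || { echo "missing: $bin"; missing=1; }
done
if [ "$missing" -eq 0 ]; then echo "all tools present"; fi
```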

Tailscale

printf "Enter preauthkey for Tailscale: " \
  && read -rs TAILSCALE_PREAUTHKEY \
  && curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null \
  && curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list \
  && apt-get update \
  && apt-get install -y tailscale \
  && tailscale up --login-server https://vpn.base23.de --authkey ${TAILSCALE_PREAUTHKEY} --advertise-tags=tag:prod-servers \
  && sleep 2 \
  && tailscale status \
  && unset TAILSCALE_PREAUTHKEY

Base23 Docker registry login

docker login -u gitlab+deploy-token-5 registry.git.base23.de

CrowdSec

Setup CrowdSec Repo

apt update \
  && apt upgrade -y \
  && apt install -y debian-archive-keyring \
  && apt install -y curl gnupg apt-transport-https \
  && mkdir -p /etc/apt/keyrings/ \
  && curl -fsSL https://packagecloud.io/crowdsec/crowdsec/gpgkey | gpg --dearmor > /etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg \
  && cat << EOF > /etc/apt/sources.list.d/crowdsec_crowdsec.list \
  && apt update
deb [signed-by=/etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg] https://packagecloud.io/crowdsec/crowdsec/any any main
deb-src [signed-by=/etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg] https://packagecloud.io/crowdsec/crowdsec/any any main
EOF
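Note the ordering trick used here: the heredoc body is written to the .list file first, and only then does the chained `apt update` run. The same shape against a throwaway file:

```shell
# The heredoc body goes to the file before the chained command executes;
# grep then counts the two "deb*" lines just written (example repo URL).
tmp=$(mktemp)
cat << EOF > "$tmp" \
  && grep -c '^deb' "$tmp"
deb [signed-by=/etc/apt/keyrings/example.gpg] https://example.org/any any main
deb-src [signed-by=/etc/apt/keyrings/example.gpg] https://example.org/any any main
EOF
rm -f "$tmp"
```

`grep -c` prints 2 here, since both the `deb` and the `deb-src` line match the anchored pattern.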

Install CrowdSec

Install CrowdSec:

printf "Enter CrowdSec enrollment key: " \
  && read -rs CROWDSEC_ENROLL_KEY \
  && apt install -y crowdsec crowdsec-firewall-bouncer-iptables \
  && cscli completion bash | tee /etc/bash_completion.d/cscli \
  && source ~/.bashrc \
  && cscli console enroll -e context ${CROWDSEC_ENROLL_KEY} \
  && unset CROWDSEC_ENROLL_KEY

Restart the CrowdSec service after accepting the enrollment in the CrowdSec console:

systemctl restart crowdsec; systemctl status crowdsec.service

Configure CrowdSec

Whitelist Tailscale IPs:

cat << EOF > /etc/crowdsec/parsers/s02-enrich/01-base23-tailscale.yaml \
  && systemctl restart crowdsec; journalctl -xef -u crowdsec.service
name: base23/tailscale ## Must be unique
description: "Whitelist events from Tailscale Subnet"
whitelist:
  reason: "Tailscale clients"
  cidr: 
    - "100.64.0.0/10"
EOF

Whitelist our current Public IPs:

mkdir -p /etc/crowdsec/postoverflows/s01-whitelist/ \
  && cat << EOF > /etc/crowdsec/postoverflows/s01-whitelist/01-base23-public-ips.yaml \
  && crowdsec -t && systemctl restart crowdsec; systemctl status crowdsec.service
name: base23/public-ips ## Must be unique
description: "Whitelist events from base23 public IPs"
whitelist:
  reason: "Base23 Public IPs"
  expression:
    - evt.Overflow.Alert.Source.IP in LookupHost("asterix.ddns.base23.de")
EOF

Add Authentik integration:

cscli collections install firix/authentik \
  && cat << EOF > /etc/crowdsec/acquis.d/authentik.yaml \
  && crowdsec -t && systemctl restart crowdsec
---
source: docker
container_name_regexp:
  - sso-base23-de-server-*
  - sso-base23-de-worker-*
labels:
  type: authentik
EOF

Enable increasing ban time:

sed -i -e 's/^#duration_expr/duration_expr/g' /etc/crowdsec/profiles.yaml \
  && crowdsec -t && systemctl restart crowdsec
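What the sed actually flips, shown on a throwaway copy (the commented-out line shape is an assumption about the stock profiles.yaml):

```shell
# Uncomment the duration_expr key the same way the command above does.
f=$(mktemp)
echo '#duration_expr: Sprintf("%dh", (GetDecisionsCount(Alert.GetValue()) + 1) * 4)' > "$f"
sed -i -e 's/^#duration_expr/duration_expr/g' "$f"
grep '^duration_expr' "$f"
rm -f "$f"
```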

Set up notifications:
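The notification setup is not documented here yet. As a hedged starting point: CrowdSec ships an http notification plugin; a minimal sketch of /etc/crowdsec/notifications/http.yaml (endpoint and headers below are placeholders, verify the keys against your installed version):

```yaml
# Sketch only -- placeholder endpoint; adjust to your alerting target.
type: http
name: http_default
log_level: info
format: |
  {{ . | toJson }}
url: https://alerts.example.invalid/webhook
method: POST
headers:
  Content-Type: application/json
```

Reference the plugin name under `notifications:` in the matching profile in /etc/crowdsec/profiles.yaml, then validate and restart: `crowdsec -t && systemctl restart crowdsec`.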

Installation

Clone & configure initially

  1. Create a Storage Box sub account.
  2. Enter the username into env.template.
  3. Run the initial configuration script:
cd /root/apps \
  && git clone ssh://git@git.base23.de:222/base23/sso.base23.de.git \
  && cd sso.base23.de \
  && ./scripts/init.sh
  4. Use the generated SSH key and copy it to the Hetzner Storage Box for backups:
TARGET_DOMAIN=cloud.backup.base23.de \
TARGET_KEY_TYPES="ecdsa-sha2-nistp521,ed25519,ed25519-sk,rsa,dsa,ecdsa,ecdsa-sk" \
TARGET_IPV4=$(dig +short "${TARGET_DOMAIN}" A | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$') \
TARGET_IPV6=$(dig +short "${TARGET_DOMAIN}" AAAA | grep -E '^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))$') \
  && cat ./data/restic/ssh/id_ed25519.pub | ssh -p23 u291924-sub4@${TARGET_DOMAIN} install-ssh-key \
  && ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_DOMAIN} > ./data/restic/ssh/known_hosts \
  && ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_IPV4} >> ./data/restic/ssh/known_hosts \
  && ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_IPV6} >> ./data/restic/ssh/known_hosts \
  && ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_DOMAIN} >> ./data/restic/ssh/known_hosts \
  && ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_IPV4} >> ./data/restic/ssh/known_hosts \
  && ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_IPV6} >> ./data/restic/ssh/known_hosts
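The grep filters in the TARGET_IPV4/TARGET_IPV6 assignments are there because dig +short can emit CNAME targets alongside addresses; the anchored patterns keep only address lines. A minimal sketch with literal (example) answers:

```shell
# Simulated "dig +short" output: a CNAME target followed by the A record.
# The anchored IPv4 pattern drops the CNAME line and keeps the address.
printf '%s\n' 'cname.example.com.' '203.0.113.7' \
  | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'
# prints: 203.0.113.7
```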

First run

docker compose build --no-cache \
    --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
    --build-arg SRC_REV=$(git rev-parse --short HEAD) \
  && docker compose up -d; docker compose logs -f
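The two --build-arg values are plain shell substitutions: SRC_REV is the short commit hash from git rev-parse, and BUILD_DATE an ISO-8601 UTC timestamp:

```shell
# The same expansion used in the build above; the value varies with the clock.
BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ')
echo "$BUILD_DATE"
```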

Upgrade

Test

This is intended for testing the upgrades before rollout on prod.

  1. Check if the backups are up to date: docker compose run --rm restore-cli /usr/local/bin/restic snapshots
  2. Update AUTHENTIK_TAG to the desired tag in env.test.template.
  3. Check the upstream docker-compose.yml file against ours for configuration changes. Check the PostgreSQL and Redis Docker tags. Minor PostgreSQL revisions should be fine: check the changelogs for known issues and, if none are present, raise to the latest minor version (e.g. 16.6 -> 16.9). Redis upgrades are usually less problematic, but check nonetheless.
  4. Run diff --color='auto' env.test.template .env to display the diff between env.test.template and .env.
  5. Port the changes made to .env.
  6. docker-compose-2.32.4 pull
  7. docker-compose-2.32.4 down
  8. docker-compose-2.32.4 up -d; docker-compose-2.32.4 logs -f
  9. Check the logs for any issues during startup. Check if https://sso.test.base23.de is available and test the SSO login (e.g. https://whoami.test.base23.de).
  10. Apply the changes made for test to the prod files (docker-compose.<stage>.yml, env.<stage>.template), commit and push them to the repo in a new branch, and create a merge request in preparation for the prod upgrade.
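In the step-4 diff, lines prefixed `<` come from the template (the new target state) and lines prefixed `>` from the live .env. A throwaway sketch with made-up values:

```shell
# Stand-ins for env.test.template and .env (values invented for the demo).
printf 'AUTHENTIK_TAG=2025.6.0\n' > /tmp/env.template.demo
printf 'AUTHENTIK_TAG=2025.2.1\n' > /tmp/.env.demo
# diff exits 1 when the files differ, hence the trailing "|| true".
diff --color='auto' /tmp/env.template.demo /tmp/.env.demo || true
rm -f /tmp/env.template.demo /tmp/.env.demo
```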

Prod

It is expected that the upgrade has already been performed and tested on https://sso.test.base23.de, and that the changes have been merged into main.

  1. Perform the upgrade on test first.
  2. Check if the backups are up to date: docker compose run --rm restore-cli /usr/local/bin/restic snapshots
  3. Perform a git pull
  4. Run diff --color='auto' env.prod.template .env to display the diff between env.prod.template and .env.
  5. Run diff --color='auto' docker-compose.prod.yml docker-compose.yml to display the diff between docker-compose.prod.yml and docker-compose.yml.
  6. Port the changes made to .env and docker-compose.yml
  7. docker compose pull
  8. docker compose down
  9. docker compose up -d; docker compose logs -f
  10. Check the logs for any issues during startup. Check if https://sso.base23.de is available and test the SSO login (e.g. https://vpn.base23.de/admin).

Disaster recovery / restore

IMPORTANT:
You have to use different Docker CLI clients on prod and test.

  • Prod
    • Docker: docker
    • Docker compose: docker compose
  • Test
    • Docker: docker
    • Docker compose: docker-compose-2.32.4

For readability, the documentation below uses docker and docker compose; please replace them accordingly for restores on test!

  1. Run the restore cli
    docker compose run --rm restore-cli
    
  2. Run the restore command and follow its instructions
    restore
    
  3. If the restore was successful, exit the restore container.
    DO NOT START THE APPLICATION YET!
  4. Run the PostgreSQL container without starting the main application
     docker compose run --rm postgresql
    
  5. Open another shell in the sso git directory.
  6. Execute a shell in the running PostgreSQL container (replace <containerid> with the actual container id)
    docker exec -it sso-base23-de-postgresql-run-<containerid> bash
    
  7. If the database already contains data, delete and recreate it:
    dropdb -U ${PG_USER:-authentik} ${PG_DB:-authentik}
    createdb -U ${PG_USER:-authentik} ${PG_DB:-authentik}
    
  8. Restore the database
    psql -U ${PG_USER:-authentik} -d ${PG_DB:-authentik} -f /var/lib/postgresql/backups/authentik.sql
    
  9. After the database is restored, exit the container
  10. Now it's safe to start the complete application stack again
    docker compose up -d; docker compose logs -f
    

Rebuild containers locally

docker compose build --no-cache \
  --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
  --build-arg SRC_REV=$(git rev-parse --short HEAD)