Docker compose deployment for my authentik instance, sso.s1q.dev.
				
			
# sso.base23.de - Base23 SSO for all services

Authentik-based SSO for our services.

## Table of Contents

## Prerequisites

### Server Setup

Install Docker, basic tooling, and lego:
```shell
apt update \
  && apt upgrade -y \
  && for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt remove $pkg; done \
  && apt install ca-certificates curl \
  && install -m 0755 -d /etc/apt/keyrings \
  && curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc \
  && chmod a+r /etc/apt/keyrings/docker.asc \
  && echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null \
  && apt update \
  && apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin \
  && mkdir -p /var/lib/apps \
  && ln -s /var/lib/apps \
  && apt install -y git vim \
  && TEMP_DIR=$(mktemp -d) \
  && curl -fsSL https://github.com/go-acme/lego/releases/download/v4.20.2/lego_v4.20.2_linux_amd64.tar.gz -o ${TEMP_DIR}/lego_v4.20.2_linux_amd64.tar.gz \
  && tar xzvf ${TEMP_DIR}/lego_v4.20.2_linux_amd64.tar.gz --directory=${TEMP_DIR} \
  && install -m 755 -o root -g root "${TEMP_DIR}/lego" "/usr/local/bin" \
  && rm -rf ${TEMP_DIR} \
  && unset TEMP_DIR
```
### Tailscale
```shell
printf "Enter preauthkey for Tailscale: " \
  && read -rs TAILSCALE_PREAUTHKEY \
  && curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null \
  && curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list \
  && apt-get update \
  && apt-get install tailscale \
  && tailscale up --login-server https://vpn.base23.de --authkey ${TAILSCALE_PREAUTHKEY} --advertise-tags=tag:prod-servers \
  && sleep 2 \
  && tailscale status \
  && unset TAILSCALE_PREAUTHKEY
```
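The `read -rs` at the top of this block keeps the preauth key out of the terminal echo and out of the shell history. A minimal bash sketch with a hypothetical key piped in (key value and file path are made up for illustration):

```shell
# Hypothetical key piped to `read -rs`: -r keeps backslashes literal,
# -s suppresses terminal echo (bash-specific).
printf 'tskey-demo-123\n' \
  | { read -rs TAILSCALE_PREAUTHKEY \
      && echo "read ${#TAILSCALE_PREAUTHKEY} characters"; } \
  > /tmp/tailscale-read-demo.txt
cat /tmp/tailscale-read-demo.txt
```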
### Base23 Docker registry login

```shell
docker login -u gitlab+deploy-token-5 registry.git.base23.de
```
### CrowdSec

#### Setup CrowdSec Repo
```shell
apt update \
  && apt upgrade -y \
  && apt install -y debian-archive-keyring \
  && apt install -y curl gnupg apt-transport-https \
  && mkdir -p /etc/apt/keyrings/ \
  && curl -fsSL https://packagecloud.io/crowdsec/crowdsec/gpgkey | gpg --dearmor > /etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg \
  && cat << EOF > /etc/apt/sources.list.d/crowdsec_crowdsec.list \
  && apt update
deb [signed-by=/etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg] https://packagecloud.io/crowdsec/crowdsec/any any main
deb-src [signed-by=/etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg] https://packagecloud.io/crowdsec/crowdsec/any any main
EOF
```
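Note the heredoc placement above: the `deb`/`deb-src` body and the `EOF` terminator follow the *entire* `&&` chain, which is valid shell because the heredoc body is read from the lines after the complete command line. A self-contained sketch of the same pattern with a throwaway file and a made-up repo URL:

```shell
# Same pattern as above: the heredoc body follows the whole command list,
# yet is still delivered to `cat` before the chained echo runs.
cat << EOF > /tmp/crowdsec-repo-demo.list \
  && echo "sources file written"
deb [signed-by=/path/to/key.gpg] https://example.invalid/repo any main
deb-src [signed-by=/path/to/key.gpg] https://example.invalid/repo any main
EOF
cat /tmp/crowdsec-repo-demo.list
```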
#### Install CrowdSec
```shell
printf "Enter CrowdSec context: " \
  && read -rs CROWDSEC_CONTEXT \
  && apt install -y crowdsec crowdsec-firewall-bouncer-iptables \
  && cscli completion bash | tee /etc/bash_completion.d/cscli \
  && source ~/.bashrc \
  && cscli console enroll -e context ${CROWDSEC_CONTEXT} \
  && unset CROWDSEC_CONTEXT
```
Restart the CrowdSec service after accepting the enrollment in the CrowdSec console:

```shell
systemctl restart crowdsec; systemctl status crowdsec.service
```
#### Configure CrowdSec

Whitelist Tailscale IPs:

```shell
cat << EOF > /etc/crowdsec/parsers/s02-enrich/01-base23-tailscale.yaml \
  && systemctl restart crowdsec; journalctl -xef -u crowdsec.service
name: base23/tailscale ## Must be unique
description: "Whitelist events from Tailscale subnet"
whitelist:
  reason: "Tailscale clients"
  cidr:
    - "100.64.0.0/10"
EOF
```
Whitelist our current public IPs:

```shell
mkdir -p /etc/crowdsec/postoverflows/s01-whitelist/ \
  && cat << EOF > /etc/crowdsec/postoverflows/s01-whitelist/01-base23-public-ips.yaml \
  && crowdsec -t && systemctl restart crowdsec; systemctl status crowdsec.service
name: base23/public-ips ## Must be unique
description: "Whitelist events from base23 public IPs"
whitelist:
  reason: "Base23 Public IPs"
  expression:
    - evt.Overflow.Alert.Source.IP in LookupHost("asterix.ddns.base23.de")
EOF
```
Add the Authentik integration:

```shell
cscli collections install firix/authentik \
  && cat << EOF > /etc/crowdsec/acquis.d/authentik.yaml \
  && crowdsec -t && systemctl restart crowdsec
---
source: docker
container_name_regexp:
  - sso-base23-de-server-*
  - sso-base23-de-worker-*
labels:
  type: authentik
EOF
```
Enable increasing ban time:

```shell
sed -i -e 's/^#duration_expr/duration_expr/g' /etc/crowdsec/profiles.yaml \
  && crowdsec -t && systemctl restart crowdsec
```
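The `sed` expression simply uncomments every line starting with `#duration_expr` in `profiles.yaml`. A sketch on a hypothetical copy of such a line (the `Sprintf` expression here is illustrative, not necessarily what your `profiles.yaml` contains):

```shell
# Hypothetical commented-out duration_expr line:
printf '#duration_expr: Sprintf("%%dh", (GetDecisionsCount(Alert.GetValue()) + 1) * 4)\n' \
  > /tmp/profiles-demo.yaml
# Same in-place substitution as above, on the demo file:
sed -i -e 's/^#duration_expr/duration_expr/g' /tmp/profiles-demo.yaml
cat /tmp/profiles-demo.yaml
```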
Setup notifications:
## Installation

### Clone & configure initially

- Create a Storage Box sub account.
- Enter the username in `env.template`.
- Run the initial configuration script:
```shell
cd /root/apps \
  && git clone ssh://git@git.base23.de:222/base23/sso.base23.de.git \
  && cd sso.base23.de \
  && ./scripts/init.sh
```
- Use the generated SSH key and copy it to the Hetzner Storage Box for backups:
```shell
TARGET_DOMAIN=cloud.backup.base23.de \
TARGET_KEY_TYPES="ecdsa-sha2-nistp521,ed25519,ed25519-sk,rsa,dsa,ecdsa,ecdsa-sk" \
TARGET_IPV4=$(dig +short "${TARGET_DOMAIN}" A | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$') \
TARGET_IPV6=$(dig +short "${TARGET_DOMAIN}" AAAA | grep -E '^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))$') \
  && cat ./data/restic/ssh/id_ed25519.pub | ssh -p23 u291924-sub4@${TARGET_DOMAIN} install-ssh-key \
  && ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_DOMAIN} > ./data/restic/ssh/known_hosts \
  && ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_IPV4} >> ./data/restic/ssh/known_hosts \
  && ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_IPV6} >> ./data/restic/ssh/known_hosts \
  && ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_DOMAIN} >> ./data/restic/ssh/known_hosts \
  && ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_IPV4} >> ./data/restic/ssh/known_hosts \
  && ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_IPV6} >> ./data/restic/ssh/known_hosts
```
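The `grep -E` filters after `dig +short` matter because `dig` can emit CNAME targets alongside addresses, and only plain address lines should be fed to `ssh-keyscan`. A sketch of the IPv4 filter with hypothetical resolver output:

```shell
# Hypothetical `dig +short A` output: one CNAME target plus one A record.
# Only the dotted-quad line survives the anchored grep.
printf 'storagebox.example.net.\n203.0.113.7\n' \
  | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' \
  > /tmp/dig-filter-demo.txt
cat /tmp/dig-filter-demo.txt
```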
### First run

```shell
./scripts/compose.sh build --no-cache \
    --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
    --build-arg SRC_REV=$(git rev-parse --short HEAD) \
  && ./scripts/compose.sh up -d; ./scripts/compose.sh logs -f
```
## Upgrade

### Test
- Check if the backups are up to date: `./scripts/compose.sh run --rm restore-cli /usr/local/bin/restic snapshots`
- Create a new branch: `git checkout -b <version>`
- Download the docker compose file for the version you want to update: `curl -fsSL -o docker-compose.yml https://goauthentik.io/version/<version>/docker-compose.yml`
- Update `AUTHENTIK_TAG` to the desired tag in `env.test.template`.
- Commit & push the changes to the repo.
- Run `diff --color='auto' env.test.template .env` to display the diff between `env.test.template` and `.env`.
- Port the changes to `.env`.
- Run `./scripts/compose.sh pull`
- Run `./scripts/compose.sh down`
- Run `./scripts/compose.sh up -d; ./scripts/compose.sh logs -f`
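The diff step surfaces exactly which variables differ between the refreshed template and the live `.env`. A sketch with hypothetical one-line files (tags are made up):

```shell
# Hypothetical template vs. live env file:
printf 'AUTHENTIK_TAG=2024.8.3\n' > /tmp/env.template.demo
printf 'AUTHENTIK_TAG=2024.6.4\n' > /tmp/env.demo
# diff exits non-zero when files differ, hence the `|| true` guard here.
diff /tmp/env.template.demo /tmp/env.demo > /tmp/env-diff-demo.txt || true
cat /tmp/env-diff-demo.txt
```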
### Prod

- Check if the backups are up to date: `./scripts/compose.sh run --rm restore-cli /usr/local/bin/restic snapshots`
- Create a new branch: `git checkout -b <version>`
- Download the docker compose file for the version you want to update: `curl -fsSL -o docker-compose.yml https://goauthentik.io/version/<version>/docker-compose.yml`
- Update `AUTHENTIK_TAG` to the desired tag in `env.prod.template`.
- Commit & push the changes to the repo.
- Run `diff --color='auto' env.prod.template .env` to display the diff between `env.prod.template` and `.env`.
- Port the changes to `.env`.
- Run `./scripts/compose.sh pull`
- Run `./scripts/compose.sh down`
- Run `./scripts/compose.sh up -d; ./scripts/compose.sh logs -f`
## Disaster recovery / restore

IMPORTANT: You have to use different Docker CLI clients on prod and test.

- Prod
  - Docker: `docker`
  - Docker compose: `docker compose`
- Test
  - Docker: `docker`
  - Docker compose: `docker-compose-2.32.4`

For ease of readability the documentation below uses `docker` and `docker compose`;
please replace them accordingly for restores on test!
- Run the restore CLI: `docker compose run --rm restore-cli`
- Run the `restore` command and follow its instructions.
- If the restore was successful, exit the restore container. DO NOT START THE APPLICATION YET!
- Run the PostgreSQL container without starting the main application: `docker compose run --rm postgresql`
- Open another shell in the sso git directory.
- Execute a shell in the running PostgreSQL container (replace `<containerid>` with the actual container id): `docker exec -it sso-base23-de-postgresql-run-<containerid> bash`
- If the database already contains data, delete and recreate it: `dropdb -U ${PG_USER:-authentik} ${PG_DB:-authentik}`, then `createdb -U ${PG_USER:-authentik} ${PG_DB:-authentik}`
- Restore the database: `psql -U ${PG_USER:-authentik} -d ${PG_DB:-authentik} -f /var/lib/postgresql/backups/authentik.sql`
- After the database is restored, exit the container.
- Now it's safe to start the complete application stack again: `docker compose up -d; docker compose logs -f`
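The `${PG_USER:-authentik}` and `${PG_DB:-authentik}` expansions in the drop/create/restore commands fall back to `authentik` whenever the variables are unset inside the container. A quick illustration of that expansion rule:

```shell
# ${VAR:-default} substitutes the default only when VAR is unset or empty.
unset PG_USER PG_DB
echo "dropdb -U ${PG_USER:-authentik} ${PG_DB:-authentik}" > /tmp/pg-expansion-demo.txt
# With PG_USER set, the set value wins; PG_DB still falls back.
PG_USER=custom_user
echo "dropdb -U ${PG_USER:-authentik} ${PG_DB:-authentik}" >> /tmp/pg-expansion-demo.txt
cat /tmp/pg-expansion-demo.txt
```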
## Rebuild containers locally

```shell
docker compose build --no-cache \
  --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
  --build-arg SRC_REV=$(git rev-parse --short HEAD)
```
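`BUILD_DATE` resolves to an ISO-8601 UTC timestamp and `SRC_REV` to the short commit hash. The timestamp part can be checked standalone:

```shell
# Produces e.g. 2025-01-01T12:00:00Z (UTC, second precision).
date -u +'%Y-%m-%dT%H:%M:%SZ' > /tmp/build-date-demo.txt
cat /tmp/build-date-demo.txt
```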