# sso.s1q.dev

Base23 SSO for all services.

Docker Compose deployment of our Authentik-based SSO.
## Table of Contents

- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Upgrade](#upgrade)
- [Disaster recovery / restore](#disaster-recovery--restore)
- [Rebuild containers locally](#rebuild-containers-locally)
## Prerequisites

- dokploy

### Tailscale
printf "Enter preauthkey for Tailscale: " \
&& read -rs TAILSCALE_PREAUTHKEY \
&& curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null \
&& curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list \
&& apt-get update \
&& apt-get install tailscale \
&& tailscale up --login-server https://vpn.s1q.dev --authkey ${TAILSCALE_PREAUTHKEY} \
&& sleep 2 \
&& tailscale status \
&& unset TAILSCALE_PREAUTHKEY
### CrowdSec

#### Set up the CrowdSec repo
```bash
apt update \
&& apt upgrade -y \
&& apt install -y debian-archive-keyring \
&& apt install -y curl gnupg apt-transport-https \
&& mkdir -p /etc/apt/keyrings/ \
&& curl -fsSL https://packagecloud.io/crowdsec/crowdsec/gpgkey | gpg --dearmor > /etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg \
&& cat << EOF > /etc/apt/sources.list.d/crowdsec_crowdsec.list \
&& apt update
deb [signed-by=/etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg] https://packagecloud.io/crowdsec/crowdsec/any any main
deb-src [signed-by=/etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg] https://packagecloud.io/crowdsec/crowdsec/any any main
EOF
```
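To confirm apt actually picked up the new repo before installing:

```bash
# The candidate version should now come from packagecloud.io/crowdsec
apt-cache policy crowdsec
```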
#### Install CrowdSec

```bash
printf "Enter CrowdSec console enrollment key: " \
&& read -rs CROWDSEC_ENROLL_KEY \
&& apt install -y crowdsec crowdsec-firewall-bouncer-iptables \
&& cscli completion bash | tee /etc/bash_completion.d/cscli \
&& source ~/.bashrc \
&& cscli console enroll -e context ${CROWDSEC_ENROLL_KEY} \
&& unset CROWDSEC_ENROLL_KEY
```
Restart the CrowdSec service after accepting the enrollment in the CrowdSec console:

```bash
systemctl restart crowdsec; systemctl status crowdsec.service
```
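A quick way to verify both the enrollment and the firewall bouncer, using plain cscli commands:

```bash
# Enrollment / console options
cscli console status
# The iptables bouncer should have registered itself
cscli bouncers list
```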
#### Configure CrowdSec

Whitelist Tailscale IPs:
```bash
cat << EOF > /etc/crowdsec/parsers/s02-enrich/01-s1q-dev-tailscale.yaml \
&& systemctl restart crowdsec; journalctl -xef -u crowdsec.service
name: s1q-dev/tailscale ## Must be unique
description: "Whitelist events from Tailscale Subnet"
whitelist:
  reason: "Tailscale clients"
  cidr:
    - "100.64.0.0/10"
EOF
```
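To check that the whitelist actually fires, `cscli explain` can replay a sample log line through the parser pipeline (the sshd line below is made up; adjust it to a log source you actually ingest):

```bash
# The parse tree should show the s1q-dev/tailscale whitelist matching
cscli explain --type syslog \
  --log 'Feb  7 12:00:00 host sshd[1234]: Invalid user test from 100.64.0.5 port 55555'
```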
Whitelist my current public IPs:

```bash
mkdir -p /etc/crowdsec/postoverflows/s01-whitelist/ \
&& cat << EOF > /etc/crowdsec/postoverflows/s01-whitelist/01-s1q-dev-public-ips.yaml \
&& crowdsec -t && systemctl restart crowdsec; systemctl status crowdsec.service
name: s1q-dev/public-ips ## Must be unique
description: "Whitelist events from s1q-dev public IPs"
whitelist:
  reason: "s1q-dev Public IPs"
  expression:
    - evt.Overflow.Alert.Source.IP in LookupHost("r3w.de")
EOF
```
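Since this postoverflow whitelists whatever `r3w.de` resolves to at decision time, it's worth checking that the record is what you expect:

```bash
# These are the addresses LookupHost("r3w.de") will return
dig +short r3w.de A
dig +short r3w.de AAAA
```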
Add the Authentik integration:

```bash
cscli collections install firix/authentik \
&& cat << EOF > /etc/crowdsec/acquis.d/authentik.yaml \
&& crowdsec -t && systemctl restart crowdsec
---
source: docker
container_name_regexp:
  - sso-s1q-dev-de-server-*
  - sso-s1q-dev-de-worker-*
labels:
  type: authentik
EOF
```
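Once the authentik containers are producing logs, the new docker datasource should show up in the acquisition metrics:

```bash
# Look for the docker source in the acquisition table; "lines read"
# should be non-zero once the server/worker containers log something
cscli metrics
```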
Enable increasing ban time:

```bash
sed -i -e 's/^#duration_expr/duration_expr/g' /etc/crowdsec/profiles.yaml \
&& crowdsec -t && systemctl restart crowdsec
```
Set up notifications:
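No notification config is tracked in this repo, so as a minimal sketch: CrowdSec ships notification plugins (slack, email, http, ...) under `/etc/crowdsec/notifications/`. The webhook below is a placeholder, and the exact commented lines in `profiles.yaml` may differ per version:

```bash
# Sketch: wire up the bundled Slack plugin (placeholder webhook URL)
# 1) Set the webhook in the plugin config:
#      webhook: https://hooks.slack.com/services/XXX/YYY/ZZZ
vi /etc/crowdsec/notifications/slack.yaml
# 2) Uncomment the notifications block in the relevant profile(s):
#      notifications:
#       - slack_default
vi /etc/crowdsec/profiles.yaml
# 3) Validate, restart, and confirm the plugin is active
crowdsec -t && systemctl restart crowdsec && cscli notifications list
```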
## Installation

### Initial clone & configuration

- Create a Storage Box sub-account.
- Enter the username in `env.template`.
- Run the initial configuration script:
```bash
cd /root/apps \
&& git clone ssh://git@git.base23.de:222/base23/sso.s1q.dev.git \
&& cd sso.s1q.dev \
&& ./scripts/init.sh
```
- Copy the generated SSH key to the Hetzner Storage Box for backups:
```bash
TARGET_DOMAIN=cloud.backup.base23.de \
TARGET_KEY_TYPES="ecdsa-sha2-nistp521,ed25519,ed25519-sk,rsa,dsa,ecdsa,ecdsa-sk" \
TARGET_IPV4=$(dig +short "${TARGET_DOMAIN}" A | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$') \
TARGET_IPV6=$(dig +short "${TARGET_DOMAIN}" AAAA | grep -E '^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))$') \
&& cat ./data/restic/ssh/id_ed25519.pub | ssh -p23 u291924-sub4@${TARGET_DOMAIN} install-ssh-key \
&& ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_DOMAIN} > ./data/restic/ssh/known_hosts \
&& ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_IPV4} >> ./data/restic/ssh/known_hosts \
&& ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_IPV6} >> ./data/restic/ssh/known_hosts \
&& ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_DOMAIN} >> ./data/restic/ssh/known_hosts \
&& ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_IPV4} >> ./data/restic/ssh/known_hosts \
&& ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_IPV6} >> ./data/restic/ssh/known_hosts
```
### First run

```bash
./scripts/compose.sh build --no-cache \
  --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
  --build-arg SRC_REV=$(git rev-parse --short HEAD) \
&& ./scripts/compose.sh up -d; ./scripts/compose.sh logs -f
```
## Upgrade

### Test

This is intended for testing upgrades before they are rolled out to prod.
- Check that the backups are up to date:

  ```bash
  ./scripts/compose.sh run --rm restore-cli /usr/local/bin/restic snapshots
  ```
- Create a new branch: `git checkout -b <version>`.
- Download the docker compose file for the version you want to upgrade to:

  ```bash
  curl -fsSL -o docker-compose.yml https://goauthentik.io/version/<version>/docker-compose.yml
  ```
- Update `AUTHENTIK_TAG` to the desired tag in `env.test.template`.
- Check the upstream `docker-compose.yml` against ours for configuration changes (see the diff sketch after this list). Check the PostgreSQL and Redis docker tags: minor revisions of PostgreSQL should be fine; check the changelogs for any issues, and if none are present, raise to the latest minor version (e.g. 16.6 -> 16.9). Redis should be less problematic to upgrade, but check nonetheless.
- Run `diff --color='auto' env.test.template .env` to display the diff between `env.test.template` and `.env`.
- Port the changes to `.env`.
- Pull the new images and restart the stack:

  ```bash
  ./scripts/compose.sh pull
  ./scripts/compose.sh down
  ./scripts/compose.sh up -d; ./scripts/compose.sh logs -f
  ```
- Check the logs for any issues during startup. Check that https://sso.test.base23.de is reachable and test the SSO login (e.g. https://whoami.test.base23.de).
- Apply the changes from test to the prod files (`docker-compose.<stage>.yml`, `env.<stage>.template`), commit & push the changes to the repo in a new branch, and create a merge request in preparation for the prod upgrade.
### Prod

It is expected that the upgrade has already been performed and tested on https://sso.test.base23.de, and that the changes have been merged into main.
- Check that the backups are up to date:

  ```bash
  ./scripts/compose.sh run --rm restore-cli /usr/local/bin/restic snapshots
  ```
- Create a new branch: `git checkout -b <version>`.
- Download the docker compose file for the version you want to upgrade to:

  ```bash
  curl -fsSL -o docker-compose.yml https://goauthentik.io/version/<version>/docker-compose.yml
  ```
- Update `AUTHENTIK_TAG` to the desired tag in `env.prod.template`.
- Commit & push the changes to the repo.
- Run `diff --color='auto' env.prod.template .env` to display the diff between `env.prod.template` and `.env`.
- Port the changes to `.env`.
- Pull the new images and restart the stack:

  ```bash
  ./scripts/compose.sh pull
  ./scripts/compose.sh down
  ./scripts/compose.sh up -d; ./scripts/compose.sh logs -f
  ```
## Disaster recovery / restore

**IMPORTANT:** You have to use different docker CLI clients on prod and test:

- Prod
  - Docker: `docker`
  - Docker compose: `docker compose`
- Test
  - Docker: `docker`
  - Docker compose: `docker-compose-2.32.4`

For ease of readability I'll use `docker` and `docker compose` in the documentation below; please replace them accordingly for restores on test!
- Run the restore cli:

  ```bash
  docker compose run --rm restore-cli
  ```

- Run the restore command and follow its instructions:

  ```bash
  restore
  ```

- If the restore was successful, exit the restore container. **DO NOT START THE APPLICATION YET!**
- Run the PostgreSQL container without starting the main application:

  ```bash
  docker compose run --rm postgresql
  ```
- Open another shell in the sso git directory.
- Execute a shell in the running PostgreSQL container (replace `<containerid>` with the actual container id):

  ```bash
  docker exec -it sso-base23-de-postgresql-run-<containerid> bash
  ```

- If the database already contains data, delete and recreate it:

  ```bash
  dropdb -U ${PG_USER:-authentik} ${PG_DB:-authentik} \
  && createdb -U ${PG_USER:-authentik} ${PG_DB:-authentik}
  ```

- Restore the database (see the sanity check after this list):

  ```bash
  psql -U ${PG_USER:-authentik} -d ${PG_DB:-authentik} -f /var/lib/postgresql/backups/authentik.sql
  ```
- After the database is restored, exit the container.
- Now it's safe to start the complete application stack again:

  ```bash
  docker compose up -d; docker compose logs -f
  ```
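For the sanity check referenced in the restore step above (run inside the postgresql run-container, before exiting it):

```bash
# The restored schema should list authentik's tables; an empty result
# means the import did not work
psql -U ${PG_USER:-authentik} -d ${PG_DB:-authentik} -c '\dt'
```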
## Rebuild containers locally

```bash
docker compose build --no-cache \
  --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
  --build-arg SRC_REV=$(git rev-parse --short HEAD)
```