sso.base23.de - Base23 SSO for all services
Authentik-based SSO for our services.
Table of Contents
Prerequisites
Server Setup
apt update \
&& apt upgrade -y \
&& for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do apt remove -y $pkg; done \
&& apt install ca-certificates curl \
&& install -m 0755 -d /etc/apt/keyrings \
&& curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc \
&& chmod a+r /etc/apt/keyrings/docker.asc \
&& echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
tee /etc/apt/sources.list.d/docker.list > /dev/null \
&& apt update \
&& apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin \
&& mkdir -p /var/lib/apps \
&& ln -s /var/lib/apps /root/apps \
&& apt install -y git vim \
&& TEMP_DIR=$(mktemp -d) \
&& curl -fsSL https://github.com/go-acme/lego/releases/download/v4.20.2/lego_v4.20.2_linux_amd64.tar.gz -o ${TEMP_DIR}/lego_v4.20.2_linux_amd64.tar.gz \
&& tar xzvf ${TEMP_DIR}/lego_v4.20.2_linux_amd64.tar.gz --directory=${TEMP_DIR} \
&& install -m 755 -o root -g root "${TEMP_DIR}/lego" "/usr/local/bin" \
&& rm -rf ${TEMP_DIR} \
&& unset TEMP_DIR
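The lego installation above follows a download-to-tempdir, install(1), cleanup sequence. The same flow can be exercised with a stub file in place of the real tarball (no network; /tmp/lego-demo is a hypothetical path, and the -o root -g root options from the real command are omitted here since they require root):

```shell
# Stand-in for the extracted lego binary; install(1) copies it with mode 755
TEMP_DIR=$(mktemp -d) \
&& printf '#!/bin/sh\necho lego-stub\n' > "${TEMP_DIR}/lego" \
&& install -m 755 "${TEMP_DIR}/lego" /tmp/lego-demo \
&& rm -rf "${TEMP_DIR}" \
&& sh /tmp/lego-demo
```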
Tailscale
printf "Enter preauthkey for Tailscale: " \
&& read -rs TAILSCALE_PREAUTHKEY \
&& curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.noarmor.gpg | tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null \
&& curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.tailscale-keyring.list | tee /etc/apt/sources.list.d/tailscale.list \
&& apt-get update \
&& apt-get install -y tailscale \
&& tailscale up --login-server https://vpn.base23.de --authkey ${TAILSCALE_PREAUTHKEY} --advertise-tags=tag:prod-servers \
&& sleep 2 \
&& tailscale status \
&& unset TAILSCALE_PREAUTHKEY
Base23 Docker registry login
docker login -u gitlab+deploy-token-5 registry.git.base23.de
CrowdSec
Setup CrowdSec Repo
apt update \
&& apt upgrade -y \
&& apt install -y debian-archive-keyring \
&& apt install -y curl gnupg apt-transport-https \
&& mkdir -p /etc/apt/keyrings/ \
&& curl -fsSL https://packagecloud.io/crowdsec/crowdsec/gpgkey | gpg --dearmor > /etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg \
&& cat << EOF > /etc/apt/sources.list.d/crowdsec_crowdsec.list \
&& apt update
deb [signed-by=/etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg] https://packagecloud.io/crowdsec/crowdsec/any any main
deb-src [signed-by=/etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg] https://packagecloud.io/crowdsec/crowdsec/any any main
EOF
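A note on the pattern above: the shell reads the heredoc body only after parsing the whole command line, so `apt update` is part of the `&&` chain and runs once the file has been written. A self-contained sketch (hypothetical /tmp path and repo lines):

```shell
# The && echo belongs to the command line; the heredoc body follows it.
# cat finishes writing the file first, then the echo reports on it.
cat << EOF > /tmp/demo.list && echo "wrote $(wc -l < /tmp/demo.list) lines"
deb https://example.org/repo any main
deb-src https://example.org/repo any main
EOF
```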
Install CrowdSec
printf "Enter CrowdSec enrollment key: " \
&& read -rs CROWDSEC_ENROLL_KEY \
&& apt install -y crowdsec crowdsec-firewall-bouncer-iptables \
&& cscli completion bash | tee /etc/bash_completion.d/cscli \
&& source ~/.bashrc \
&& cscli console enroll -e context ${CROWDSEC_ENROLL_KEY} \
&& unset CROWDSEC_ENROLL_KEY
Restart the CrowdSec service after accepting the enrollment in the CrowdSec console:
systemctl restart crowdsec; systemctl status crowdsec.service
Configure CrowdSec
Whitelist Tailscale IPs:
cat << EOF > /etc/crowdsec/parsers/s02-enrich/01-base23-tailscale.yaml \
&& systemctl restart crowdsec; journalctl -xef -u crowdsec.service
name: base23/tailscale ## Must be unique
description: "Whitelist events from Tailscale Subnet"
whitelist:
reason: "Tailscale clients"
cidr:
- "100.64.0.0/10"
EOF
Whitelist our current Public IPs:
mkdir -p /etc/crowdsec/postoverflows/s01-whitelist/ \
&& cat << EOF > /etc/crowdsec/postoverflows/s01-whitelist/01-base23-public-ips.yaml \
&& crowdsec -t && systemctl restart crowdsec; systemctl status crowdsec.service
name: base23/public-ips ## Must be unique
description: "Whitelist events from base23 public IPs"
whitelist:
reason: "Base23 Public IPs"
expression:
- evt.Overflow.Alert.Source.IP in LookupHost("asterix.ddns.base23.de")
EOF
Add Authentik integration:
cscli collections install firix/authentik \
&& cat << EOF > /etc/crowdsec/acquis.d/authentik.yaml \
&& crowdsec -t && systemctl restart crowdsec
---
source: docker
container_name_regexp:
- sso-base23-de-server-.*
- sso-base23-de-worker-.*
labels:
type: authentik
EOF
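CrowdSec's `container_name_regexp` entries are regular expressions matched anywhere in the container name (per the docker acquisition datasource), so the patterns above effectively act as prefix matches. A grep illustration of the unanchored match (the container names are examples):

```shell
# Unanchored regex: counts containers whose name contains the prefix
printf '%s\n' 'sso-base23-de-server-1' 'unrelated-container' \
| grep -cE 'sso-base23-de-server-.*'
```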
Enable increasing ban time:
sed -i -e 's/^#duration_expr/duration_expr/g' /etc/crowdsec/profiles.yaml \
&& crowdsec -t && systemctl restart crowdsec
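The sed above uncomments the `duration_expr` lines in profiles.yaml so repeat offenders get progressively longer bans. Its effect can be checked on a scratch copy first (the file content below is a simplified, hypothetical excerpt, not the real profiles.yaml):

```shell
# Hypothetical excerpt; the real profiles.yaml ships with CrowdSec
printf '%s\n' 'name: default_ip_remediation' '#duration_expr: "..."' > /tmp/profiles-demo.yaml
sed -i -e 's/^#duration_expr/duration_expr/g' /tmp/profiles-demo.yaml
# Count the now-uncommented lines
grep -c '^duration_expr' /tmp/profiles-demo.yaml
```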
Setup notifications:
Installation
Clone & configure initially
- Create a Storage Box sub-account.
- Enter the username into env.template.
- Run the initial configuration script:
cd /root/apps \
&& git clone ssh://git@git.base23.de:222/base23/sso.base23.de.git \
&& cd sso.base23.de \
&& ./scripts/init.sh
- Use the generated SSH key and copy it to the Hetzner Storage box for backups:
TARGET_DOMAIN=cloud.backup.base23.de \
TARGET_KEY_TYPES="ecdsa-sha2-nistp521,ed25519,ed25519-sk,rsa,dsa,ecdsa,ecdsa-sk" \
TARGET_IPV4=$(dig +short "${TARGET_DOMAIN}" A | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$') \
TARGET_IPV6=$(dig +short "${TARGET_DOMAIN}" AAAA | grep -E '^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))$') \
&& cat ./data/restic/ssh/id_ed25519.pub | ssh -p23 u291924-sub4@${TARGET_DOMAIN} install-ssh-key \
&& ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_DOMAIN} > ./data/restic/ssh/known_hosts \
&& ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_IPV4} >> ./data/restic/ssh/known_hosts \
&& ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_IPV6} >> ./data/restic/ssh/known_hosts \
&& ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_DOMAIN} >> ./data/restic/ssh/known_hosts \
&& ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_IPV4} >> ./data/restic/ssh/known_hosts \
&& ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_IPV6} >> ./data/restic/ssh/known_hosts
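One detail worth noting above: `dig +short` can print intermediate CNAME targets in addition to the final address, which is why the A lookup is piped through grep. A minimal illustration with canned output (203.0.113.7 is a documentation address, not a real record):

```shell
# Simulated `dig +short` output: a CNAME target line plus the final A record;
# the filter keeps only dotted-quad lines
printf '%s\n' 'storage.example.net.' '203.0.113.7' \
| grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'
```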
First run
docker compose build --no-cache \
--build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
--build-arg SRC_REV=$(git rev-parse --short HEAD) \
&& docker compose up -d; docker compose logs -f
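The two --build-arg values stamp the image with an ISO 8601 UTC timestamp and the short git revision. The timestamp format can be checked in isolation:

```shell
# Same UTC timestamp format as the build above; grep -c prints 1 when it matches
BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ')
echo "$BUILD_DATE" | grep -cE '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z$'
```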
Upgrade
Test
This is intended for testing the upgrades before rollout on prod.
- Check if the backups are up to date:
docker compose run --rm restore-cli /usr/local/bin/restic snapshots
- Update AUTHENTIK_TAG to the desired tag in env.test.template.
- Check the upstream docker-compose.yml file against ours for changes made in the configuration. Check the PostgreSQL and Redis Docker tags. Minor revisions of PostgreSQL should be fine; check the changelogs for any issues and, if none are present, raise to the latest minor version (e.g. 16.6 -> 16.9). Redis should be less problematic for upgrades, but check nonetheless.
- Run diff --color='auto' env.test.template .env to display the diff between env.test.template and .env.
- Port the changes made to .env.
docker-compose-2.32.4 pull
docker-compose-2.32.4 down
docker-compose-2.32.4 up -d; docker-compose-2.32.4 logs -f
- Check the logs for any issues during startup. Check if https://sso.test.base23.de is available and test the SSO login (e.g. https://whoami.test.base23.de).
- Apply the changes for test to the prod files (docker-compose.<stage>.yml, env.<stage>.template), commit & push the changes to the repo in a new branch, and create a merge request in preparation for the prod upgrade.
Prod
It is expected that the upgrade has already been performed and tested on https://sso.test.base23.de, and that the changes have been merged into main.
- Perform the Upgrade on test first.
- Check if the backups are up to date:
docker compose run --rm restore-cli /usr/local/bin/restic snapshots
- Perform a git pull
- Run diff --color='auto' env.prod.template .env to display the diff between env.prod.template and .env.
- Run diff --color='auto' docker-compose.prod.yml docker-compose.yml to display the diff between docker-compose.prod.yml and docker-compose.yml.
- Port the changes made to .env and docker-compose.yml.
docker compose pull
docker compose down
docker compose up -d; docker compose logs -f
- Check the logs for any issues during startup. Check if https://sso.base23.de is available and test the SSO login (e.g. https://vpn.base23.de/admin).
Disaster recovery / restore
IMPORTANT: You have to use different Docker CLI clients on prod and test.
- Prod
  - Docker: docker
  - Docker Compose: docker compose
- Test
  - Docker: docker
  - Docker Compose: docker-compose-2.32.4
For ease of readability I'll use docker and docker compose in the documentation below; please replace them for restores on test!
- Run the restore cli
docker compose run --rm restore-cli
- Run the restore command and follow its instructions
restore
- If the restore was successful, exit the restore container. DO NOT START THE APPLICATION YET!
- Run the PostgreSQL container without starting the main application
docker compose run --rm postgresql
- Open another shell in the sso git directory.
- Execute a shell in the running PostgreSQL container (replace <containerid> with the actual container id)
docker exec -it sso-base23-de-postgresql-run-<containerid> bash
- If the database already contains data, delete and recreate it:
dropdb -U ${PG_USER:-authentik} ${PG_DB:-authentik} \
&& createdb -U ${PG_USER:-authentik} ${PG_DB:-authentik}
- Restore the database
psql -U ${PG_USER:-authentik} -d ${PG_DB:-authentik} -f /var/lib/postgresql/backups/authentik.sql
- After the database is restored, exit the container
- Now it's safe to start the complete application stack again
docker compose up -d; docker compose logs -f
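The `${PG_USER:-authentik}` / `${PG_DB:-authentik}` expansions used in the restore commands fall back to `authentik` whenever the variables are unset or empty in the container. A quick illustration of the shell default-value expansion:

```shell
# ${VAR:-default} substitutes the default when VAR is unset or empty
unset PG_USER
echo "${PG_USER:-authentik}"   # falls back to the default
PG_USER=custom
echo "${PG_USER:-authentik}"   # uses the set value
```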
Rebuild containers locally
docker compose build --no-cache \
--build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
--build-arg SRC_REV=$(git rev-parse --short HEAD)