# `sso.s1q.dev` - Base23 SSO for all services

[Authentik](https://goauthentik.io/) based SSO for our services.

## Table of Contents

- [`sso.s1q.dev` - Base23 SSO for all services](#ssos1qdev---base23-sso-for-all-services)
  - [Table of Contents](#table-of-contents)
  - [Prerequisites](#prerequisites)
    - [Tailscale](#tailscale)
    - [CrowdSec](#crowdsec)
      - [Setup CrowdSec Repo](#setup-crowdsec-repo)
      - [Install CrowdSec](#install-crowdsec)
      - [Configure CrowdSec](#configure-crowdsec)
  - [Installation](#installation)
    - [Clone \& configure initially](#clone--configure-initially)
    - [First run](#first-run)
  - [Upgrade](#upgrade)
    - [Test](#test)
    - [Prod](#prod)
  - [Disaster recovery / restore](#disaster-recovery--restore)
  - [Rebuild containers locally](#rebuild-containers-locally)

## Prerequisites

- dokploy

### Tailscale

```shell
printf "Enter preauthkey for Tailscale: " \
  && read -rs TAILSCALE_PREAUTHKEY \
  && curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null \
  && curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list \
  && apt-get update \
  && apt-get install tailscale \
  && tailscale up --login-server https://vpn.s1q.dev --authkey ${TAILSCALE_PREAUTHKEY} \
  && sleep 2 \
  && tailscale status \
  && unset TAILSCALE_PREAUTHKEY
```

### CrowdSec

#### Setup CrowdSec Repo

```shell
apt update \
  && apt upgrade -y \
  && apt install -y debian-archive-keyring \
  && apt install -y curl gnupg apt-transport-https \
  && mkdir -p /etc/apt/keyrings/ \
  && curl -fsSL https://packagecloud.io/crowdsec/crowdsec/gpgkey | gpg --dearmor > /etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg \
  && cat << EOF > /etc/apt/sources.list.d/crowdsec_crowdsec.list \
  && apt update
deb [signed-by=/etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg] https://packagecloud.io/crowdsec/crowdsec/any any main
deb-src [signed-by=/etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg] https://packagecloud.io/crowdsec/crowdsec/any any main
EOF
```

#### Install CrowdSec

Install CrowdSec:

```shell
printf "Enter CrowdSec context: " \
  && read -rs CROWDSEC_CONTEXT \
  && apt install -y crowdsec crowdsec-firewall-bouncer-iptables \
  && cscli completion bash | tee /etc/bash_completion.d/cscli \
  && source ~/.bashrc \
  && cscli console enroll -e context ${CROWDSEC_CONTEXT} \
  && unset CROWDSEC_CONTEXT
```

Restart the CrowdSec service after accepting the enrollment on the [CrowdSec Console](https://app.crowdsec.net/):

```shell
systemctl restart crowdsec; systemctl status crowdsec.service
```

#### Configure CrowdSec

Whitelist Tailscale IPs:

```shell
cat << EOF > /etc/crowdsec/parsers/s02-enrich/01-s1q-dev-tailscale.yaml \
  && systemctl restart crowdsec; journalctl -xef -u crowdsec.service
name: s1q-dev/tailscale ## Must be unique
description: "Whitelist events from Tailscale Subnet"
whitelist:
  reason: "Tailscale clients"
  cidr:
    - "100.64.0.0/10"
EOF
```

Whitelist my current public IPs:

```shell
mkdir -p /etc/crowdsec/postoverflows/s01-whitelist/ \
  && cat << EOF > /etc/crowdsec/postoverflows/s01-whitelist/01-s1q-dev-public-ips.yaml \
  && crowdsec -t && systemctl restart crowdsec; systemctl status crowdsec.service
name: s1q-dev/public-ips ## Must be unique
description: "Whitelist events from s1q-dev public IPs"
whitelist:
  reason: "s1q-dev Public IPs"
  expression:
    - evt.Overflow.Alert.Source.IP in LookupHost("r3w.de")
EOF
```

Add the Authentik integration:

```shell
cscli collections install firix/authentik \
  && cat << EOF > /etc/crowdsec/acquis.d/authentik.yaml \
  && crowdsec -t && systemctl restart crowdsec
---
source: docker
container_name_regexp:
  - sso-s1q-dev-de-server-*
  - sso-s1q-dev-de-worker-*
labels:
  type: authentik
EOF
```

Enable increasing ban time:

```shell
sed -i -e 's/^#duration_expr/duration_expr/g' /etc/crowdsec/profiles.yaml \
  && crowdsec -t && systemctl restart crowdsec
```

Setup notifications:
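The repo-setup and whitelist blocks above all rely on the same `cat << EOF > file && <next command>` chaining: the shell only starts reading the heredoc body after it has parsed the complete (continued) command line, so the body may follow the trailing `&& apt update` or `&& systemctl restart`. A minimal standalone sketch of the pattern (the temporary file and `echo` are placeholders):

```shell
# Minimal sketch of the heredoc-plus-chaining pattern used above.
# The shell parses the whole command list first, then reads the heredoc
# body, so `&& echo "written"` runs after the file has been created.
tmpfile=$(mktemp)
cat << EOF > "${tmpfile}" \
  && echo "written"
first line
second line
EOF
cat "${tmpfile}"
rm -f "${tmpfile}"
```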
## Installation

### Clone & configure initially

1. [Create a Storage Box sub account](https://confluence.base23.de/pages/viewpage.action?pageId=27820074).
2. Enter the username in `env.template`.
3. Run the initial configuration script:

   ```shell
   cd /root/apps \
     && git clone ssh://git@git.base23.de:222/base23/sso.s1q.dev.git \
     && cd sso.s1q.dev \
     && ./scripts/init.sh
   ```

4. Copy the generated SSH key to the Hetzner Storage Box for backups:

   ```shell
   TARGET_DOMAIN=cloud.backup.base23.de \
   TARGET_KEY_TYPES="ecdsa-sha2-nistp521,ed25519,ed25519-sk,rsa,dsa,ecdsa,ecdsa-sk" \
   TARGET_IPV4=$(dig +short "${TARGET_DOMAIN}" A | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$') \
   TARGET_IPV6=$(dig +short "${TARGET_DOMAIN}" AAAA | grep -E '^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))$') \
   && cat ./data/restic/ssh/id_ed25519.pub | ssh -p23 u291924-sub4@${TARGET_DOMAIN} install-ssh-key \
   && ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_DOMAIN} > ./data/restic/ssh/known_hosts \
   && ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_IPV4} >> ./data/restic/ssh/known_hosts \
   && ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_IPV6} >> ./data/restic/ssh/known_hosts \
   && ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_DOMAIN} >> ./data/restic/ssh/known_hosts \
   && ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_IPV4} >> ./data/restic/ssh/known_hosts \
   && ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_IPV6} >> ./data/restic/ssh/known_hosts
   ```

### First run

```shell
./scripts/compose.sh build --no-cache \
  --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
  --build-arg SRC_REV=$(git rev-parse --short HEAD) \
  && ./scripts/compose.sh up -d; ./scripts/compose.sh logs -f
```

## Upgrade

### Test

This is intended for testing upgrades before the rollout on prod.

1. Check if the backups are up to date: `./scripts/compose.sh run --rm restore-cli /usr/local/bin/restic snapshots`
2. Create a new branch: `git checkout -b `.
3. Download the docker compose file for the version you want to update to: `curl -fsSL -o docker-compose.yml https://goauthentik.io/version//docker-compose.yml`
4. Update `AUTHENTIK_TAG` to the desired tag in `env.test.template`.
5. Check the upstream `docker-compose.yml` file against ours for changes made in the configuration. Check the PostgreSQL and Redis docker tags. Minor revisions of PostgreSQL *should be fine*; check the changelogs for any issues and, if none are present, raise to the latest minor version (e.g. 16.6 -> 16.9). Redis should be less problematic for upgrades, but check nonetheless.
6. Run `diff --color='auto' env.test.template .env` to display the diff between `env.test.template` and `.env`.
7. Port the changes to `.env`.
8. `./scripts/compose.sh pull`
9. `./scripts/compose.sh down`
10. `./scripts/compose.sh up -d; ./scripts/compose.sh logs -f`
11. Check the logs for any issues during startup. Check if https://sso.test.base23.de is available and test the SSO login (e.g. https://whoami.test.base23.de).
12. Apply the changes for test to the prod files (`docker-compose..yml`, `env..template`), commit & push the changes to the repo in a new branch, and create a merge request in preparation for the prod upgrade.

### Prod

It is expected that the upgrade has already been performed and tested on https://sso.test.base23.de, and that the changes have been merged into main.
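Since the prod steps assume a merged, up-to-date checkout, a pre-flight sketch like the following can verify that before starting (this is not part of the documented procedure, and `main` as the branch name is an assumption):

```shell
# Hypothetical pre-flight check before the prod upgrade: warn if the
# checkout is not on main or the working tree has local changes.
branch=$(git rev-parse --abbrev-ref HEAD)
[ "${branch}" = "main" ] || echo "WARNING: not on main (on ${branch})"
[ -z "$(git status --porcelain)" ] || echo "WARNING: uncommitted local changes"
```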
1. Check if the backups are up to date: `./scripts/compose.sh run --rm restore-cli /usr/local/bin/restic snapshots`
2. Create a new branch: `git checkout -b `.
3. Download the docker compose file for the version you want to update to: `curl -fsSL -o docker-compose.yml https://goauthentik.io/version//docker-compose.yml`
4. Update `AUTHENTIK_TAG` to the desired tag in `env.prod.template`.
5. Commit & push the changes to the repo.
6. Run `diff --color='auto' env.prod.template .env` to display the diff between `env.prod.template` and `.env`.
7. Port the changes to `.env`.
8. `./scripts/compose.sh pull`
9. `./scripts/compose.sh down`
10. `./scripts/compose.sh up -d; ./scripts/compose.sh logs -f`

## Disaster recovery / restore

**IMPORTANT:** You have to use different Docker CLI clients on prod and test.

- Prod
  - Docker: `docker`
  - Docker compose: `docker compose`
- Test
  - Docker: `docker`
  - Docker compose: `docker-compose-2.32.4`

For ease of readability, the documentation below uses `docker` and `docker compose`; please replace them accordingly for restores on test!

1. Run the restore CLI:

   ```shell
   docker compose run --rm restore-cli
   ```

2. Run the restore command and follow its instructions:

   ```shell
   restore
   ```

3. If the restore was successful, exit the restore container. **DO NOT START THE APPLICATION YET!**
4. Run the PostgreSQL container without starting the main application:

   ```shell
   docker compose run --rm postgresql
   ```

5. Open another shell in the sso git directory.
6. Execute a shell in the running PostgreSQL container (replace `` with the actual container id):

   ```shell
   docker exec -it sso-base23-de-postgresql-run- bash
   ```

7. If the database already contains data, delete and recreate it:

   ```shell
   dropdb -U ${PG_USER:-authentik} ${PG_DB:-authentik}
   createdb -U ${PG_USER:-authentik} ${PG_DB:-authentik}
   ```

8. Restore the database:

   ```shell
   psql -U ${PG_USER:-authentik} -d ${PG_DB:-authentik} -f /var/lib/postgresql/backups/authentik.sql
   ```
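   The `${PG_USER:-authentik}` / `${PG_DB:-authentik}` expansions above fall back to `authentik` whenever the variables are unset or empty. A quick standalone illustration:

   ```shell
   # ${VAR:-default}: substitute "default" when VAR is unset or empty.
   unset PG_USER
   echo "${PG_USER:-authentik}"   # -> authentik
   PG_USER=custom
   echo "${PG_USER:-authentik}"   # -> custom
   ```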
9. After the database is restored, exit the container.
10. Now it is safe to start the complete application stack again:

    ```shell
    docker compose up -d; docker compose logs -f
    ```

## Rebuild containers locally

```shell
docker compose build --no-cache \
  --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
  --build-arg SRC_REV=$(git rev-parse --short HEAD)
```
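The two `--build-arg` values used for builds can be inspected on their own: `BUILD_DATE` is an ISO-8601 UTC timestamp and `SRC_REV` the abbreviated commit hash of the current checkout. A standalone sketch of the timestamp:

```shell
# Standalone view of the BUILD_DATE build arg passed to the build commands.
BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ')
echo "${BUILD_DATE}"   # e.g. 2025-01-01T12:00:00Z
# Sanity check: the timestamp matches the expected ISO-8601 UTC shape.
echo "${BUILD_DATE}" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z$' \
  && echo "format ok"
```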