# `sso.base23.de` - Base23 SSO for all services

[Authentik](https://goauthentik.io/)-based SSO for our services.

## Table of Contents

- [`sso.base23.de` - Base23 SSO for all services](#ssobase23de---base23-sso-for-all-services)
  - [Table of Contents](#table-of-contents)
  - [Prerequisites](#prerequisites)
    - [Server Setup](#server-setup)
    - [Tailscale](#tailscale)
    - [Base23 Docker registry login](#base23-docker-registry-login)
    - [CrowdSec](#crowdsec)
      - [Setup CrowdSec Repo](#setup-crowdsec-repo)
      - [Install CrowdSec](#install-crowdsec)
      - [Configure CrowdSec](#configure-crowdsec)
  - [Installation](#installation)
    - [Clone \& configure initially](#clone--configure-initially)
    - [First run](#first-run)
  - [Upgrade](#upgrade)
    - [Test](#test)
    - [Prod](#prod)
  - [Disaster recovery / restore](#disaster-recovery--restore)
  - [Rebuild containers locally](#rebuild-containers-locally)

## Prerequisites

### Server Setup

```shell
apt update \
  && apt upgrade -y \
  && for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do apt remove -y $pkg; done \
  && apt install -y ca-certificates curl \
  && install -m 0755 -d /etc/apt/keyrings \
  && curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc \
  && chmod a+r /etc/apt/keyrings/docker.asc \
  && echo \
    "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
    $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
    tee /etc/apt/sources.list.d/docker.list > /dev/null \
  && apt update \
  && apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin \
  && mkdir -p /var/lib/apps \
  && ln -s /var/lib/apps \
  && apt install -y git vim \
  && TEMP_DIR=$(mktemp -d) \
  && curl -fsSL https://github.com/go-acme/lego/releases/download/v4.20.2/lego_v4.20.2_linux_amd64.tar.gz -o ${TEMP_DIR}/lego_v4.20.2_linux_amd64.tar.gz \
  && tar xzvf ${TEMP_DIR}/lego_v4.20.2_linux_amd64.tar.gz --directory=${TEMP_DIR} \
  && install -m 755 -o root -g root "${TEMP_DIR}/lego" "/usr/local/bin" \
  && rm -rf ${TEMP_DIR} \
  && unset TEMP_DIR
```

### Tailscale

```shell
printf "Enter preauthkey for Tailscale: " \
  && read -rs TAILSCALE_PREAUTHKEY \
  && curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.noarmor.gpg | tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null \
  && curl -fsSL https://pkgs.tailscale.com/stable/debian/bookworm.tailscale-keyring.list | tee /etc/apt/sources.list.d/tailscale.list \
  && apt-get update \
  && apt-get install -y tailscale \
  && tailscale up --login-server https://vpn.base23.de --authkey ${TAILSCALE_PREAUTHKEY} --advertise-tags=tag:prod-servers \
  && sleep 2 \
  && tailscale status \
  && unset TAILSCALE_PREAUTHKEY
```

### Base23 Docker registry login

```shell
docker login -u gitlab+deploy-token-5 registry.git.base23.de
```

### CrowdSec

#### Setup CrowdSec Repo

Note that the heredoc body follows the complete `&&` chain, so the repo file is written before the final `apt update` runs:

```shell
apt update \
  && apt upgrade -y \
  && apt install -y debian-archive-keyring \
  && apt install -y curl gnupg apt-transport-https \
  && mkdir -p /etc/apt/keyrings/ \
  && curl -fsSL https://packagecloud.io/crowdsec/crowdsec/gpgkey | gpg --dearmor > /etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg \
  && cat << EOF > /etc/apt/sources.list.d/crowdsec_crowdsec.list \
  && apt update
deb [signed-by=/etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg] https://packagecloud.io/crowdsec/crowdsec/any any main
deb-src [signed-by=/etc/apt/keyrings/crowdsec_crowdsec-archive-keyring.gpg] https://packagecloud.io/crowdsec/crowdsec/any any main
EOF
```

#### Install CrowdSec

Install CrowdSec:

```shell
printf "Enter CrowdSec context: " \
  && read -rs CROWDSEC_CONTEXT \
  && apt install -y crowdsec crowdsec-firewall-bouncer-iptables \
  && cscli completion bash | tee /etc/bash_completion.d/cscli \
  && source ~/.bashrc \
  && cscli console enroll -e context ${CROWDSEC_CONTEXT} \
  && unset CROWDSEC_CONTEXT
```

Restart the CrowdSec service after accepting the enrollment in the [CrowdSec Console](https://app.crowdsec.net/):

```shell
systemctl restart crowdsec; systemctl status crowdsec.service
```

#### Configure CrowdSec

Whitelist Tailscale IPs:

```shell
cat << EOF > /etc/crowdsec/parsers/s02-enrich/01-base23-tailscale.yaml \
  && systemctl restart crowdsec; journalctl -xef -u crowdsec.service
name: base23/tailscale ## Must be unique
description: "Whitelist events from Tailscale Subnet"
whitelist:
  reason: "Tailscale clients"
  cidr:
    - "100.64.0.0/10"
EOF
```

Whitelist our current public IPs:

```shell
mkdir -p /etc/crowdsec/postoverflows/s01-whitelist/ \
  && cat << EOF > /etc/crowdsec/postoverflows/s01-whitelist/01-base23-public-ips.yaml \
  && crowdsec -t && systemctl restart crowdsec; systemctl status crowdsec.service
name: base23/public-ips ## Must be unique
description: "Whitelist events from base23 public IPs"
whitelist:
  reason: "Base23 Public IPs"
  expression:
    - evt.Overflow.Alert.Source.IP in LookupHost("asterix.ddns.base23.de")
EOF
```

Add the Authentik integration:

```shell
cscli collections install firix/authentik \
  && cat << EOF > /etc/crowdsec/acquis.d/authentik.yaml \
  && crowdsec -t && systemctl restart crowdsec
---
source: docker
container_name_regexp:
  - sso-base23-de-server-*
  - sso-base23-de-worker-*
labels:
  type: authentik
EOF
```

Enable increasing ban time:

```shell
sed -i -e 's/^#duration_expr/duration_expr/g' /etc/crowdsec/profiles.yaml \
  && crowdsec -t && systemctl restart crowdsec
```

Setup notifications:

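The notification setup itself is not spelled out here. As a sketch only — the webhook URL is a placeholder, `http_default` is just the conventional name of the stock HTTP plugin config, and the final step still requires uncommenting the `notifications:` block in `/etc/crowdsec/profiles.yaml` — an HTTP-based notification could be wired up like this:

```shell
# Sketch only: the webhook URL is a placeholder, replace it before use
cat << EOF > /etc/crowdsec/notifications/http.yaml \
  && crowdsec -t && systemctl restart crowdsec
type: http          # stock CrowdSec HTTP notification plugin
name: http_default  # must match the name referenced under 'notifications:' in profiles.yaml
log_level: info
url: https://chat.example.invalid/hooks/REPLACE_ME
method: POST
headers:
  Content-Type: application/json
EOF
```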
## Installation

### Clone & configure initially

1. [Create a Storage Box sub account](https://confluence.base23.de/pages/viewpage.action?pageId=27820074).
2. Enter the username into `env.template`.
3. Run the initial configuration script:

   ```shell
   cd /root/apps \
     && git clone ssh://git@git.base23.de:222/base23/sso.base23.de.git \
     && cd sso.base23.de \
     && ./scripts/init.sh
   ```

4. Copy the generated SSH key to the Hetzner Storage Box for backups:

   ```shell
   TARGET_DOMAIN=cloud.backup.base23.de \
   TARGET_KEY_TYPES="ecdsa-sha2-nistp521,ed25519,ed25519-sk,rsa,dsa,ecdsa,ecdsa-sk" \
   TARGET_IPV4=$(dig +short "${TARGET_DOMAIN}" A | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$') \
   TARGET_IPV6=$(dig +short "${TARGET_DOMAIN}" AAAA | grep -E '^(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))$') \
     && cat ./data/restic/ssh/id_ed25519.pub | ssh -p23 u291924-sub4@${TARGET_DOMAIN} install-ssh-key \
     && ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_DOMAIN} > ./data/restic/ssh/known_hosts \
     && ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_IPV4} >> ./data/restic/ssh/known_hosts \
     && ssh-keyscan -p 23 -t ${TARGET_KEY_TYPES} ${TARGET_IPV6} >> ./data/restic/ssh/known_hosts \
     && ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_DOMAIN} >> ./data/restic/ssh/known_hosts \
     && ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_IPV4} >> ./data/restic/ssh/known_hosts \
     && ssh-keyscan -p 22 -t ${TARGET_KEY_TYPES} ${TARGET_IPV6} >> ./data/restic/ssh/known_hosts
   ```
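The `grep -E` filters in the block above simply keep well-formed literal addresses from `dig`'s output, which may also contain CNAME lines. A quick local check of the IPv4 filter, using made-up sample output:

```shell
# Sample dig output: a CNAME line followed by an A record (addresses are made up)
printf '%s\n' 'cname.example.com.' '203.0.113.7' \
  | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'
# -> prints only: 203.0.113.7
```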
### First run

```shell
docker compose build --no-cache \
  --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
  --build-arg SRC_REV=$(git rev-parse --short HEAD) \
  && docker compose up -d; docker compose logs -f
```

## Upgrade

### Test

This is intended for testing upgrades before the rollout on prod.

1. Check that the backups are up to date: `docker compose run --rm restore-cli /usr/local/bin/restic snapshots`
2. Update `AUTHENTIK_TAG` to the desired tag in `env.test.template`.
3. Check the upstream `docker-compose.yml` file against ours for configuration changes. Check the PostgreSQL and Redis Docker tags. Minor PostgreSQL revisions *should be fine*: check the changelogs for any issues and, if none are present, raise to the latest minor version (e.g. 16.6 -> 16.9). Redis upgrades should be less problematic, but check nonetheless.
4. Run `diff --color='auto' env.test.template .env` to display the diff between `env.test.template` and `.env`.
5. Port the changes to `.env`.
6. `docker-compose-2.32.4 pull`
7. `docker-compose-2.32.4 down`
8. `docker-compose-2.32.4 up -d; docker-compose-2.32.4 logs -f`
9. Check the logs for any issues during startup. Check that https://sso.test.base23.de is available and test the SSO login (e.g. https://vpn-test.base23.de/admin).
10. Apply the changes for test to the prod files (`docker-compose.<stage>.yml`, `env.<stage>.template`), commit & push the changes to the repo in a new branch, and create a merge request in preparation for the prod upgrade.

### Prod

It is expected that the upgrade has already been performed and tested on https://sso.test.base23.de, and that the changes have been merged into `main`.

1. Perform the upgrade on test first.
2. Check that the backups are up to date: `docker compose run --rm restore-cli /usr/local/bin/restic snapshots`
3. Run `git pull`.
4. Run `diff --color='auto' env.prod.template .env` to display the diff between `env.prod.template` and `.env`.
5. Run `diff --color='auto' docker-compose.prod.yml docker-compose.yml` to display the diff between `docker-compose.prod.yml` and `docker-compose.yml`.
6. Port the changes to `.env` and `docker-compose.yml`.
7. `docker compose pull`
8. `docker compose down`
9. `docker compose up -d; docker compose logs -f`
10. Check the logs for any issues during startup. Check that https://sso.base23.de is available and test the SSO login (e.g. https://vpn.base23.de/admin).

## Disaster recovery / restore

**IMPORTANT:**
You have to use different Docker CLI clients on prod/test.

- Prod
  - Docker: `docker`
  - Docker Compose: `docker compose`
- Test
  - Docker: `docker`
  - Docker Compose: `docker-compose-2.32.4`

For ease of readability, the documentation below uses `docker` and `docker compose`;
please replace them accordingly for restores on test!

1. Run the restore CLI:

   ```shell
   docker compose run --rm restore-cli
   ```

2. Run the restore command and follow its instructions:

   ```shell
   restore
   ```

3. If the restore was successful, exit the restore container.
   **DO NOT START THE APPLICATION YET!**
4. Run the PostgreSQL container without starting the main application:

   ```shell
   docker compose run --rm postgresql
   ```

5. Open another shell in the sso git directory.
6. Execute a shell in the running PostgreSQL container (replace `<containerid>` with the actual container id):

   ```shell
   docker exec -it sso-base23-de-postgresql-run-<containerid> bash
   ```

7. If the database already contains data, drop and recreate it:

   ```shell
   dropdb -U ${PG_USER:-authentik} ${PG_DB:-authentik}
   createdb -U ${PG_USER:-authentik} ${PG_DB:-authentik}
   ```

8. Restore the database:

   ```shell
   psql -U ${PG_USER:-authentik} -d ${PG_DB:-authentik} -f /var/lib/postgresql/backups/authentik.sql
   ```

9. After the database is restored, exit the container.
10. Now it is safe to start the complete application stack again:

    ```shell
    docker compose up -d; docker compose logs -f
    ```

## Rebuild containers locally

```shell
docker compose build --no-cache \
  --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') \
  --build-arg SRC_REV=$(git rev-parse --short HEAD)
```