Home Server
Table of Contents
- Hardware
- Installation
  - Ubuntu
  - Install Important software
  - Terminal Problem
  - Minor Modifications of ~/.inputrc
  - Partition and Format Disk Drives
  - MergerFS and FStab
  - Automating with SnapRAID Runner
  - Install Docker
  - Executing the Docker Command Without Sudo
  - Install Docker-Compose
  - Setup Docker Networks
  - Change Timezone
  - Secure the Web Server
  - Automatic Security Updates
  - Setup cronjobs
  - Run docker-compose
- Maintenance - How To
- Docker-Compose
  - Networks
  - Logging
  - traefik - Application proxy
  - homer - Home page
  - snapraid - Manage local backup with parity disk
  - portainer - Manage docker
  - wireguard - VPN
  - gitea - Git server
  - caddy - Research Pages
  - caddy - Dotfiles
  - nginx - Root
  - hugo - Wiki + Blog
  - syncthing - File Synchronization
  - miniflux - RSS reader
  - homeassistant - Home Automation
  - jellyfin - Media server
  - filebrowser - Web file browser
  - scrutiny - Hard drive monitoring
  - transmission - Torrent server
  - aria2 - Download daemon
  - aria2-ui - Download web UI
  - linkding - Bookmark manager
  - radicale - CalDAV/CardDAV server (link)
  - restic - Automatic online backups
  - octoprint - Web interface for 3D printing
  - adguardhome - Network-wide ad blocking
  - mealie - Recipe Manager
  - diun - Notification for Docker image updates
  - .env - Variables used for Docker Compose
- Cron Jobs
Hardware
| Part | Model |
|---|---|
| Case | Fractal Design Node 804 |
| Motherboard | ASUS PRIME B450M-A |
| CPU | AMD Ryzen 3 3200G |
| RAM | Corsair Vengeance LPX 16GB (2x8GB) DDR4 3200MHz |
| Cooler | ARCTIC Freezer 34 eSports DUO |
| PSU | Corsair SF450 |
| SSD M.2 | Samsung 970 EVO Plus 250GB |
| Disk Drives | Various drives ranging from 3TB to 8TB |
Installation
Ubuntu
- Download Ubuntu Server 20.04 LTS (link).
- Activate OpenSSH and add SSH keys
- Account: `thomas`, hostname: `homelab`
Install Important software
sudo apt install neovim tmux fd-find ripgrep apache2-utils unrar ranger fzf stow
Terminal Problem
On the local host, using Termite:
```
infocmp > termite.terminfo                 # export Termite's terminfo
scp termite.terminfo user@remote-host:~/   # or any other method to copy it to the remote host
```
On the remote host, in the directory where you copied `termite.terminfo`:
```
tic -x termite.terminfo   # import the terminfo for the current user
rm termite.terminfo       # optional: remove the terminfo file
```
Minor Modifications of ~/.inputrc
Modify `~/.inputrc` like so:
```
"\e[A": history-search-backward   # arrow up
"\e[B": history-search-forward    # arrow down
```
Partition and Format Disk Drives
A nice tutorial is available here.
```
lsblk                                                             # identify the drive to format
sudo parted /dev/sda mklabel gpt                                  # create a new GPT partition table
sudo parted -a opt /dev/sda mkpart "partitionname" ext4 0% 100%   # create one partition spanning the whole drive
sudo mkfs.ext4 -L partitionname /dev/sda1                         # format it as ext4 with a label
```
MergerFS and FStab
MergerFS is a transparent layer that sits on top of the data drives providing a single mount point for reads / writes (link).
sudo apt install mergerfs
Create mount points:
```
sudo mkdir /mnt/disk0
sudo mkdir /mnt/disk1
sudo mkdir /mnt/parity
```
Create the folder where the disks will be merged:
sudo mkdir /srv/storage
Edit `/etc/fstab`:
```
/dev/disk/by-uuid/7fb7873c-83bd-4805-98ab-506e6c7b56fa /mnt/disk0  ext4 defaults 0 0
/dev/disk/by-uuid/6574b7ae-321c-4078-9793-bc41a4fa5588 /mnt/disk1  ext4 defaults 0 0
/dev/disk/by-uuid/6fcd38b9-0886-46bd-900d-cb1f170dbcee /mnt/parity ext4 defaults 0 0
/mnt/disk* /srv/storage fuse.mergerfs direct_io,defaults,allow_other,minfreespace=50G,fsname=mergerfs 0 0
```
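The `by-uuid` paths used above can be listed with the standard util-linux tools, for example:
```
lsblk -f               # show filesystems, labels and UUIDs for every drive
sudo blkid /dev/sda1   # or query a single partition
```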
Automating with SnapRAID Runner
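A nightly run can be scheduled with cron through the snapraid container defined below; a minimal sketch (the script path inside the container is an assumption, the config path matches the snapraid-runner.conf shown further down):
```
# crontab -e -- nightly snapraid-runner at 03:00 (script path inside the container is an assumption)
0 3 * * * docker exec snapraid python3 /snapraid-runner/snapraid-runner.py -c /config/snapraid-runner.conf >> /home/thomas/cron/snapraid.log 2>&1
```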
Install Docker
The procedure is well explained here.
If docker is already installed, remove it:
sudo apt remove docker
Executing the Docker Command Without Sudo
sudo usermod -aG docker ${USER}
To apply the new group membership, log out of the server and back in, or type the following:
su - ${USER}
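To verify that the group change is active:
```
docker run hello-world   # should run without sudo
```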
Install Docker-Compose
sudo apt install docker-compose
Setup Docker Networks
```
docker network create --gateway 192.168.90.1 --subnet 192.168.90.0/24 t2_proxy
docker network create docker_default
```
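The networks can then be checked with:
```
docker network ls                   # both networks should appear
docker network inspect t2_proxy     # shows the 192.168.90.0/24 subnet and the attached containers
```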
Change Timezone
sudo timedatectl set-timezone Europe/Paris
Secure the Web Server
Most of it comes from here.
- Set `PasswordAuthentication no` in `/etc/ssh/sshd_config`
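A minimal sketch of the change and how to apply it, assuming key-based login is already working (check this before disabling passwords to avoid locking yourself out):
```
# /etc/ssh/sshd_config
PasswordAuthentication no

# apply the change
sudo systemctl restart sshd
```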
Automatic Security Updates
The procedure is well explained here.
sudo apt install unattended-upgrades update-notifier-common
Edit `/etc/apt/apt.conf.d/50unattended-upgrades` and change the following lines:
```
Unattended-Upgrade::Remove-Unused-Dependencies "true";
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```
Edit `/etc/apt/apt.conf.d/20auto-upgrades`:
```
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```
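The configuration can be tested without waiting for the timer:
```
sudo unattended-upgrade --dry-run --debug   # simulate a run and show what would be upgraded
```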
Setup cronjobs
Create a folder `~/cron` with all the scripts and logs related to cron.
To edit the cron jobs, type `crontab -e` and add a line like:
*/5 * * * * /home/thomas/cron/caddy_update.sh >> /home/thomas/cron/caddy_update.log 2>&1
That will run every 5 minutes. To check how the first part of the crontab works, check this website.
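For reference, the five fields of that schedule are:
```
*/5 *  *  *  *  command
 |  |  |  |  +---- day of week (0-7, Sunday is 0 or 7)
 |  |  |  +------- month (1-12)
 |  |  +---------- day of month (1-31)
 |  +------------- hour (0-23)
 +---------------- minute (*/5 = every 5 minutes)
```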
Run docker-compose
cd ~/docker && docker-compose up -d
Maintenance - How To
Update System/Packages
sudo -- sh -c 'apt-get update; apt-get upgrade -y; apt-get dist-upgrade -y; apt-get autoremove -y; apt-get autoclean -y'
Docker Commands
- Starting a container: `docker start homeassistant`
- Stopping a container: `docker stop homeassistant`
- Restarting a container: `docker restart homeassistant`
- Listing the running containers: `docker ps` or `cd ~/docker/ && docker-compose ps`
- Viewing the logs of a container: `docker logs -f homeassistant`
- Dropping a shell into a container: `docker exec -it homeassistant /bin/bash`
- Updating a specific container: `docker-compose pull --ignore-pull-failures homeassistant`
Update All Containers
cd ~/docker/ && docker-compose pull --ignore-pull-failures && docker-compose up -d
Clean up Docker environment
This will delete stopped containers, dangling images, and unused volumes and networks.
docker system prune -f && docker image prune -f && docker volume prune -f
Add User and Password for Basic Authentication
- Go to https://www.web2generators.com/apache-tools/htpasswd-generator and type the username and password
- Alternatively, type `htpasswd -nb username mystrongpassword` in the shell
- Or use the following docker container: `docker run --rm -it httpd echo $(htpasswd -nb username-here password-here) | sed -e s/\\$/\\$\\$/g`
- Paste the output in `~/docker/shared/.htpasswd`
Snapraid
To see all files “backed up” by snapraid, use:
docker exec -ti snapraid snapraid list | fzf
In practice, snapraid is run from the docker container:
docker exec -ti snapraid snapraid fix -f <path_to_file>
The path to the file should be relative: `/srv/storage/Cloud/org/file.org` -> `Cloud/org/file.org`
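Other common operations follow the same pattern (standard snapraid sub-commands, run through the same container):
```
docker exec -ti snapraid snapraid status   # overview of the array and of the last sync
docker exec -ti snapraid snapraid sync     # update the parity after files have changed
docker exec -ti snapraid snapraid scrub    # verify part of the data against the parity
```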
Restore Online backup with restic
To list backups:
docker exec restic restic snapshots
```
ID        Time                 Host          Tags  Paths
--------------------------------------------------------------------------------
a7b98408  2020-09-03 21:18:00  4803c2af7d4e        /data/documents/manuals
088e31a4  2020-09-03 21:50:26  4803c2af7d4e        /data/documents/manuals
9cf0b480  2020-09-03 22:05:47  4803c2af7d4e        /data/documents/manuals
--------------------------------------------------------------------------------
3 snapshots
```
Force backup of folder:
docker exec restic restic backup /data/documents/manuals
```
Files:  0 new, 2 changed, 8475 unmodified
Dirs:   0 new, 2 changed,    0 unmodified
Added to the repo: 1.010 KiB
processed 8477 files, 589.800 MiB in 0:02
snapshot 9cf0b480 saved
```
Find the path to the file within the snapshot:
docker exec restic restic find file_name
Find files only for a specific snapshot:
docker exec restic restic find -s latest file_name
Restore files/folders (this overwrites the existing files/folders):
docker exec restic restic restore --include /data/documents/manuals --target / 088e31a4
You can use `latest` instead of the ID.
If we instead want to restore a copy of the file without overwriting the original, we can restore into the backup folder:
docker exec restic restic restore --include /data/documents/manuals --target /backup 088e31a4
Docker-Compose
version: "3.4"
Networks
```
networks:
  t2_proxy:
    external:
      name: t2_proxy
  backend:
    external: false
  default:
    driver: bridge
```
Logging
```
x-logging: &default-logging
  driver: "json-file"
  options:
    max-size: "200k"
    max-file: "10"
```
traefik
- Application proxy
services:
traefik: container_name: traefik image: traefik:2.2.1 restart: unless-stopped networks: t2_proxy: ipv4_address: 192.168.90.254 # You can specify a static IP security_opt: - no-new-privileges:true ports: - 80:80 - 443:443 - 8080:8080 - 8448:8448 volumes: - $CONFIGDIR/traefik2/rules:/rules - $CONFIGDIR/traefik2/acme/acme.json:/acme.json - $CONFIGDIR/traefik2/shared:/shared - $CONFIGDIR/traefik2/traefik.yaml:/etc/traefik/traefik.yaml - $CONFIGDIR/traefik2/usersfile:/usersfile - /var/log/traefik:/var/log - /var/run/docker.sock:/var/run/docker.sock:ro environment: - CF_API_EMAIL=$CLOUDFLARE_EMAIL - CF_API_KEY=$CLOUDFLARE_API_KEY labels: - "traefik.enable=true" # HTTP-to-HTTPS Redirect - "traefik.http.routers.http-catchall.entrypoints=http" - "traefik.http.routers.http-catchall.rule=HostRegexp(`{host:.+}`)" - "traefik.http.routers.http-catchall.middlewares=redirect-to-https" - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https" # HTTP Routers - "traefik.http.routers.traefik-rtr.entrypoints=https" - "traefik.http.routers.traefik-rtr.rule=Host(`traefik.$DOMAINNAME`)" - "traefik.http.routers.traefik-rtr.tls=true" # - "traefik.http.routers.traefik-rtr.tls.certresolver=dns-cloudflare" # Comment out this line after first run of traefik to force the use of wildcard certs - "traefik.http.routers.traefik-rtr.tls.domains[0].main=$DOMAINNAME" - "traefik.http.routers.traefik-rtr.tls.domains[0].sans=*.$DOMAINNAME" # Services - API - "traefik.http.routers.traefik-rtr.service=api@internal" # Middlewares - "traefik.http.routers.traefik-rtr.middlewares=middlewares-basic-auth@file" - "traefik.http.routers.traefik-rtr.middlewares=middlewares-rate-limit@file,middlewares-basic-auth@file" # - "traefik.http.routers.traefik-rtr.middlewares=test" - "traefik.http.middlewares.traefik-auth.basicauth.users=tdehaeze:$$apr1$$d.JmbY5J$$K8btOi1fwwVYOkCnicCVi." - "traefik.http.middlewares.public-auth.basicauth.users=tdehaeze:$$apr1$$d.JmbY5J$$K8btOi1fwwVYOkCnicCVi.,dehaeze:$$apr1$$ICU0hKjc$$D7buBzZDvokvMP1O6ptc5/" # Authelia # - 'traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://login.$DOMAINNAME/' # - 'traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true' # - 'traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User, Remote-Groups' logging: *default-logging
usersfile
tdehaeze:$$apr1$$d.JmbY5J$$K8btOi1fwwVYOkCnicCVi.
traefik.yaml
global: checkNewVersion: true sendAnonymousUsage: false entryPoints: traefik: address: :8080 http: address: :80 https: address: :443 forwardedHeaders: trustedIPs: 173.245.48.0/20,103.21.244.0/22,103.22.200.0/22,103.31.4.0/22,141.101.64.0/18,108.162.192.0/18,190.93.240.0/20,188.114.96.0/20,197.234.240.0/22,198.41.128.0/17,162.158.0.0/15,104.16.0.0/12,172.64.0.0/13,131.0.72.0/22 synapse: address: :8448 api: dashboard: true log: level: ERROR accessLog: filePath: /var/log/access.log filters: statusCodes: 400-499 providers: docker: endpoint: unix:///var/run/docker.sock defaultrule: Host(`{{ index .Labels "com.docker.compose.service" }}.$DOMAINNAME`) exposedByDefault: false network: t2_proxy swarmMode: false file: directory: /rules watch: true certificatesResolvers: dns-cloudflare: acme: email: $CLOUDFLARE_EMAIL storage: /acme.json dnsChallenge: provider: cloudflare resolvers: 1.1.1.1:53,1.0.0.1:53
homer
- Home page
homer: container_name: homer image: b4bz/homer restart: unless-stopped networks: - t2_proxy environment: - UID=$PUID - GID=$PGID - TZ=$TZ volumes: - $CONFIGDIR/homer/assets/:/www/assets labels: - "traefik.enable=true" - "traefik.http.routers.homer-rtr.entrypoints=https" - "traefik.http.routers.homer-rtr.rule=Host(`homer.$DOMAINNAME`)" - "traefik.http.routers.homer-rtr.tls=true" - "traefik.http.routers.homer-rtr.service=homer-svc" - "traefik.http.services.homer-svc.loadbalancer.server.port=8080" logging: *default-logging
config.yml
--- title: "Homepage" subtitle: "" logo: "assets/homer.png" header: false footer: false columns: "auto" connectivityCheck: false theme: default links: [] services: - name: "Websites" icon: "fas fa-desktop" items: - name: "Wiki" logo: "/assets/tools/brain.png" subtitle: "Digital Brain" url: "https://brain.tdehaeze.xyz" - name: "Research" logo: "/assets/tools/orgmode.png" subtitle: "Research Pages" url: "https://research.tdehaeze.xyz" - name: "Dotfiles" logo: "/assets/tools/dotfiles.png" subtitle: "My Literate Dotfiles" url: "https://dotfiles.tdehaeze.xyz" - name: "Miam" logo: "/assets/tools/miam.png" subtitle: "Personnal Recipes" url: "https://miam.tdehaeze.xyz" - name: "Utilities" icon: "fas fa-rss" items: - name: "Miniflux" logo: "/assets/tools/miniflux.png" subtitle: "RSS Feeds" url: "https://rss.tdehaeze.xyz" # - name: "Bitwarden" # logo: "/assets/tools/bitwarden.png" # subtitle: "Password Manager" # url: "https://bw.tdehaeze.xyz" - name: "Home Assistant" logo: "/assets/tools/homeassistant.png" subtitle: "Home Assistant" url: "http://home.tdehaeze.xyz:8123" # - name: "Guacamole" # logo: "/assets/tools/guacamole.png" # subtitle: "SSH Access" # url: "https://guacamole.tdehaeze.xyz/" - name: "Cloud" icon: "fas fa-cloud" items: - name: "Cloud" logo: "/assets/tools/cloud.png" subtitle: "Simple Personnal Could" url: "https://cloud.tdehaeze.xyz" - name: "Syncthing" logo: "/assets/tools/syncthing.png" subtitle: "P2P Sync" url: "https://syncthing.tdehaeze.xyz" - name: "Gitea" logo: "/assets/tools/gitea.png" subtitle: "Git Server" url: "https://git.tdehaeze.xyz" - name: "Download" icon: "fas fa-download" items: - name: "Transmission" logo: "/assets/tools/transmission.png" subtitle: "Torrents" url: "https://torrent.tdehaeze.xyz/transmission/web/" # - name: "transfer" # logo: "/assets/tools/transfer.png" # subtitle: "Transfer.sh" # url: "https://file.tdehaeze.xyz" - name: "deemix" subtitle: "Download Music" logo: "/assets/tools/deezer.png" url: "https://deemix.tdehaeze.xyz" - name: "qobuz" subtitle: "Qobuz-DL" logo: "/assets/tools/qobuz.png" url: "https://qobuz.tdehaeze.xyz" - name: "Aria2" logo: "/assets/tools/aria2.png" subtitle: "Direct Downloads" url: "http://dl.tdehaeze.xyz" - name: "Media" icon: "fas fa-film" items: - name: "Jellyfin" logo: "/assets/tools/jellyfin.png" subtitle: "Media Library" url: "https://jellyfin.tdehaeze.xyz" - name: "Config" icon: "fas fa-cog" items: - name: "Portainer" logo: "/assets/tools/portainer.png" subtitle: "Manger Docker" url: "https://portainer.tdehaeze.xyz" - name: "Traefik" logo: "/assets/tools/traefik.png" subtitle: "Reverse Proxy" url: "https://traefik.tdehaeze.xyz" - name: "Local" icon: "fas fa-home" items: # - name: "Jackett" # logo: "/assets/tools/jackett.png" # subtitle: "Download API" # url: "http://192.168.1.150:9117/" # - name: "Radarr" # logo: "/assets/tools/radarr.png" # subtitle: "Movie Manager" # url: "http://192.168.1.150:7878/" # - name: "Sonarr" # logo: "/assets/tools/sonarr.png" # subtitle: "TV Shows Manager" # url: "http://192.168.1.150:8989/" # - name: "Ombi" # logo: "/assets/tools/ombi.png" # subtitle: "Request Content" # url: "https://ombi.tdehaeze.xyz/" # - name: "Bazarr" # logo: "/assets/tools/bazarr.png" # subtitle: "Subtitles Manager" # url: "http://192.168.1.150:6767/" - name: "Scrutiny" logo: "/assets/tools/scrutiny.png" subtitle: "S.M.A.R.T" url: "http://192.168.1.150:8089/web/dashboard" - name: "OctoPrint" logo: "/assets/tools/octoprint.png" subtitle: "3D-Printing" url: "https://octoprint.tdehaeze.xyz/"
snapraid
- Manage local backup with parity disk
snapraid: container_name: snapraid image: xagaba/snapraid restart: unless-stopped privileged: true volumes: - /mnt:/mnt - $CONFIGDIR/snapraid:/config - type: "bind" source: /dev/disk target: /dev/disk environment: - PUID=$PUID - PGID=$PGID - TZ=$TZ logging: *default-logging
snapraid.conf
# Defines the file to use as parity storage # It must NOT be in a data disk # Format: "parity FILE_PATH" parity /mnt/parity/snapraid.parity # Defines the files to use as content list # You can use multiple specification to store more copies # You must have least one copy for each parity file plus one. Some more don't # hurt # They can be in the disks used for data, parity or boot, # but each file must be in a different disk # Format: "content FILE_PATH" content /var/snapraid.content content /mnt/disk0/.snapraid.content content /mnt/disk1/.snapraid.content # Defines the data disks to use # The order is relevant for parity, do not change it # Format: "disk DISK_NAME DISK_MOUNT_POINT" disk d0 /mnt/disk0 disk d1 /mnt/disk1 # Excludes hidden files and directories (uncomment to enable). #nohidden # Defines files and directories to exclude # Remember that all the paths are relative at the mount points # Format: "exclude FILE" # Format: "exclude DIR/" # Format: "exclude /PATH/FILE" # Format: "exclude /PATH/DIR/" exclude *.unrecoverable exclude /tmp/ exclude /lost+found/ exclude *.!sync exclude .AppleDouble exclude ._AppleDouble exclude .DS_Store exclude ._.DS_Store exclude .Thumbs.db exclude .fseventsd exclude .Spotlight-V100 exclude .TemporaryItems exclude .Trashes exclude .AppleDB
snapraid-runner.conf
[snapraid] ; path to the snapraid executable (e.g. /bin/snapraid) executable = /usr/bin/snapraid ; path to the snapraid config to be used config = /config/snapraid.conf ; abort operation if there are more deletes than this, set to -1 to disable deletethreshold = -1 ; if you want touch to be ran each time touch = false [logging] ; logfile to write to, leave empty to disable file = /config/snapraid.log ; maximum logfile size in KiB, leave empty for infinite maxsize = 5000 ; [email] ; ; when to send an email, comma-separated list of [success, error] ; sendon = success,error ; ; set to false to get full programm output via email ; short = true ; subject = [SnapRAID] Status Report: ; from = ; to = ; ; maximum email size in KiB ; maxsize = 500 ; ; [smtp] ; host = ; ; leave empty for default port ; port = ; ; set to "true" to activate ; ssl = false ; tls = false ; user = ; password = [scrub] ; set to true to run scrub after sync enabled = false percentage = 12 older-than = 10
portainer
- Manage docker
```
portainer:
  container_name: portainer
  image: portainer/portainer
  restart: unless-stopped
  command: -H unix:///var/run/docker.sock --no-auth
  networks:
    - t2_proxy
  security_opt:
    - no-new-privileges:true
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - $CONFIGDIR/portainer:/data
  environment:
    - TZ=$TZ
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.portainer-rtr.entrypoints=https"
    - "traefik.http.routers.portainer-rtr.rule=Host(`portainer.$DOMAINNAME`)"
    - "traefik.http.routers.portainer-rtr.tls=true"
    - "traefik.http.routers.portainer-rtr.service=portainer-svc"
    - "traefik.http.routers.portainer-rtr.middlewares=traefik-auth"
    - "traefik.http.services.portainer-svc.loadbalancer.server.port=9000"
  logging: *default-logging
```
wireguard
- VPN
wireguard: container_name: wireguard image: linuxserver/wireguard restart: unless-stopped networks: - t2_proxy cap_add: - NET_ADMIN - SYS_MODULE environment: - PUID=$PUID - PGID=$PGID - TZ=$TZ - SERVERURL=wireguard.tdehaeze.xyz - SERVERPORT=51820 - PEERS=3 - PEERDNS=auto volumes: - $CONFIGDIR/wireguard:/config - /lib/modules:/lib/modules ports: - 51820:51820/udp logging: *default-logging
gitea
- Git server
gitea: container_name: git image: gitea/gitea:1.13.2 depends_on: - gitea_db restart: unless-stopped networks: - t2_proxy - backend volumes: - $CONFIGDIR/gitea:/data environment: - PUID=$PUID - PGID=$PGID - TZ=$TZ - SSH_PORT=$GITEA_SSH_PORT ports: - "2222:22" labels: - "traefik.enable=true" - "traefik.http.routers.git-rtr.entrypoints=https" - "traefik.http.routers.git-rtr.rule=Host(`git.$DOMAINNAME`)" - "traefik.http.routers.git-rtr.tls=true" - "traefik.http.routers.git-rtr.service=git-svc" - "traefik.http.services.git-svc.loadbalancer.server.port=3000" logging: *default-logging
gitea_db: container_name: gitea_db image: mariadb:10 restart: unless-stopped networks: - backend ports: - 3306:3306 environment: - MYSQL_ROOT_PASSWORD=$GITEA_DB_MYSQL_ROOT_PASSWORD - MYSQL_DATABASE=gitea - MYSQL_USER=gitea - MYSQL_PASSWORD=$GITEA_DB_MYSQL_PASSWORD volumes: - $CONFIGDIR/mariadb:/var/lib/mysql
caddy
- Research Pages
caddy: container_name: caddy image: abiosoft/caddy:1.0.3-no-stats restart: unless-stopped networks: - t2_proxy environment: - PUID=$PUID - PGID=$PGID - TZ=$TZ - PLUGINS=git volumes: - $CONFIGDIR/caddy/Caddyfile:/etc/Caddyfile - $CONFIGDIR/web:/srv # - ~/.ssh:/root/.ssh labels: - "traefik.enable=true" - "traefik.http.routers.caddy-rtr.entrypoints=https" - "traefik.http.routers.caddy-rtr.rule=Host(`research.$DOMAINNAME`)" - "traefik.http.routers.caddy-rtr.tls=true" - "traefik.http.routers.caddy-rtr.service=caddy-svc" - "traefik.http.services.caddy-svc.loadbalancer.server.port=2015" logging: *default-logging
Caddyfile
```
0.0.0.0:2015 {
    root /srv/www/
    git {
        repo https://git.tdehaeze.xyz/tdehaeze/research-home-page
        path /srv/www/
        interval -1
        hook /research-home-page/webhook QHZgAKjD8q2v54Ru
        then git submodule update --init --recursive --merge
    }
}
```
caddy
- Dotfiles
dotfiles: container_name: dotfiles image: abiosoft/caddy:1.0.3-no-stats restart: unless-stopped networks: - t2_proxy environment: - PUID=$PUID - PGID=$PGID - TZ=$TZ - PLUGINS=git volumes: - $CONFIGDIR/dotfiles/Caddyfile:/etc/Caddyfile - $CONFIGDIR/dotfiles/www:/srv/www labels: - "traefik.enable=true" - "traefik.http.routers.dotfiles-rtr.entrypoints=https" - "traefik.http.routers.dotfiles-rtr.rule=Host(`dotfiles.$DOMAINNAME`)" - "traefik.http.routers.dotfiles-rtr.tls=true" - "traefik.http.routers.dotfiles-rtr.service=dotfiles-svc" - "traefik.http.services.dotfiles-svc.loadbalancer.server.port=2015" logging: *default-logging
Caddyfile
```
0.0.0.0:2015 {
    root /srv/www/docs/
    git {
        repo https://git.tdehaeze.xyz/tdehaeze/literate-dotfiles
        path /srv/www/
        interval -1
        hook /literate-dotfiles/webhook QHZgAKjD8q2v54Ru
    }
}
```
nginx
- Root
root: container_name: root image: nginx restart: unless-stopped networks: - t2_proxy environment: - PUID=$PUID - PGID=$PGID - TZ=$TZ volumes: - $CONFIGDIR/root/nginx.conf:/etc/nginx/nginx.conf labels: - "traefik.enable=true" - "traefik.http.routers.root-rtr.entrypoints=https" - "traefik.http.routers.root-rtr.rule=Host(`$DOMAINNAME`)" - "traefik.http.routers.root-rtr.tls=true" - "traefik.http.routers.root-rtr.service=root-svc" - "traefik.http.services.root-svc.loadbalancer.server.port=8080" logging: *default-logging
nginx.conf
```
events { }

http {
    server {
        server_name tdehaeze.xyz;
        listen 8080;

        location /.well-known/matrix/client {
            proxy_pass https://matrix.tdehaeze.xyz/.well-known/matrix/client;
            proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /.well-known/matrix/server {
            proxy_pass https://matrix.tdehaeze.xyz/.well-known/matrix/server;
            proxy_set_header X-Forwarded-For $remote_addr;
        }
    }
}
```
hugo
- Wiki + Blog
hugo: container_name: hugo image: tdehaeze/hugo-caddy restart: unless-stopped networks: - t2_proxy environment: - REPO=git.tdehaeze.xyz/tdehaeze/digital-brain labels: - "traefik.enable=true" - "traefik.http.routers.hugo-rtr.entrypoints=https" - "traefik.http.routers.hugo-rtr.rule=Host(`brain.$DOMAINNAME`)" - "traefik.http.routers.hugo-rtr.tls=true" - "traefik.http.routers.hugo-rtr.service=hugo-svc" - "traefik.http.services.hugo-svc.loadbalancer.server.port=2015" logging: *default-logging
syncthing
- File Synchronization
syncthing: container_name: syncthing image: linuxserver/syncthing restart: unless-stopped networks: - t2_proxy environment: - PUID=$PUID - PGID=$PGID - TZ=$TZ - UMASK_SET=022 volumes: - $CONFIGDIR/syncthing:/config - /srv/storage/Cloud:/Cloud - /srv/storage/Cloud/pictures/phone:/Pictures - /srv/storage/Cloud/pdfs:/Onyx/Download - /srv/storage/Cloud/pdfs-notes:/Onyx/note - /srv/storage/Cloud/.stfolder:/Onyx/.stfolder - /srv/storage/.password-store:/.password-store ports: - 22000:22000 - 21027:21027/udp labels: - "traefik.enable=true" - "traefik.http.routers.syncthing-rtr.entrypoints=https" - "traefik.http.routers.syncthing-rtr.rule=Host(`syncthing.$DOMAINNAME`)" - "traefik.http.routers.syncthing-rtr.tls=true" - "traefik.http.routers.syncthing-rtr.service=syncthing-svc" - "traefik.http.routers.syncthing-rtr.middlewares=traefik-auth" - "traefik.http.services.syncthing-svc.loadbalancer.server.port=8384" logging: *default-logging
miniflux
- RSS reader
miniflux: container_name: miniflux image: miniflux/miniflux restart: unless-stopped networks: - t2_proxy - backend depends_on: - miniflux_db environment: - DATABASE_URL=postgres://miniflux:SCJWWXqHwehP7f8g@miniflux_db/miniflux?sslmode=disable - RUN_MIGRATIONS=1 - CREATE_ADMIN=1 - ADMIN_USERNAME=$MINIFLUX_ADMIN_NAME - ADMIN_PASSWORD=$MINIFLUX_ADMIN_PASS labels: - "traefik.enable=true" - "traefik.http.routers.miniflux-rtr.entrypoints=https" - "traefik.http.routers.miniflux-rtr.rule=Host(`rss.$DOMAINNAME`)" - "traefik.http.routers.miniflux-rtr.tls=true" - "traefik.http.routers.miniflux-rtr.service=miniflux-svc" - "traefik.http.services.miniflux-svc.loadbalancer.server.port=8080" logging: *default-logging
miniflux_db: container_name: miniflux_db image: postgres:12 restart: unless-stopped networks: - backend environment: - POSTGRES_USER=miniflux - POSTGRES_PASSWORD=$MINIFLUX_POSTGRES_PASSWORD volumes: - $CONFIGDIR/miniflux_db:/var/lib/postgresql/data logging: *default-logging
homeassistant
- Home Automation
homeassistant: container_name: homeassistant image: homeassistant/home-assistant restart: unless-stopped #networks: # - t2_proxy #ports: # - target: 8123 # published: 8123 # protocol: tcp # mode: host privileged: true network_mode: host volumes: - $CONFIGDIR/homeassistant:/config - /etc/localtime:/etc/localtime:ro - /dev/bus/usb:/dev/bus/usb # - ${USERDIR}/docker/shared:/shared environment: - PUID=$PUID - PGID=$PGID - TZ=$TZ labels: - "traefik.enable=true" - "traefik.http.routers.homeassistant-rtr.entrypoints=https,http" - "traefik.http.routers.homeassistant-rtr.rule=Host(`home.$DOMAINNAME`)" - "traefik.http.routers.homeassistant-rtr.tls=true" - "traefik.http.routers.homeassistant-rtr.service=homeassistant-svc" - "traefik.http.services.homeassistant-svc.loadbalancer.servers.url=http://172.17.0.1:8123" #- "traefik.http.services.homeassistant-svc.loadbalancer.server.port=8123" logging: *default-logging
jellyfin
- Media server
jellyfin: container_name: jellyfin image: linuxserver/jellyfin restart: unless-stopped networks: - t2_proxy volumes: - $CONFIGDIR/jellyfin:/config - /srv/storage/TVShows:/data/tvshows - /srv/storage/LiveMusic:/data/livemusic - /srv/storage/Animes:/data/animes - /srv/storage/Movies:/data/movies - /srv/storage/Music:/data/music environment: - PUID=$PUID - PGID=$PGID - TZ=$TZ labels: - "traefik.enable=true" - "traefik.http.routers.jellyfin-rtr.entrypoints=https" - "traefik.http.routers.jellyfin-rtr.rule=Host(`jellyfin.$DOMAINNAME`)" - "traefik.http.routers.jellyfin-rtr.tls=true" - "traefik.http.routers.jellyfin-rtr.service=jellyfin-svc" - "traefik.http.services.jellyfin-svc.loadbalancer.server.port=8096" logging: *default-logging
filebrowser
- Web file browser
filebrowser: container_name: filebrowser image: filebrowser/filebrowser restart: unless-stopped networks: - t2_proxy volumes: - $CONFIGDIR/filebrowser/database.db:/database.db - $CONFIGDIR/filebrowser/.filebrowser.json:/.filebrowser.json - /srv/storage:/srv/storage user: "${PUID}:${PGID}" environment: - PUID=$PUID - PGID=$PGID - TZ=$TZ labels: - "traefik.enable=true" - "traefik.http.routers.filebrowser-rtr.entrypoints=https" - "traefik.http.routers.filebrowser-rtr.rule=Host(`cloud.$DOMAINNAME`)" - "traefik.http.routers.filebrowser-rtr.tls=true" - "traefik.http.routers.filebrowser-rtr.service=filebrowser-svc" - "traefik.http.services.filebrowser-svc.loadbalancer.server.port=80" logging: *default-logging
.filebrowser.json
```
{
  "port": 80,
  "baseURL": "",
  "address": "",
  "log": "stdout",
  "database": "/database.db",
  "root": "/srv/storage"
}
```
scrutiny
- Hard drive monitoring
scrutiny: container_name: scrutiny image: linuxserver/scrutiny restart: unless-stopped networks: - backend cap_add: - SYS_RAWIO - SYS_ADMIN environment: - PUID=$PUID - PGID=$PGID - SCRUTINY_API_ENDPOINT=http://localhost:8080 - TZ=$TZ - SCRUTINY_WEB=true - SCRUTINY_COLLECTOR=true volumes: - $CONFIGDIR/scrutiny:/config - /run/udev:/run/udev:ro devices: - /dev/sda:/dev/sda - /dev/sdb:/dev/sdb - /dev/sdc:/dev/sdc - /dev/sdd:/dev/sdd - /dev/nvme0n1:/dev/nvme0n1 ports: - 8089:8080 logging: *default-logging
transmission
- Torrent server
transmission-openvpn: container_name: transmission image: haugene/transmission-openvpn restart: unless-stopped networks: - t2_proxy - backend environment: - PUID=$PUID - PGID=$PGID - CREATE_TUN_DEVICE=true - ENABLE_UFW=true - WEBPROXY_ENABLED=false - TRANSMISSION_WEB_UI=combustion - OPENVPN_PROVIDER=NORDVPN - OPENVPN_USERNAME=$NORDVPN_NAME - OPENVPN_PASSWORD=$NORDVPN_PASS - NORDVPN_COUNTRY=FR - NORDVPN_CATEGORY=P2P - NORDVPN_PROTOCOL=tcp - LOCAL_NETWORK=192.168.0.0/16 volumes: - /srv/storage/Downloads:/data - /etc/localtime:/etc/localtime:ro cap_add: - NET_ADMIN ports: - 9091:9091 - 51413:51413 - 51413:51413/udp labels: - "traefik.enable=true" - "traefik.http.routers.transmission-rtr.entrypoints=https" - "traefik.http.routers.transmission-rtr.rule=Host(`torrent.$DOMAINNAME`)" - "traefik.http.routers.transmission-rtr.tls=true" - "traefik.http.routers.transmission-rtr.service=transmission-svc" - "traefik.http.routers.transmission-rtr.middlewares=traefik-auth" - "traefik.http.services.transmission-svc.loadbalancer.server.port=9091" logging: *default-logging
aria2
- Download daemon
aria2: container_name: aria2 image: opengg/aria2 restart: unless-stopped networks: - t2_proxy environment: - PUID=$PUID - PGID=$PGID user: "${PUID}:${PGID}" volumes: - $CONFIGDIR/aria2:/config - /srv/storage/Downloads:/downloads ports: - 6800:6800 logging: *default-logging
aria2.conf
```
save-session=/config/aria2.session
input-file=/config/aria2.session
save-session-interval=60
dir=/downloads
file-allocation=prealloc
disk-cache=128M
enable-rpc=true
rpc-listen-port=6800
rpc-allow-origin-all=true
rpc-listen-all=true
rpc-secret=<<get-password(passname="nas/aria2")>>
auto-file-renaming=false
max-connection-per-server=16
min-split-size=1M
split=16
```
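Downloads can also be queued without the web UI by calling the aria2 JSON-RPC endpoint directly; a minimal sketch (the host/port follow the compose file above, the secret placeholder is to be replaced with the `rpc-secret` value):
```
curl http://localhost:6800/jsonrpc \
  -d '{"jsonrpc":"2.0","id":"1","method":"aria2.addUri","params":["token:<rpc-secret>",["https://example.com/file.iso"]]}'
```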
aria2-ui
- Download web UI
aria2-ui: container_name: aria2-ui image: p3terx/ariang restart: unless-stopped networks: - t2_proxy environment: - PUID=$PUID - PGID=$PGID ports: - 6880:6880 labels: - "traefik.enable=true" - "traefik.http.routers.aria2-rtr.entrypoints=http" - "traefik.http.routers.aria2-rtr.rule=Host(`dl.$DOMAINNAME`)" - "traefik.http.routers.aria2-rtr.tls=false" - "traefik.http.routers.aria2-rtr.service=aria2-svc" - "traefik.http.services.aria2-svc.loadbalancer.server.port=6880" logging: *default-logging
linkding
- Bookmark manager
linkding: container_name: linkding image: sissbruecker/linkding:latest restart: unless-stopped networks: - t2_proxy volumes: - $CONFIGDIR/linkding:/etc/linkding/data environment: - TZ=$TZ - PUID=$PUID - PGID=$PGID labels: - "traefik.enable=true" - "traefik.http.routers.linkding-rtr.entrypoints=https" - "traefik.http.routers.linkding-rtr.rule=Host(`bm.$DOMAINNAME`)" - "traefik.http.routers.linkding-rtr.tls=true" - "traefik.http.routers.linkding-rtr.service=linkding-svc" - "traefik.http.routers.linkding-rtr.middlewares=traefik-auth" - "traefik.http.services.linkding-svc.loadbalancer.server.port=9090" logging: *default-logging
radicale
- CalDAV/CardDAV server (link)
radicale: container_name: radicale image: tomsquest/docker-radicale:latest restart: unless-stopped networks: - t2_proxy volumes: - $CONFIGDIR/radicale/config:/config:ro - $CONFIGDIR/radicale/data:/data environment: - TZ=$TZ - UID=$PUID - GID=$PGID labels: - "traefik.enable=true" - "traefik.http.routers.radicale-rtr.entrypoints=https" - "traefik.http.routers.radicale-rtr.rule=Host(`radicale.$DOMAINNAME`)" - "traefik.http.routers.radicale-rtr.tls=true" - "traefik.http.routers.radicale-rtr.service=radicale-svc" - "traefik.http.services.radicale-svc.loadbalancer.server.port=5232" logging: *default-logging
config
[server] hosts = 0.0.0.0:5232 [auth] # Authentication method # Value: none | htpasswd | remote_user | http_x_remote_user #type = none # Htpasswd filename #htpasswd_filename = /etc/radicale/users # Htpasswd encryption method # Value: plain | bcrypt | md5 # bcrypt requires the installation of radicale[bcrypt]. #htpasswd_encryption = md5 # Incorrect authentication delay (seconds) #delay = 1 # Message displayed in the client when a password is needed #realm = Radicale - Password Required [rights] # Rights backend # Value: none | authenticated | owner_only | owner_write | from_file #type = owner_only # File for rights management from_file #file = /etc/radicale/rights [storage] filesystem_folder = /data/collections # Delete sync token that are older (seconds) #max_sync_token_age = 2592000 # Command that is run after changes to storage # Example: ([ -d .git ] || git init) && git add -A && (git diff --cached --quiet || git commit -m "Changes by "%(user)s) #hook = ([ -d .git ] || git init) && git add -A && (git diff --cached --quiet || git commit -m "Changes by "%(user)s) [web] # Web interface backend # Value: none | internal #type = internal [logging] # Threshold for the logger # Value: debug | info | warning | error | critical #level = warning # Don't include passwords in logs #mask_passwords = True [headers] # Additional HTTP headers #Access-Control-Allow-Origin = *
restic
- Automatic online backups
restic: container_name: restic image: mazzolino/restic restart: unless-stopped networks: - t2_proxy environment: - BACKUP_CRON=0 30 0 * * * - RESTIC_REPOSITORY=b2:tdehaeze:/restic - RESTIC_PASSWORD=$RESTIC_PASSWORD - RESTIC_BACKUP_SOURCES=/source - RESTIC_FORGET_ARGS=--keep-daily 7 --keep-weekly 4 --keep-monthly 12 - RESTIC_BACKUP_ARGS=--exclude-file /exclude.txt - B2_ACCOUNT_ID=$RESTIC_B2_ACCOUNT_ID - B2_ACCOUNT_KEY=$RESTIC_B2_ACCOUNT_KEY - UID=$PUID - GID=$PGID - TZ=$TZ volumes: - $CONFIGDIR/restic/exclude.txt:/exclude.txt:ro - /srv/storage/Cloud/thesis:/source/Cloud/thesis:ro - /home/thomas/docker:/source/docker:ro logging: *default-logging
exclude.txt
- Exclude files
```
*.db
*.log
*.log.*
/source/docker/config/gitea/git/
/source/docker/config/guacamole/
/source/docker/config/guacamole_db/
/source/docker/config/mariadb/
/source/docker/config/miniflux_db/
/source/docker/config/jellyfin/data/
/source/docker/config/dotfiles/www/
/source/docker/config/web/www/
```
octoprint
- Web interface for 3D printing
octoprint: container_name: octoprint image: octoprint/octoprint restart: unless-stopped networks: - t2_proxy environment: - UID=$PUID - GID=$PGID - TZ=$TZ privileged: true volumes: - $CONFIGDIR/octoprint:/octoprint - /dev/bus/usb:/dev/bus/usb labels: - "traefik.enable=true" - "traefik.http.routers.octoprint-rtr.entrypoints=https" - "traefik.http.routers.octoprint-rtr.rule=Host(`octoprint.$DOMAINNAME`)" - "traefik.http.routers.octoprint-rtr.tls=true" - "traefik.http.routers.octoprint-rtr.service=octoprint-svc" - "traefik.http.routers.octoprint-rtr.middlewares=traefik-auth" - "traefik.http.services.octoprint-svc.loadbalancer.server.port=80" logging: *default-logging
adguardhome
- Network-wide ad blocking
Note: additional ports would need to be exposed to use the built-in DHCP server.
adguardhome: container_name: adguardhome image: adguard/adguardhome restart: unless-stopped networks: - t2_proxy environment: - UID=$PUID - GID=$PGID - TZ=$TZ volumes: - $CONFIGDIR/adguardhome/work:/opt/adguardhome/work - $CONFIGDIR/adguardhome/conf:/opt/adguardhome/conf ports: - 53:53 - 853:853 labels: - "traefik.enable=true" - "traefik.http.routers.adguardhome-rtr.entrypoints=https" - "traefik.http.routers.adguardhome-rtr.rule=Host(`adguardhome.$DOMAINNAME`)" - "traefik.http.routers.adguardhome-rtr.tls=true" - "traefik.http.routers.adguardhome-rtr.service=adguardhome-svc" - "traefik.http.routers.adguardhome-rtr.middlewares=traefik-auth" - "traefik.http.services.adguardhome-svc.loadbalancer.server.port=3000" logging: *default-logging
mealie
- Recipe Manager
miam: container_name: miam image: hkotel/mealie restart: unless-stopped networks: - t2_proxy environment: - db_type=sqlite - UID=$PUID - GID=$PGID - TZ=$TZ volumes: - $CONFIGDIR/mealie:/app/data labels: - "traefik.enable=true" - "traefik.http.routers.miam-rtr.entrypoints=https" - "traefik.http.routers.miam-rtr.rule=Host(`miam.$DOMAINNAME`)" - "traefik.http.routers.miam-rtr.tls=true" - "traefik.http.routers.miam-rtr.service=miam-svc" - "traefik.http.services.miam-svc.loadbalancer.server.port=80" logging: *default-logging
diun
- Notification for Docker image updates
diun: container_name: diun image: crazymax/diun restart: unless-stopped networks: - backend environment: - TZ=$TZ - LOG_LEVEL=info - LOG_JSON=false - DIUN_WATCH_WORKERS=20 - DIUN_WATCH_SCHEDULE=0 7 * * 6 - DIUN_PROVIDERS_DOCKER=true - DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT=true - DIUN_NOTIF_MAIL_HOST=smtp.gmail.com - DIUN_NOTIF_MAIL_PORT=587 - DIUN_NOTIF_MAIL_SSL=true - DIUN_NOTIF_MAIL_USERNAME=tdehaeze.xyz@gmail.com - DIUN_NOTIF_MAIL_PASSWORD=$GMAIL_PASS - DIUN_NOTIF_MAIL_FROM=tdehaeze.xyz@gmail.com - DIUN_NOTIF_MAIL_TO=dehaeze.thomas@gmail.com volumes: - /var/run/docker.sock:/var/run/docker.sock:ro - $CONFIGDIR/diun:/data
.env
- Variables used for Docker Compose
```
PUID=1000
PGID=1000
TZ=Europe/Paris
CONFIGDIR=/home/thomas/docker/config
DOMAINNAME=tdehaeze.xyz

CLOUDFLARE_EMAIL=dehaeze.thomas@gmail.com
CLOUDFLARE_API_KEY=<<get-password(passname="nas/cloudflare_api_key")>>

MINIFLUX_ADMIN_NAME=tdehaeze
MINIFLUX_ADMIN_PASS=<<get-password(passname="nas/miniflux_admin_pass")>>
MINIFLUX_POSTGRES_PASSWORD=<<get-password(passname="nas/miniflux_postgres_pass")>>

RESTIC_PASSWORD=<<get-password(passname="nas/restic_pass")>>
RESTIC_B2_ACCOUNT_ID=<<get-password(passname="nas/restic_B2_id")>>
RESTIC_B2_ACCOUNT_KEY=<<get-password(passname="nas/restic_B2_key")>>

GITEA_DB_MYSQL_ROOT_PASSWORD=<<get-password(passname="nas/gitea_mysql_root_pass")>>
GITEA_DB_MYSQL_PASSWORD=<<get-password(passname="nas/gitea_mysql_pass")>>
GITEA_SSH_PORT=2222

NORDVPN_NAME=dehaeze.thomas@gmail.com
NORDVPN_PASS=<<get-password(passname="nordvpn.com/dehaeze.thomas@gmail.com")>>

QOBUZNAME=jeanmarie.dehaeze@wanadoo.fr
QOBUZPASS=<<get-password(passname="qobuz.com/jeanmarie.dehaeze@wanadoo.fr")>>
JELLYFINTOKEN=<<get-password(passname="nas/jellyfin_token")>>

GOTIFY_DEFAULTUSER_NAME=tdehaeze
GOTIFY_DEFAULTUSER_PASS=<<get-password(passname="nas/gotify_pass")>>

GUACAMOLE_POSTGRES_PASSWORD=<<get-password(passname="nas/guacamole_postgres_pass")>>

DEEMIX_ARL=<<get-password(passname="nas/deemix_arl")>>

GMAIL_PASS=<<get-password(passname="google.com/tdehaeze.xyz")>>
```
Cron Jobs
Caddy Update
Create a script `~/cron/caddy_update.sh` with:
docker exec caddy /bin/sh -c "cd /srv/www && echo -e \"Update repo $(date)\" && git submodule update --recursive --remote --merge"
Type `crontab -e` and add this line:
*/5 * * * * /home/thomas/cron/caddy_update.sh >> /home/thomas/cron/caddy_update.log 2>&1