Self-hosting with Podman

I've been a self-hoster for a while. The adventure started with a regular mini-PC manufactured by HP: 32 GB of RAM, a 10th-gen Intel CPU, and a 1 TB HDD. However, while it was a great experience at the beginning, with time it became a challenge. My stack was built with Portainer and a bunch of docker-compose files. That leads to a specific issue: with Portainer, you don't own the Compose files; they live inside the tool, not on the filesystem or in a Git repo.

Update Feb 2025: Portainer now supports GitOps.

Additionally, at some point the people behind the product decided to change the licensing model, allowing the use of the community edition for up to 5 nodes. It wasn't my case, but it pushed me towards something more independent. So I started using Dockge, then added another service for Docker logs, another for version monitoring, and kept adding applications that are fun to use, for example Homebox or BookStack. It was fun until I realized the cost of energy and the maintenance effort needed to keep it all running at my home. Every internet or power issue took my setup down. Maybe it didn't happen very often, but when I wasn't home and the hardware was down, there was no chance to fix it remotely. And I had started relying on those services. That is why I decided to migrate to Hetzner and Podman at the same time, and use remote NFS. However, let's start from the beginning.

Why Hetzner

As I'm an AWS Community Builder, every renewal cycle I receive 500$ in credits for AWS services, and an ARM EC2 instance (yes, I decided to switch CPU architecture too) with 4 cores and 8 GB of RAM costs more than 500$ per year. We're talking about a server running 24/7, plus a VPC, EBS, and so on. The estimated cost was 111.744$/month, for EC2 alone!

24 * 30 * 0.1552 = 111.744$

Hetzner is much more affordable. A standard AmpereOne VM with backups, a public IP, and 80 GB of SSD was about 7$ per month, and then I added a 1 TB Storage Box for an additional 3$. Based on that complex math, my setup is roughly 10x cheaper than what AWS can provide.

Note: we're talking about a constantly running server, with no need for scalability or high availability.

Using Hetzner is pretty simple. If you know how to use Terraform and AWS, the German provider is even less complex. For example, spinning up one VM with a public IP address and backups enabled is just:

terraform {
  required_providers {
    hcloud = {
      source  = "hetznercloud/hcloud"
      version = "~> 1.49.1"
    }
  }
  required_version = "~> 1.9.8"
}


provider "hcloud" {
  token = var.hcloud_token
}

variable "hcloud_token" {
  sensitive = true
  type      = string
}


variable "local_ip" {
  type    = string
  default = "79.184.235.150"
  #default = "31.61.169.52"
}

resource "hcloud_primary_ip" "box_ip" {
  name          = "box-ip"
  datacenter    = "fsn1-dc14"
  type          = "ipv4"
  assignee_type = "server"
  auto_delete   = true
  labels = {
    "arch" : "arm64",
    "managed_by" : "terraform",
    "env" : "prod",
    "location" : "de"
  }
}


resource "hcloud_firewall" "box_fw" {
  name = "box-firewall"

  rule {
    direction  = "in"
    protocol   = "tcp"
    port       = "22"
    source_ips = [var.local_ip]
  }
  labels = {
    "arch" : "arm64",
    "managed_by" : "terraform",
    "env" : "prod",
    "location" : "de"
  }
}

resource "hcloud_server" "box" {
  name        = "box"
  image       = "centos-stream-9"
  server_type = "cax21"
  datacenter  = "fsn1-dc14"
  ssh_keys    = ["mbp@home"]
  backups     = true
  labels = {
    "arch" : "arm64",
    "managed_by" : "terraform",
    "env" : "prod",
    "location" : "de"
  }
  public_net {
    ipv4_enabled = true
    ipv4         = hcloud_primary_ip.box_ip.id
    ipv6_enabled = false
  }

  firewall_ids = [hcloud_firewall.box_fw.id]
}

output "box_public_ip" {
  value = hcloud_server.box.ipv4_address
}

output "ssh_box" {
  value = "ssh -i ~/.ssh/id_ed25519_local root@${hcloud_server.box.ipv4_address}"
}

Then we can apply our code with a simple:

terraform apply -var="local_ip=$(curl -s ifconfig.me)"

To list the available images, use hcloud image list.
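Note that the hcloud_token variable has no default, and the apply above only passes local_ip. One option, a minimal sketch assuming you keep the API token in your shell environment, is Terraform's TF_VAR_ mechanism:

# hypothetical token value; generate one in the Hetzner Cloud console
export TF_VAR_hcloud_token="<your-hetzner-api-token>"
terraform init
terraform apply -var="local_ip=$(curl -s ifconfig.me)"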

So, when we have our CentOS box, let's install some baseline packages on it.

Ansible

The standard tool for this will be Ansible: a simple, stable, and solid product, 100% open source. For setting up my server, I decided to write a custom role. The structure of my role's tasks folder is simple, but IMO requires some explanation.

  • tasks starting with 01 and 02 are core services
  • tasks starting with 03 are responsible for packages
  • tasks starting with 1 are application services
tasks
├── 010_shost.yml
├── 020_ssh.yml
├── 030_packages.yml
├── 035_tailscale.yml
├── 036_storagebox.yml
├── 100_containers.yml
├── 101_linkwarden.yml
├── 102_miniflux.yml
├── 103_umami.yml
├── 104_internal.yml
├── 105_immich.yml
├── 106_jellyfin.yml
└── main.yml

main.yml is the role's control point; I keep the tags and custom logic here:

# tasks file for roles/hetzner
- name: Ensure that shost exists, it's the main user, and root cannot access the server
  tags:
    - baseline
  ansible.builtin.import_tasks:
    file: 010_shost.yml

- name: Ensure that ssh config is correct
  tags:
    - baseline
  ansible.builtin.import_tasks:
    file: 020_ssh.yml

- name: Ensure that needed packages are installed
  tags:
    - packages
  ansible.builtin.import_tasks:
    file: 030_packages.yml

- name: Ensure that Tailscale is installed if needed
  tags:
    - vpn
  ansible.builtin.import_tasks:
    file: 035_tailscale.yml

- name: Ensure that StorageBox was attached
  tags:
    - storage
  when:
    - hetzner_storagebox_enabled
  ansible.builtin.import_tasks:
    file: 036_storagebox.yml

- name: Ensure that containers have the latest configuration
  tags:
    - never
    - containers
  ansible.builtin.import_tasks:
    file: 100_containers.yml
Then every new application gets a dedicated task file, which looks like this (101_linkwarden.yml):

- name: Supply system with Linkwarden network configuration
  notify:
    - Restart Linkwarden
  ansible.builtin.template:
    src: linkwarden.network.j2
    dest: /home/shost/.config/containers/systemd/linkwarden.network
    mode: '0700'
    owner: shost
    group: shost

- name: Supply system with Linkwarden service
  notify:
    - Restart Linkwarden
  ansible.builtin.template:
    src: linkwarden.service.j2
    dest: /home/shost/.config/systemd/user/linkwarden.service
    mode: '0700'
    owner: shost
    group: shost

- name: Supply system with Linkwarden PostgreSQL config
  notify:
    - Restart Linkwarden
  ansible.builtin.template:
    src: linkwarden-postgresql.container.j2
    dest: /home/shost/.config/containers/systemd/linkwarden-postgresql.container
    mode: '0700'
    owner: shost
    group: shost

- name: Supply system with Linkwarden server
  notify:
    - Restart Linkwarden
  ansible.builtin.template:
    src: linkwarden-app.container.j2
    dest: /home/shost/.config/containers/systemd/linkwarden-app.container
    mode: '0700'
    owner: shost
    group: shost

- name: Supply system with Linkwarden Tunnel
  notify:
    - Restart Linkwarden
  ansible.builtin.template:
    src: linkwarden-tunnel.container.j2
    dest: /home/shost/.config/containers/systemd/linkwarden-tunnel.container
    mode: '0700'
    owner: shost
    group: shost

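The Restart Linkwarden handler referenced by notify isn't shown in this post; a minimal sketch of what it could look like (the file location and module choice are my assumption) is a user-scoped systemd restart in the role's handlers:

# roles/hetzner/handlers/main.yml (hypothetical location)
- name: Restart Linkwarden
  ansible.builtin.systemd:
    name: linkwarden.service
    scope: user
    daemon_reload: true
    state: restarted

Because the container units shown below declare PartOf=linkwarden.service, restarting that single dummy unit propagates to the whole stack.
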
You may be wondering why I'm not using docker-compose and have a lot of systemd services instead. Let me explain.

Podman

By default, Podman supports rootless containers. What does that mean? Basically, you don't need to be root, or a member of the docker group, to run a container. All the magic happens inside a user namespace, and the container has access only to that user's data. As an extra feature, we get SELinux contexts on files as another security layer. However, as good as the community.docker.docker_compose_v2 module is, there is no Podman equivalent. The folks responsible for the project say you should use Quadlets, not Compose files. WTF are Quadlets? Generally speaking, they are systemd units, living in the user's namespace, that orchestrate Podman containers. Unfortunately, one by one. Wait, what? Yes, you need to write a new unit per container, ah, and per network, and wire up the service dependencies. Sounds like fun, doesn't it?
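
For completeness, here is a rough sketch of how these rootless Quadlets come to life once Ansible has put the files under ~/.config/containers/systemd/ and ~/.config/systemd/user/ (the enable-linger step is an assumption about keeping user services running without an open SSH session):

# allow shost's user services to keep running without an active login
sudo loginctl enable-linger shost

# as shost: let the quadlet generator turn *.container/*.network files into units
systemctl --user daemon-reload

# start the dummy unit; the container units are wanted by it, so they should come up with it
systemctl --user start linkwarden.service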

There is a project on the internet that converts docker-compose files directly into Quadlets. However, I will show you my systemd units, which could be helpful.

  • linkwarden.service.j2 is a dummy service, which allows me to control the whole application with one unit.
[Unit]
Description=Linkwarden

[Service]
Type=oneshot
ExecStart=/bin/true
RemainAfterExit=yes

[Install]
WantedBy=basic.target
  • linkwarden.network.j2 is a simple definition of my separate network, used only by this particular app.
[Unit]
Description=Linkwarden - Network
PartOf=linkwarden.service

[Network]
  • linkwarden-app.container.j2 is the main service for my app; as I'm using Ansible and Jinja2, avoiding hardcoded credentials is easy (use sops)
[Unit]
Description=Linkwarden - Server
PartOf=linkwarden.service
After=linkwarden.service
After=linkwarden-network.service
After=linkwarden-postgresql.service

[Container]
Image=ghcr.io/linkwarden/linkwarden:v{{ hetzner_linkwarden_app_version }}
ContainerName=linkwarden-app
Network=systemd-linkwarden
Volume=linkwarden-data:/data/data
LogDriver=journald

Environment="DATABASE_URL=postgresql://postgres:{{ linkwarden_postgresql_password }}@linkwarden-postgresql:5432/postgres"
Environment=NEXTAUTH_SECRET={{ linkwarden_next_auth_secret }}
Environment=NEXTAUTH_URL=http://localhost:3000/api/v1/auth
Environment=NEXT_PUBLIC_DISABLE_REGISTRATION=true

[Service]
Restart=always

[Install]
WantedBy=linkwarden.service

As you may have noticed, volumes are specified in a very straightforward way, the usage of After/PartOf is needed, and, what is tricky, the full image path is required, as Podman can have trouble finding short names like linkwarden/linkwarden. The network is referenced as systemd-linkwarden because that is the default name Quadlet gives to a network defined in linkwarden.network.
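
Since everything is just a user-level systemd unit with LogDriver=journald, day-to-day inspection stays in familiar territory; for example (unit names follow the files above):

systemctl --user status linkwarden-app.service
journalctl --user -u linkwarden-app.service -f
podman ps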

  • linkwarden-postgresql.container.j2 - database service
[Unit]
Description=Linkwarden - Postgresql
PartOf=linkwarden.service
After=linkwarden.service
After=linkwarden-network.service

[Container]
Image=docker.io/library/postgres:16-alpine
ContainerName=linkwarden-postgresql
Network=systemd-linkwarden
Volume=linkwarden-postgresql:/var/lib/postgresql/data
LogDriver=journald

Environment="POSTGRES_PASSWORD={{ linkwarden_postgresql_password }}"

[Service]
Restart=always

[Install]
WantedBy=linkwarden.service
  • linkwarden-tunnel.container.j2 - the Cloudflare Tunnel
[Unit]
Description=Linkwarden - Tunnel
PartOf=linkwarden.service
After=linkwarden.service
After=linkwarden-postgresql.service
After=linkwarden-network.service
Requires=linkwarden-app.service

[Container]
ContainerName=linkwarden-tunnel
Exec=tunnel --no-autoupdate run
Image=docker.io/cloudflare/cloudflared:{{ hetzner_tunnel_version }}
Network=systemd-linkwarden
Volume=linkwarden-tunnel:/etc/cloudflared
LogDriver=journald

Environment="TUNNEL_TOKEN={{ linkwarden_tunnel_token }}"

[Service]
Restart=always

[Install]
WantedBy=linkwarden.service

I like the idea of tunnels, even if they are commercial software. They allow me to expose my services to the internet without the need to set up NGINX or Caddy and, more importantly, without having to harden them.

Summary

So far, so good. The solution seems complex, but after the initial setup it is very secure and stable. The regular path for upgrading my services is changing the image version in a group_vars file:

$ diff --git a/apps.yaml b/apps.yaml
index 63b65b2..e08c7a3 100644
--- a/apps.yaml
+++ b/apps.yaml
@@ -1 +1 @@
-hetzner_linkwarden_app_version: 2.8.3
+hetzner_linkwarden_app_version: 2.9.3

Then just running the container tasks (they are tagged never, so they only run when requested explicitly):

$ ansible-playbook \
  selfhost-hetzner.yaml \
  --tags containers -u shost

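The selfhost-hetzner.yaml playbook itself isn't shown here; a minimal sketch of how it could wire up the role (the inventory group name is my assumption) would be:

# selfhost-hetzner.yaml (sketch)
- name: Configure the self-hosting box
  hosts: hetzner
  roles:
    - hetzner
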
That is it. What do I think after 4 months of using Hetzner in production? I'm very happy with them: the price is unbeatable, the platform is stable, and the user experience is top-notch. For a project like this one, I can't recommend it enough.

Ah, a few words about the AmpereOne CPUs: for the services I'm using, there is no problem with ARM64 binaries.

  • Miniflux
  • Jellyfin
  • Immich
  • Linkwarden
  • N8N
  • Caddy
  • Actual-budget
  • uptime-kuma
  • ghost

All of them run very well on an ARM64 CPU, 4 cores and 8 GB of RAM, to be precise. This is probably due to the popularity of the Raspberry Pi in the self-hosting landscape. So yes, if you have a chance to use ARM CPUs, give them a spin. It will be cheaper, probably more efficient, and you can give yourself an 'innovative soul' award.