NixOS on Hetzner cloud VPS

This guide assumes you have basic knowledge of NixOS and nix flakes.

Hetzner is a great cloud server provider. It's very affordable and has servers in Germany, Finland, and the US. As I'm located in Finland, the Finnish datacenter is particularly useful to me: I get a low-latency VPS in the same country at a very low price.

As a NixOS enthusiast, I like to run NixOS on everything, including my cloud servers and even my Raspberry Pi 4. Unfortunately, Hetzner does not offer NixOS as one of its OS options when setting up a server. This is where nixos-anywhere comes in.

By using nixos-anywhere, it's possible to replace the operating system of the server while it's running, through an SSH connection. In this blog post, I will go into detail on what I had to configure to make NixOS work smoothly in a Hetzner VM. This process involved a lot of trial and error, resulting in several unbootable servers.

Hetzner setup

First, you have to set up a new Hetzner server by following the steps in the cloud dashboard. Simply choose Ubuntu as the OS and install your SSH keys as usual. If you are able to SSH as root now, you're good to go.

ssh root@1.2.3.4

Disks

For partitioning and formatting the disks, we are going to use disko. It integrates with nixos-anywhere and runs the required partitioning and formatting commands during the installation process.

First, we need the device names for the block devices we are going to format. You could simply use /dev/sda, /dev/nvme0n1, etc., but those names are not reliable, especially if your server has additional block devices attached (for example, Hetzner volumes). The kernel can assign these names in a different order across reboots, which can cause the system to try to boot from the wrong disk. For this reason, we will use the stable symlinks under /dev/disk/by-id. You could also use UUIDs or anything else that doesn't change. By running these commands, you should be able to determine the devices you need:

lsblk
ls -la /dev/disk/by-id
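
On a Hetzner cloud VM, the entries under /dev/disk/by-id are typically QEMU SCSI IDs. A trimmed, hypothetical listing might look something like this, with each symlink pointing at the kernel device name it currently maps to:

lrwxrwxrwx 1 root root  9 Jan  1 00:00 scsi-0QEMU_QEMU_HARDDISK_52101387 -> ../../sda
lrwxrwxrwx 1 root root 10 Jan  1 00:00 scsi-0QEMU_QEMU_HARDDISK_52101387-part1 -> ../../sda1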

Now let's create the disk-config.nix file. Here is an example that defines a single GPT disk with three partitions: boot, ESP, and root. The first is a tiny BIOS boot partition (type EF02) that keeps legacy BIOS booting possible, the second is the EFI system partition used by the UEFI bootloader, and the rest of the disk is used as the root storage of the server. Note the device string: this should be the one you got from the previous step.

{
  disko.devices.disk.os = {
    device = "/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_52101387";
    type = "disk";
    content = {
      type = "gpt";
      partitions = {
        boot = {
          type = "EF02";
          size = "1M";
        };
        ESP = {
          type = "EF00";
          size = "512M";
          content = {
            type = "filesystem";
            format = "vfat";
            mountpoint = "/boot";
          };
        };
        root = {
          size = "100%";
          content = {
            type = "filesystem";
            format = "ext4";
            mountpoint = "/";
          };
        };
      };
    };
  };
}

In case you need additional volumes, more disks can be added. This example is for Hetzner block storage:

disko.devices.disk.block = {
  device = "/dev/disk/by-id/scsi-0HC_Volume_100874627";
  type = "disk";
  content = {
    type = "filesystem";
    format = "ext4";
    mountpoint = "/data";
  };
};

System configuration

First, we have to add disko to the inputs in our flake.nix:

inputs = {
  disko = {
    url = "github:nix-community/disko";
    inputs.nixpkgs.follows = "nixpkgs";
  };
};

Now we can create a new host in the flake. The host should consist of configuration.nix and disk-config.nix in the same directory. Depending on the layout of your flake, it could look something like this:

nixosConfigurations = {
  hetzner = lib.nixosSystem {
    inherit specialArgs;
    modules = [ ./hosts/hetzner/configuration.nix ];
  };
};
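
If you don't already have a flake scaffold, here is a minimal sketch of how these fragments could fit together. The nixpkgs branch and the contents of specialArgs are assumptions you should adapt to your own setup:

{
  inputs = {
    # Assumption: pinning to the 24.05 release branch to match stateVersion below.
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
    disko = {
      url = "github:nix-community/disko";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { self, nixpkgs, ... } @ inputs:
    let
      lib = nixpkgs.lib;
      # Passed to every module; the configuration.nix below expects self and inputs.
      specialArgs = { inherit self inputs; };
    in
    {
      nixosConfigurations = {
        hetzner = lib.nixosSystem {
          inherit specialArgs;
          modules = [ ./hosts/hetzner/configuration.nix ];
        };
      };
    };
}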

We will start building our configuration.nix with the following skeleton. It imports some things we will need, including our disk configuration, and sets some basic variables. The qemu-guest.nix profile comes from the nixpkgs modules and is responsible for loading the kernel modules needed to make the VM bootable; Hetzner uses QEMU under the hood for its cloud VMs.

{ self, inputs, modulesPath, lib, pkgs, ... }:
{
  imports = [
    (modulesPath + "/profiles/qemu-guest.nix")
    inputs.disko.nixosModules.disko
    ./disk-config.nix
  ];

  system.stateVersion = "24.05"; # the current version at time of writing
  nixpkgs.hostPlatform = "x86_64-linux";
  hardware.enableRedistributableFirmware = true;
  networking.hostName = "hetzner";
  time.timeZone = "UTC";
}

For networking, I've found using DHCP to be sufficient. The net.ifnames=0 kernel parameter makes the network interfaces use more familiar names (eth0).

networking.useDHCP = true;
boot.kernelParams = [ "net.ifnames=0" ];

GRUB works great as the bootloader on Hetzner, and we will be using UEFI. The efiInstallAsRemovable option installs GRUB to the default removable-media path on the ESP, so booting doesn't depend on writing boot entries to the VM's EFI variables.

boot.loader.grub = {
  efiSupport = true;
  efiInstallAsRemovable = true;
};

Next, we need to add our user and enable SSH login. Don't forget this step, or you will be locked out of the system. It's also a good idea to add some essential packages and enable bash completion.

users.users.admin = {
  isNormalUser = true;
  openssh.authorizedKeys.keys = [ "ssh-ed25519 XXX" ];
  extraGroups = [ "wheel" ];
};
services.openssh.enable = true;
environment.systemPackages = with pkgs; [ vim git ];
programs.bash.enableCompletion = true;

Finally, enable the user to use sudo without a password. This makes it easier to update the server remotely.

security.sudo = {
  enable = true;
  wheelNeedsPassword = false;
};

Installation

Now we are ready to proceed with the installation. nixos-anywhere can be easily used with nix run. This command builds the flake attribute #hetzner and installs it on the server. More information is available in the nixos-anywhere docs.

nix run github:nix-community/nixos-anywhere -- --flake .#hetzner root@1.2.3.4
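
If you want to sanity-check things before wiping a real machine, nixos-anywhere also offers a --vm-test mode that builds and boots the configuration in a local QEMU VM instead of installing to a remote host (see the nixos-anywhere docs for details; this assumes the flake builds on your local machine):

nix run github:nix-community/nixos-anywhere -- --flake .#hetzner --vm-test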

Once the installation completes, your server will be running NixOS! You can now try rebooting to ensure everything boots up fine, and check for errors in journalctl.
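
For example, after the reboot you could log back in and look for problems in the current boot; -b limits journalctl to the current boot and -p err filters for error-level messages and above:

ssh admin@1.2.3.4
journalctl -b -p err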

Updating

You might have a NixOS server now, but it's not doing anything yet. Once you have added more services to your configuration, you will need to deploy the new version to the server. The simplest way to accomplish this is with the built-in flags of nixos-rebuild:

nixos-rebuild switch --target-host admin@1.2.3.4 --use-remote-sudo --flake .#hetzner

For more advanced setups with multiple servers, I recommend using deploy-rs.
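
As a starting point, a deploy-rs node definition in the flake's outputs could look something like this sketch. It assumes deploy-rs has been added as a flake input, and the hostname, user names, and architecture are placeholders to adapt; see the deploy-rs README for the full set of options:

deploy.nodes.hetzner = {
  # Assumption: the server's public IP; a DNS name works too.
  hostname = "1.2.3.4";
  profiles.system = {
    # Activate the profile as root, connecting over SSH as the admin user created above.
    user = "root";
    sshUser = "admin";
    path = inputs.deploy-rs.lib.x86_64-linux.activate.nixos
      self.nixosConfigurations.hetzner;
  };
};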