
nixConfig

Build a machine

There are multiple ways to build and deploy a machine configuration. Which one is the most appropriate depends on the context and scenario. So first there will be a general explanation of how this works, and afterwards we will talk about some scenarios.

If you run nix flake show you should get an output similar to this:

$ nix flake show
git+file:///home/nerf/git/nixConfig?ref=refs%2fheads%2fnyarlathtop&rev=9d0eb749287d1e9e793811759dfa29469ab706dc
├───apps
│   └───x86_64-linux
├───checks
│   └───x86_64-linux
├───devShells
│   └───x86_64-linux
├───formatter
├───legacyPackages
│   └───x86_64-linux omitted (use '--legacy' to show)
├───nixosConfigurations
│   └───nyarlathotep: NixOS configuration
├───nixosModules
├───overlays
└───packages
    └───x86_64-linux

We can see there is an output called nixosConfigurations.nyarlathotep, which contains the config of the machine called nyarlathotep. nixosConfigurations is special in the sense that nixos-rebuild will automatically look for this key and makes assumptions about how it is structured. The interesting part for us is the derivation config.system.build.toplevel. Its closure contains the whole system, and the resulting derivation contains a script (called /bin/switch-to-configuration) that changes the current system to that derivation.

So what we want to achieve is to populate the nix store of the target machine with the closure of the derivation .#nixosConfigurations.<name>.config.system.build.toplevel and run the resulting script on the target machine.
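
Once that derivation has been built (see Build below), you can inspect its closure locally, for example like this (a sketch, assuming this repository is your current directory and nyarlathotep is the machine you care about):

$ nix path-info --recursive .#nixosConfigurations.nyarlathotep.config.system.build.toplevel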

Local

Building the system config on the local computer and pushing it to the target server has multiple benefits. For example, the server is not stressed with the load that comes with evaluating the expression. Also, the server doesn't need to fetch the build dependencies this way. And you get a local check that at least the nix syntax is correct. And so on...

Build

If you have this repository locally in your current directory you can just run:

$ nix build .#nixosConfigurations.<name>.config.system.build.toplevel

But you don't need to clone this repository; for more on flake URLs see the nix flake --help documentation.
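
For example, building straight from the Gitea remote could look roughly like this (a sketch; the exact URL syntax depends on how your SSH access to the Gitea instance is set up):

$ nix build 'git+ssh://gitea.mathebau.de:3022/Fachschaft/nixConfig?ref=nyarlathtop#nixosConfigurations.nyarlathotep.config.system.build.toplevel'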

Copy

After we have built the derivation we need to get the closure onto the target system. Luckily nix has tools to do that via ssh. We could just run:

$ nix copy -s --to <however you setup your ssh stuff> .#nixosConfigurations.<name>.config.system.build.toplevel

We do not need the flake anymore at this point; instead of specifying the flake output we could also give the store path directly.
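
For example (a sketch; ./result is the symlink left behind by the nix build above, and the ssh destination is a placeholder for your own setup):

$ nix copy -s --to ssh://<user>@<target-host> ./result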

The -s is important: it makes the target machine substitute all derivations it can (by default from cache.nixos.org), so you only upload config files and self-built things.

To be able to copy things to a machine, the paths need to be signed by someone the machine trusts. Additional trusted nix keys are handled in ./nixos/roles/nix_keys.nix. So to get yourself trusted you either need to install one derivation from the machine itself, or find someone who is already trusted.

For more information on signing and key creation see nix store sign --help and nix key --help.
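
A minimal signing workflow might look like this (a sketch; the key name and file location are arbitrary placeholders):

$ nix key generate-secret --key-name <your-name> > ~/nix-signing.key
$ nix key convert-secret-to-public < ~/nix-signing.key    # public key to add to nixos/roles/nix_keys.nix
$ nix store sign --key-file ~/nix-signing.key --recursive .#nixosConfigurations.<name>.config.system.build.toplevel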

Activate

Log into the remote machine and execute

# /nix/store/<storepath>/bin/switch-to-configuration boot

That will set up a configuration switch at the next reboot. You can also switch the configuration live. For more details consult the --help output of that script.
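
For example, to switch immediately, or to activate the configuration without adding it to the boot menu:

# /nix/store/<storepath>/bin/switch-to-configuration switch
# /nix/store/<storepath>/bin/switch-to-configuration test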

If you have nixos-rebuild available on your system, it can automate these steps with the --flake and --target-host parameters. But there are some pitfalls, so look at the nixos-rebuild documentation beforehand.
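
That could look roughly like this (a sketch; user and host are placeholders, and depending on your ssh and sudo setup you may need different flags):

$ nixos-rebuild boot --flake .#nyarlathotep --target-host <user>@<target-host> --use-remote-sudo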

On the machine

Clone this repo to /etc/nixos/ and run nixos-rebuild boot or nixos-rebuild switch; that will select the appropriate machine configuration based on the hostname.

If the hostname is not correct, or you don't want to clone this flake, you can also use the --flake parameter.
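
For example (a sketch, assuming the repo is cloned to /etc/nixos and you want the nyarlathotep configuration):

# nixos-rebuild switch --flake /etc/nixos#nyarlathotep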

In any case, to switch the system configuration you will need root privileges on the target machine.

How this flake is organized

This flake uses flake-parts, see flake.parts for more details. It makes handling systems and some other module-related things more convenient. For the general layout of nixos system configs and modules, please see the corresponding documentation.

The toplevel flake.nix contains the flake inputs as usual and only calls a file flake-module.nix. This toplevel flake-module.nix imports further, more specialised flake-module.nix files from subdirectories. Right now the only one is nixos/flake-module.nix.
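
Schematically, that wiring might look like this (a sketch using flake-parts' mkFlake entry point; see the actual files for the real details):

# flake.nix outputs (schematic)
outputs = inputs@{ flake-parts, ... }:
  flake-parts.lib.mkFlake { inherit inputs; } {
    systems = [ "x86_64-linux" ];
    imports = [ ./flake-module.nix ];
  };

# flake-module.nix (schematic)
{ ... }:
{
  imports = [ ./nixos/flake-module.nix ];
}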

The nixos folder contains all machine configurations. It is separated into two folders, nixos/machines and nixos/roles.

nixos/machines contains all machine-specific configuration (in a subfolder per machine), like hardware configuration, specific network configuration, and service configurations that are too closely interwoven with the rest of that machine. It also contains the root config for that machine, called configuration.nix. This file usually only includes other modules.
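
For illustration, such a file might look roughly like this (a hypothetical sketch; the file names besides ../../roles are assumptions):

# nixos/machines/<name>/configuration.nix (schematic)
{ ... }:
{
  imports = [
    ./hardware-configuration.nix  # assumed name for the hardware specifics
    ../../roles                   # shared roles, see below
  ];
}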

nixos/roles contains config that is potentially shared by several machines. It is expected that nixos/roles/default.nix is imported as (../../roles) in every machine. Notable are the files nixos/roles/admins.nix, which contains common admin accounts for these machines, and nixos/roles/nix_keys.nix, which contains the additional trusted keys for the nix store.
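
Schematically (a sketch; the real default.nix may import further modules):

# nixos/roles/default.nix (schematic)
{ ... }:
{
  imports = [
    ./admins.nix    # common admin accounts
    ./nix_keys.nix  # additional trusted nix store keys
  ];
}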

sops

We are sharing secrets using sops and sops-nix. As of right now we use only age keys. The machine keys are derived from their server SSH keys, which they generate at first boot. User keys are generated by the users. New keys and machines need entries in the .sops.yaml file within the root directory of this repo.
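
For example, key material is typically handled like this (a sketch; age-keygen and ssh-to-age are the usual tools around sops-nix, and the key file location is just sops' default):

$ age-keygen -o ~/.config/sops/age/keys.txt    # generate a personal user key
$ ssh-keyscan <machine> | ssh-to-age           # derive a machine's age public key from its SSH host key

The resulting public keys get their entries in .sops.yaml, and already encrypted files then need a sops updatekeys <file> so the new keys can decrypt them.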

To make a secret available on a given machine you need to configure the following keys:

sops.secrets.example-key = {
  # relative path to the file in the repo containing the secrets (optional, else sops.defaultSopsFile is used)
  sopsFile = ./secrets/example.yaml;           # example value
  # optional path the secret gets symlinked to, practical if some program expects a specific path
  path = "/var/lib/example/secret";            # example value
  owner = config.users.users.nerf.name;        # user that owns the secret file
  group = config.users.users.nerf.group;       # same as owner, just with groups
  mode = "0400";                               # permissions in the usual octal notation
};

Afterwards the secret should be available in /run/secrets/example-key. If the accessing process is not root, it must be a member of the group config.users.groups.keys. For systemd services this can be achieved by setting serviceConfig.SupplementaryGroups = [ config.users.groups.keys.name ]; in the service config.
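
For example (a sketch with a hypothetical service name):

systemd.services.example-service.serviceConfig.SupplementaryGroups = [
  config.users.groups.keys.name
];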