# nixConfig
This repository contains the configuration of all our machines running NixOS.
## Build a machine

There are multiple ways to build and deploy a machine configuration; which one is the
most appropriate depends on the context and scenario. So first there is a general
explanation of how this works, and afterwards we will discuss some scenarios.

If you run `nix flake show`, you should get output similar to this:

```
$ nix flake show
git+file:///home/nerf/git/nixConfig?ref=refs%2fheads%2fnyarlathtop&rev=9d0eb749287d1e9e793811759dfa29469ab706dc
├───apps
│   └───x86_64-linux
├───checks
│   └───x86_64-linux
├───devShells
│   └───x86_64-linux
├───formatter
├───legacyPackages
│   └───x86_64-linux omitted (use '--legacy' to show)
├───nixosConfigurations
│   └───nyarlathotep: NixOS configuration
├───nixosModules
├───overlays
└───packages
    └───x86_64-linux
```

We can see there is an output called `nixosConfigurations.nyarlathotep`, which contains the configuration of the machine
called nyarlathotep. `nixosConfigurations` is special in the sense that `nixos-rebuild` will automatically look
for this key and assume how it is structured. The interesting part for us is the derivation `config.system.build.toplevel`.
Its closure contains the whole system, and the resulting derivation contains a script (called `/bin/switch-to-configuration`)
that switches the current system to that derivation.

So what we want to achieve is to populate the nix store of the target machine with the closure of the derivation
`.#nixosConfigurations.<name>.config.system.build.toplevel` and run the resulting script on the target machine.
### Local

Building the system configuration on the local computer and pushing it to the target server has multiple benefits.
For example, one doesn't stress the server with the load of evaluating the expression and building the closure. Also, the server
doesn't need to fetch the build dependencies this way, and one gets a local check that at least the nix syntax is correct.
And so on...
#### Build

If you have this repository checked out in your current directory, you can just run

```
$ nix build .#nixosConfigurations.<name>.config.system.build.toplevel
```

to build the system configuration of the machine `<name>`.

But you don't need to clone this repository; for more, see the `nix flake --help` documentation about flake URLs.
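
For example, something like this should work (a sketch; see `nix flake --help` for the exact URL syntax):

```
$ nix build 'git+https://gitea.mathebau.de/Fachschaft/nixConfig#nixosConfigurations.<name>.config.system.build.toplevel'
```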
#### Copy

After we have built the derivation, we need to get the closure onto the target system. Luckily, nix has tools to do that
via ssh. We could just run:

```
$ nix copy -s --to <however you set up your ssh stuff> .#nixosConfigurations.<name>.config.system.build.toplevel
```

This will evaluate the flake again to get the store path of the given derivation. If we want to avoid this,
we can supply the corresponding store path directly.
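
A minimal sketch, assuming the `nix build` invocation above left a `./result` symlink in the current directory:

```
$ nix copy -s --to <however you set up your ssh stuff> "$(readlink -f ./result)"
```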

The `-s` is important: it makes the target machine substitute all derivations it can (by default from cache.nixos.org),
so you only upload configuration files and self-built things.

To be able to copy things to a machine, they need to be signed by someone trusted. Additional trusted nix keys are handled
in `./nixos/roles/nix_keys.nix`. So to get yourself trusted, you either need to install one derivation from the machine itself,
or find someone who is already trusted to push your key.

For more information on signing and key creation, see `nix store sign --help` and `nix key --help`.
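
For illustration, generating a signing key and signing a build result could look roughly like this (the key name and file location are only examples, not a convention of this repository):

```
$ nix key generate-secret --key-name <some name>-1 > ~/nix-signing-key.sec
$ nix store sign --key-file ~/nix-signing-key.sec --recursive "$(readlink -f ./result)"
```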
#### Activate

Log into the remote machine and execute (with root privileges)

```
# /nix/store/<storepath>/bin/switch-to-configuration boot
```

That will set up a configuration switch at reboot. You can also switch the configuration live; for more
details, consult the `--help` output of that script. The store path (or at least the hash of the derivation)
is exactly the same as it was on your machine.

If you have `nixos-rebuild` available on your system, it can automate these steps with the `--flake` and
`--target-host` parameters. But there are some pitfalls, so look at the `nixos-rebuild` documentation beforehand.
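
Such an invocation could look roughly like this (a sketch; the ssh destination is a placeholder, and `--use-remote-sudo` assumes your remote user may use sudo):

```
$ nixos-rebuild boot --flake .#<name> --target-host <your ssh destination> --use-remote-sudo
```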
### On the machine

Clone this repository to `/etc/nixos/` and run `nixos-rebuild boot` or `nixos-rebuild switch`; this will select
the appropriate machine configuration based on the hostname.

If the hostname is not correct, or you don't want to clone this flake, you can also use the `--flake` parameter.
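
For example (a sketch; the flake reference can be a local checkout or a remote URL):

```
# nixos-rebuild switch --flake /etc/nixos#<name>
```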

In any case, to switch the system configuration you will need to have root privileges on the target machine.
## Installing a new machine

You have written a configuration and now want to deploy it as a new machine. You need to get the built configuration onto the
`nixos-installer` machine (regarding this machine see issue [#10]). You can use any of the
methods above, or just continue; then the machine will build the configuration implicitly.
### Disk layout

You will need to assemble the disk layout manually. We assume you do it below `/mnt`, as the nixos-install tools
assume this as the default location (they have an option to change that; consult their `--help` pages).

This repository loads some default configuration that expects certain things. The hardware configuration of that machine should
reflect those:

- `"/"` is a tmpfs
- `"/persist"` is the place where we keep data that cannot be regenerated at any boot, so this should be a permanent disk
- `"/nix"` is the place where the nix store resides; it is needed to boot the machine and should also be persistent
- `"/boot"` is the place for the bootloader configuration and the kernel; also persistent
- any additional data paths for your machine-specific needs; choose filesystems accordingly

My recommendation is to put `"/persist"` and `"/nix"` on a joint btrfs as subvolumes and `"/boot"` on a separate disk, because grub
will give you a hard time if you do it as a subvolume or bind mount (even though that should be possible, it is an upstream problem).
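
A minimal sketch of the corresponding filesystem part of a hardware configuration, following that recommendation (the uuids, the tmpfs size and the subvolume names are placeholders, not a convention of this repository):

```
{
  fileSystems."/" = {
    device = "none";
    fsType = "tmpfs";
    options = [ "defaults" "mode=755" "size=2G" ];
  };

  # btrfs subvolumes holding the persistent state and the nix store
  fileSystems."/persist" = {
    device = "/dev/disk/by-uuid/<uuid of the btrfs volume>";
    fsType = "btrfs";
    options = [ "subvol=persist" ];
    neededForBoot = true;
  };
  fileSystems."/nix" = {
    device = "/dev/disk/by-uuid/<uuid of the btrfs volume>";
    fsType = "btrfs";
    options = [ "subvol=nix" ];
  };

  fileSystems."/boot" = {
    device = "/dev/disk/by-uuid/<uuid of the boot partition>";
    fsType = "ext4";
  };
}
```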

For how to configure additional persistent data to be stored in `"/persist"`, look at the impermanence section as soon as
it is merged; until then, look at issue [#9].
I do not recommend this for actual high-access application data like databases, mailboxes and the like. You should
think about this as data that, if lost, can be regenerated with little trouble and is read/written only a few times
during setup (like the server ssh keys, for example). The configuration also sets up some paths for `"/persist"` automatically;
again, look at the impermanence section.
#### File system uuids

You might end up with a bit of a chicken/egg problem regarding filesystem uuids, since you need to set them in your system configuration.
There are two ways around that: either generate the filesystems, read out the uuids, and push them into the repository holding
the configuration you want to build, or generate the uuids first, have them in your configuration, and set them upon filesystem creation. Most
`mkfs` utilities have an option for that.
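
A sketch of the second approach (device names are placeholders, and the exact flag differs between `mkfs` implementations; check their `--help`):

```
$ uuidgen                                        # put the generated uuid into your configuration
$ mkfs.btrfs --uuid <that uuid> /dev/<your data partition>
$ mkfs.ext4 -U <another uuid> /dev/<your boot partition>
```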
### Installing

Just run

```
nixos-install --flake 'git+https://gitea.mathebau.de/Fachschaft/nixConfig?ref=<branchname>#<name>'
```

where `<branchname>` is the branch you install from and `<name>` is the name of the configuration you build.

If the built system is already in the nix store, this will start the installation; otherwise, it will first attempt to build
it. That should be the whole installation process: just reboot and the machine should be fully set up, with no additional user
or service setup needed after the reboot.
## How to write a new machine configuration

It is best to take a first look at the already existing configurations, but here are a few guidelines.
Make a new folder in `/nixos/machines`; the name of the folder should match the hostname of your
machine. The only technically required file in there is `configuration.nix`, so create it.

A good skeleton is probably:

```
flake-inputs:
{ config, pkgs, lib, ... }: {

  imports = [
    ./hardware-configuration.nix
    ../../roles
    ./network.nix

    <your additional imports here>
  ];

  <your system config here>

  networking.hostName = "<your hostname>"; # this will hopefully disappear if I have time to refactor this
  system.stateVersion = "<state version at time of install>";
}
```

The import of `../../roles` loads all the nice default setup that all these machines have in common. There, the
impermanence configuration is loaded, as well as ssh, sops, shared user configuration and much more.
The other two imports are suggestions for how you should organize your configuration, but they are not enforced by anything.
In your hardware configuration you should basically only write your filesystem layout and your hostPlatform; the bootloader
stuff is already taken care of by `../../roles`.

As of the moment of writing, `network.nix` should contain the ip, nameserver and default gateway setup. Parts of
this are constant across all systems and will undergo a refactor soon.

I would recommend splitting your configuration into small files that you import. If something is machine specific (like
being tied to your ip address or hostname), put it into the machine directory. If it is not, put it into `/nixos/roles/`; if it
is not machine specific but has options to set, put it in `/nixos/modules`.
## How this flake is organized

This flake uses `flake-parts`; see [flake.parts](https://flake.parts) for more details. It makes handling
`system` and some other module-related things more convenient.
For the general layout of nixos system configurations and modules, please see the corresponding documentation.

The toplevel `flake.nix` contains the flake inputs as usual and only calls a file `flake-module.nix`.
This toplevel `flake-module.nix` imports further, more specialized `flake-module.nix` files from sub-directories.
Right now the only one is `nixos/flake-module.nix`, but if we start to ship our own software (or software versions
with specific build flags), there might be more.
### nixos

The `nixos` folder contains all machine configurations. It is separated into two folders, `nixos/machines` and `nixos/roles`.
The corresponding `flake-module.nix` file automatically searches for `machines/<name>/configuration.nix`, evaluates
those as nixos configurations, and populates the flake.
#### machines

`nixos/machines` contains all machine-specific configuration (in a sub-folder per machine), like hardware configuration, specific
network configuration, and service configurations that are too closely interwoven with the rest of that machine (for example,
mailserver configuration depends heavily on network settings). It also
contains the root configuration for that machine, called `configuration.nix`. This file usually only includes other modules.

These `configuration.nix` files are almost usual nix configurations. The only difference is that they take the flake inputs
as an extra argument. This allows them to load modules from those flakes. For example, nyarlathotep loads the simple-nixos-mailserver
module that way.
#### roles

`nixos/roles` contains configuration that is potentially shared by several machines. It is expected that `nixos/roles/default.nix`
is imported (as `../../roles`) in every machine. Notable are the files `nixos/roles/admins.nix`, which contains the
common admin accounts for these machines, and `nixos/roles/nix_keys.nix`, which contains the additional trusted
keys for the nix store.
## sops

We are sharing secrets using [`sops`](https://github.com/getsops/sops) and [`sops-nix`](https://github.com/Mic92/sops-nix).
As of right now we use only `age` keys.
The machine keys are derived from their server ssh keys, which they generate at first boot.

To read out a machine's public key, run the following command on the corresponding machine:

```
$ nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
```

User keys are generated by the users.
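
For example, an `age` key pair could be generated like this (a sketch; where you store the private key is up to you, `~/.config/sops/age/keys.txt` is just the location `sops` looks at by default):

```
$ nix-shell -p age --run 'age-keygen -o ~/.config/sops/age/keys.txt'
```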

New keys and machines need entries in the `.sops.yaml` file in the root directory of this repository.

To make a secret available on a given machine, you need to configure the following:

```
sops.secrets.example-key = {
  # relative path to the file in the repo containing the secret
  # (optional, otherwise sops.defaultSopsFile is used)
  sopsFile = <relative path to the secrets file>;
  # optional path the secret gets symlinked to,
  # practical if some program expects a specific path
  path = "<path>";
  # user that owns the secret file, for example:
  owner = config.users.users.nerf.name;
  # same as owner, just with groups:
  group = config.users.users.nerf.group;
  # permissions in the usual octal notation, for example:
  mode = "0400";
};
```

Afterwards the secret should be available in `/run/secrets/example-key`.

If the accessing process is not root, it must be a member of the group `config.users.groups.keys`.
For systemd services this can be achieved by setting `serviceConfig.SupplementaryGroups = [ config.users.groups.keys.name ];`
in the service configuration.
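
In a module, that could look like this (the service name is a placeholder):

```
systemd.services.<your service>.serviceConfig.SupplementaryGroups = [
  config.users.groups.keys.name
];
```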
## impermanence

These machines are set up with `"/"` as a tmpfs. This is there to keep the machines clean: no clutter in home
directories, no weird ad-hoc solutions botched into `/opt/` or the like. All of it will be
gone at reboot.

But there are some files that we want to survive reboots, for example logs or ssh keys. The solution to this is
to have a persistent storage mounted at `/persist` and to automatically bind mount the paths of persistent things
to the right places. To set this up we are using the impermanence module. In our configuration it is loaded with
some default paths to bind mount (ssh keys, machine-id, some nixos-specific things) that we have on all machines.

If you keep your application data (as recommended) on a separate partition, chances are you don't need
to interact with this, as most configuration files will be in the nix store anyway. If an application wants nix
store files in certain directories, you should use the `environment.etc` family of options (consult the nixos documentation
for this). This is for mutable files that are not core application data: ssh keys, for example, or, for a mailserver, one could
think about the hash files (not the db files) of an alias map (if one doesn't want to manage that with
the nix store), things like that.
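
For reference, using `environment.etc` looks roughly like this (the file name and content are placeholders):

```
environment.etc."<your application>/example.conf".text = ''
  <your configuration here>
'';
```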

This should not (but could) be used for large application databases; it would be more appropriate to mount
a separate filesystem for things like that. For small configuration files that are not in the nix store,
it might be the appropriate solution.

By default the storage is called `persist` and its default path is `/persist`. These can be changed
with the `impermanence.name` and `impermanence.storagePath` options. To add paths to this storage, you do the
following:

```
environment.persistence.${config.impermanence.name} = {
  directories = [
    "<your path to a directory to persist>"
  ];
  files = [
    "<your path to a file to persist>"
  ];
};
```

For this to work, `config` must be bound by the function arguments of your module. So the start of your module looks
something like this:

```
{ lib, pkgs, config, ... }:

<module code>
```