some documentation I wrote without proofreading at 2 in the morning

This commit is contained in:
Dennis Frieberg 2023-09-24 01:50:41 +02:00 committed by dennis
parent 33519a678a
commit c0d03be602

README.md

@@ -1,19 +1,116 @@
# nixConfig
## Build a machine
There are multiple ways to build and deploy a machine configuration. Which one is the most appropriate
depends on the context and scenario. So first there will be a general explanation of how this works,
and afterwards we will talk about some scenarios.
If you run `nix flake show` you should get an output similar to this:
```
$ nix flake show
git+file:///home/nerf/git/nixConfig?ref=refs%2fheads%2fnyarlathtop&rev=9d0eb749287d1e9e793811759dfa29469ab706dc
├───apps
│ └───x86_64-linux
├───checks
│ └───x86_64-linux
├───devShells
│ └───x86_64-linux
├───formatter
├───legacyPackages
│ └───x86_64-linux omitted (use '--legacy' to show)
├───nixosConfigurations
│ └───nyarlathotep: NixOS configuration
├───nixosModules
├───overlays
└───packages
└───x86_64-linux
```
We can see there is an output called `nixosConfigurations.nyarlathotep`, which contains the config of the machine
called nyarlathotep. `nixosConfigurations` is special in the sense that `nixos-rebuild` will automatically look
for this key and assume how it is structured. The interesting part for us is the derivation `config.system.build.toplevel`.
Its closure contains the whole system, and the resulting derivation contains a script (`/bin/switch-to-configuration`)
that switches the current system to that derivation.
So what we want to achieve is to populate the nix store of the target machine with the closure of the derivation
`.#nixosConfigurations.<name>.config.system.build.toplevel` and run the resulting script on the target machine.
### Local
Building the system config on the local computer and pushing it to the target server has multiple benefits.
For example, one doesn't stress the server with the load that comes with evaluating the expression. Also, the server
doesn't need to fetch the build dependencies this way. And one gets a local check that at least the nix syntax is correct.
And so on...
#### Build
If you have this repository checked out in your current directory you can just run:
```
$ nix build .#nixosConfigurations.<name>.config.system.build.toplevel
```
But you don't need to clone this repository; for more on flake URLs see the `nix flake --help` documentation.
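For example, assuming this repository is reachable under some git URL (the URL below is only a placeholder),
you could build straight from it without a local checkout:
```
$ nix build 'git+https://<your-git-host>/<path>/nixConfig#nixosConfigurations.<name>.config.system.build.toplevel'
```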
#### Copy
After we have built the derivation we need to get the closure onto the target system. Luckily nix has tools to do that
via ssh. We could just run:
```
$ nix copy -s --to <however you setup your ssh stuff> .#nixosConfigurations.<name>.config.system.build.toplevel
```
We do not need the flake anymore; instead of specifying the derivation name we could also give the store path
directly.
The `-s` is important: it makes the target machine substitute all derivations it can (by default from cache.nixos.org),
so you only upload config files and self-built things.
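For example, assuming the build above left a `./result` symlink in the current directory (the ssh destination is
just a placeholder here), you can pass that path directly:
```
$ nix copy -s --to ssh://<user>@<target-host> ./result
```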
To be able to copy things to a machine they need to be signed by someone trusted. Additional trusted nix keys are handled
in `./nixos/roles/nix_keys.nix`. So to get yourself trusted you either need to install one derivation from the machine itself,
or find someone who is already trusted.
For more information on signing and key creation see `nix store sign --help` and `nix key --help`.
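As a rough sketch (the key name and file locations are placeholders; see the help output above for the
authoritative flags), creating a key and signing a freshly built closure could look like this:
```
$ nix key generate-secret --key-name <you>-1 > ~/.config/nix/secret.key
$ nix store sign --recursive --key-file ~/.config/nix/secret.key ./result
```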
#### Activate
Log into the remote machine and execute
```
# /nix/store/<storepath>/bin/switch-to-configuration boot
```
That will set up a configuration switch at reboot. You can also switch the configuration live. For more
details consider the `--help` output of that script.
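For example, to switch the running system immediately instead of at the next reboot:
```
# /nix/store/<storepath>/bin/switch-to-configuration switch
```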
If you have `nixos-rebuild` available on your system, it can automate these things with the `--flake` and
`--target-host` parameters. But there are some pitfalls, so look at the `nixos-rebuild` documentation beforehand.
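A hedged sketch of what such an invocation could look like (the host and flags are placeholders; in particular,
check how you want to handle root on the target, e.g. via `--use-remote-sudo`):
```
$ nixos-rebuild boot --flake .#<name> --target-host <user>@<target-host> --use-remote-sudo
```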
### On the machine
Clone this repo to `/etc/nixos/` and run `nixos-rebuild boot` or `nixos-rebuild switch`; that will select
the appropriate machine based on hostname.
If the hostname is not correct, or you don't want to clone this flake, you can also use the `--flake` parameter.
In any case, to switch the system configuration you will need to have root privileges on the target machine.
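For example (the flake reference is a placeholder; it can be a local checkout like `/etc/nixos` or a git URL):
```
# nixos-rebuild switch --flake '<flake-url>#<name>'
```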
## How this flake is organized
This flake uses `flake-parts`, see [flake.parts](https://flake.parts) for more details. It makes handling
`system` and some other module-related things more convenient.
For the general layout of nixos system config and modules, please see the corresponding documentation.
The toplevel `flake.nix` contains the flake inputs as usual and only calls a file `flake-module.nix`.
This toplevel `flake-module.nix` imports further, more specialised `flake-module.nix` files from subdirectories.
Right now the only one is `nixos/flake-module.nix`.
The `nixos` folder contains all machine configurations. It is split into two folders, `nixos/machines` and `nixos/roles`.
`nixos/machines` contains all machine-specific configuration (in a subfolder per machine), like hardware configuration, specific
network configuration, and service configuration that is too closely interwoven with the rest of that machine. It also
contains the root config for that machine, called `configuration.nix`. This file usually only includes other modules.
`nixos/roles` contains config that is potentially shared by some machines. It is expected that `nixos/roles/default.nix`
is imported (as `../../roles`) in every machine. Notable are the files `nixos/roles/admins.nix`, which contains
common admin accounts for these machines, and `nixos/roles/nix_keys.nix`, which contains the additional trusted
keys for the nix store.
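As an illustrative sketch (this is not a verbatim file from this repo), a machine's `configuration.nix` roughly
follows this pattern:
```
# nixos/machines/<name>/configuration.nix -- illustrative sketch
{ ... }:
{
  imports = [
    ./hardware-configuration.nix  # machine specific hardware config
    ../../roles                   # nixos/roles/default.nix: admins, nix keys, ...
  ];
}
```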
## sops
We are sharing secrets using [`sops`](https://github.com/getsops/sops) and [`sops-nix`](https://github.com/Mic92/sops-nix).
As of right now we use only `age` keys.
@@ -35,4 +132,3 @@ afterwards the secret should be available in `/run/secrets/example-key`.
If the accessing process is not root it must be a member of the group `config.users.groups.keys`.
For systemd services this can be achieved by setting `serviceConfig.SupplementaryGroups = [ config.users.groups.keys.name ];`
in the service config.
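A minimal sketch of how this looks in a machine module (the service name `example-service` is made up, and
`sops.secrets.example-key` assumes a secret called `example-key` is defined in the secrets file):
```
{ config, ... }:
{
  # sops-nix places the decrypted secret at /run/secrets/example-key
  sops.secrets.example-key = { };

  # let the non-root service read secrets by joining the keys group
  systemd.services.example-service.serviceConfig.SupplementaryGroups =
    [ config.users.groups.keys.name ];
}
```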