forked from Fachschaft/nixConfig
Compare commits
23 commits
fb002f7c82 ... 10db771ee4
8 changed files with 416 additions and 16 deletions
README.md
@@ -1,38 +1,280 @@
# nixConfig

This repository contains the configuration of all our machines running NixOS.

## Build a machine

There are multiple ways to build and deploy a machine configuration. Which one is the
most appropriate depends on the context and scenario. So first there will be a general
explanation of how this works, and afterwards we will talk about some scenarios.

If you run `nix flake show`, you should get an output similar to this
```
$ nix flake show
git+file:///home/nerf/git/nixConfig?ref=refs%2fheads%2fnyarlathtop&rev=9d0eb749287d1e9e793811759dfa29469ab706dc
├───apps
│   └───x86_64-linux
├───checks
│   └───x86_64-linux
├───devShells
│   └───x86_64-linux
├───formatter
├───legacyPackages
│   └───x86_64-linux omitted (use '--legacy' to show)
├───nixosConfigurations
│   └───nyarlathotep: NixOS configuration
├───nixosModules
├───overlays
└───packages
    └───x86_64-linux
```
We can see there is an output called `nixosConfigurations.nyarlathotep`, which contains the configuration of the machine
called nyarlathotep. `nixosConfigurations` is special in the sense that `nixos-rebuild` will automatically look
for this key and assume how it is structured. The interesting part for us is the derivation `config.system.build.toplevel`.
Its closure contains the whole system, and the resulting derivation contains a script (called `bin/switch-to-configuration`)
that changes the current system to that derivation.

So what we want to achieve is to populate the nix store of the target machine with the closure of the derivation
`.#nixosConfigurations.<name>.config.system.build.toplevel` and run the resulting script on the target machine.

### Local

It has multiple benefits to build the system configuration on the local computer and push it to the target server.
For example, you don't stress the server with the load of evaluating the expression and building the closure, the server
doesn't need to fetch the build dependencies this way, and you get a local check that at least the nix syntax is correct.
And so on...

#### Build
If you have this repository locally in your current directory, you can just run
```
$ nix build .#nixosConfigurations.<name>.config.system.build.toplevel
```
to build the system configuration of the machine `<name>`.

But you don't need to clone this repository; for more, see the `nix flake --help` documentation about flake urls.
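
For example, a possible invocation against the remote flake (a sketch, assuming the same Gitea URL used in the install
section below; the branch and machine names are placeholders) might look like this:
```
$ nix build 'git+https://gitea.mathebau.de/Fachschaft/nixConfig?ref=<branchname>#nixosConfigurations.<name>.config.system.build.toplevel'
```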

#### Copy
After we have built the derivation, we need to get the closure onto the target system. Luckily nix has tools to do that
via ssh. We could just run:
```
$ nix copy -s --to <however you setup your ssh stuff> .#nixosConfigurations.<name>.config.system.build.toplevel
```
This will evaluate the flake again to get the store path of the given derivation. If we want to avoid this,
we might supply the corresponding store path directly.
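
For example (a sketch, assuming the `./result` symlink left behind by the `nix build` step above and an ssh host
reachable as `<target>`):
```
$ nix copy -s --to ssh://<target> ./result
```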

The `-s` is important: it makes the target machine substitute all derivations it can (by default from cache.nixos.org).
So you only upload configuration files and self-built things.

To be able to copy things to a machine, they need to be signed by someone trusted. Additional trusted nix keys are handled
in `./nixos/roles/nix_keys.nix`. So to get yourself trusted, you either need to install one derivation from the machine itself,
or find someone who is already trusted to push your key.

For more information on signing and key creation see `nix store sign --help` and `nix key --help`.
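
A minimal sketch of how that can look (the key name and file locations are placeholders, not values used by this
repository):
```
$ nix key generate-secret --key-name <yourname>-1 > nix-signing.sec
$ nix key convert-secret-to-public < nix-signing.sec   # the public key that goes into nix_keys.nix
$ nix store sign --key-file nix-signing.sec --recursive ./result
```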

#### Activate
Log into the remote machine and execute (with root privileges)
```
# /nix/store/<storepath>/bin/switch-to-configuration boot
```
That will set up a configuration switch at reboot. You can also switch the configuration live. For more
details consult the `--help` output of that script. The store path (or at least the hash of the derivation)
is exactly the same as it was on your machine.
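
If you built with `nix build` locally as described above, you can read that store path from the `./result` symlink:
```
$ readlink ./result
```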

If you have `nixos-rebuild` available on your system, it can automate these steps with the `--flake` and
`--target-host` parameters. But there are some pitfalls, so look at the `nixos-rebuild` documentation beforehand.
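
One possible invocation (a sketch; host, user and machine name are placeholders, and the flags are the ones documented
by `nixos-rebuild`):
```
$ nixos-rebuild boot --flake .#<name> --target-host <user>@<host> --use-remote-sudo
```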

### On the machine

Clone this repository to `/etc/nixos/` and run `nixos-rebuild boot` or `nixos-rebuild switch`; that will select
the appropriate machine based on hostname.

If the hostname is not correct, or you don't want to clone this flake, you can also use the `--flake` parameter.

In any case, to switch the system configuration you will need root privileges on the target machine.

## Installing a new machine

You have written a configuration and now want to deploy it as a new machine. You need to get the build configuration onto the
`nixos-installer` machine (regarding this machine see issue [#10]). You can either use any of the
approaches above, or just continue; then the machine will build the configuration implicitly.

### Disk layout

You will need to assemble the disk layout manually. We assume you do it below `/mnt`, as the nixos-install tools
assume this as the default location (they have an option to change that, consult their `--help` pages).

This repository loads some default configuration that expects certain things. Your hardware configuration of that machine should
reflect those (a sketch of a matching filesystem layout follows the list below).

- `"/"` is a tmpfs
- `"/persist"` is the place where we keep data that cannot be regenerated at any boot, so this should be a permanent disk
- `"/nix"` is the place the nix store resides; it is needed to boot the machine and should also be persistent
- `"/boot"` is the place for bootloader configuration and kernel, also persistent
- any additional data paths for your machine specific needs. Choose filesystems accordingly.
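
A minimal sketch of what the corresponding `fileSystems` part of a hardware configuration might look like (device
paths, uuids and filesystem types are placeholder assumptions, adjust them to your hardware):
```
{ ... }: {
  # "/" as a tmpfs, gone at every reboot
  fileSystems."/" = {
    device = "none";
    fsType = "tmpfs";
    options = [ "defaults" "mode=755" ];
  };

  # persistent data, mounted early so impermanence can bind mount from it
  fileSystems."/persist" = {
    device = "/dev/disk/by-uuid/<uuid>";
    fsType = "btrfs";
    options = [ "subvol=persist" ];
    neededForBoot = true;
  };

  # the nix store, also persistent
  fileSystems."/nix" = {
    device = "/dev/disk/by-uuid/<uuid>";
    fsType = "btrfs";
    options = [ "subvol=nix" ];
  };

  # bootloader configuration and kernels
  fileSystems."/boot" = {
    device = "/dev/disk/by-uuid/<boot-uuid>";
    fsType = "ext4";
  };
}
```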

My recommendation is to put `"/persist"` and `"/nix"` on a joint btrfs as subvolumes and `"/boot"` on separate disks (because grub
will give you a hard time if you do it as a subvolume or bind mount, even though that should be possible; it is an upstream problem).
For how to configure additional persistent data
to be stored in `"/persist"`, look at the impermanence section as soon as it is merged; before that, look at issue [#9].
I do not recommend this for actual high-access application data like databases, mailboxes and the like. You should
think about this as data that, if lost, can be regenerated with little trouble, and that is read/written only a few times
during setup (like the server ssh keys, for example). The configuration also sets up some paths in `"/persist"` automatically;
again, look at the impermanence section.

#### File system uuids

You might end up with a bit of a chicken-and-egg problem regarding filesystem uuids, since you need to set them in your system configuration.
There are two ways around that: either generate the filesystems, read out the uuids, and push them into the repository holding
the configuration you want to build; or generate the uuids first, have them in your configuration, and set them upon filesystem creation. Most
`mkfs` utilities have an option for that.
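
For example (a sketch; the flags below are the ones `mkfs.ext4` and `mkfs.btrfs` use for presetting a uuid, check the
man page of your filesystem's `mkfs` tool):
```
$ uuidgen                                      # generate the uuid up front
$ mkfs.ext4 -U <uuid> /dev/<partition>         # ext4: create the filesystem with that uuid
$ mkfs.btrfs --uuid <uuid> /dev/<partition>    # btrfs: same idea
```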

### Installing

Just run
```
nixos-install --flake 'git+https://gitea.mathebau.de/Fachschaft/nixConfig?ref=<branchname>#<name>'
```
where `<branchname>` is the branch you install from and `<name>` is the name of the configuration you build.
If the built system is already in the nix store, this will start the installation; otherwise it will first attempt to build
it. That should be the whole installation process: just reboot, and the machine should be fully set up, with no additional
user or service setup needed after the reboot.

## How to write a new machine configuration

At best, take a first look at already existing configurations. But here are a few guidelines.
Make a new folder in `/nixos/machines`. The name of the folder should match the hostname of your
machine. The only technically required file in there is `configuration.nix`, so create it.

A good skeleton is probably:
```
flake-inputs:
{ config, pkgs, lib, ... }: {

  imports = [
    ./hardware-configuration.nix
    ../../roles
    ./network.nix

    <your additional imports here>
  ];

  <your system config here>

  networking.hostName = "<your hostname>"; # this will hopefully disappear if I have time to refactor this.
  system.stateVersion = "<state version at time of install>";
}
```
The import of `../../roles` loads all the nice default setup that all these machines have in common. There the
impermanence configuration is loaded, as well as ssh, sops, shared user configuration and much more.
The other two imports are suggestions for how you could organize your configuration, but they are not enforced by anything.
In your hardware configuration you should basically only write your filesystem layout and your hostPlatform; the bootloader
stuff is already taken care of by `../../roles`.

As of the moment of writing, `network.nix` should contain the ip, nameserver and default gateway setup. Parts of
this are constant across all systems and will undergo a refactor soon.
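
A hypothetical sketch of such a `network.nix` (interface name, addresses and gateway are placeholders, not values from
this repository):
```
{ ... }: {
  networking = {
    interfaces.ens3.ipv4.addresses = [
      { address = "192.0.2.10"; prefixLength = 24; }
    ];
    defaultGateway = "192.0.2.1";
    nameservers = [ "192.0.2.53" ];
  };
}
```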

I would recommend splitting your configuration into small files that you import. If something is machine specific (like
being tied to your ip address or hostname), put it into the machine directory. If it is not, put it into `/nixos/roles/`;
if it is not machine specific but has options to set, put it in `/nixos/modules`.

## How this flake is organized

This flake uses `flake-parts`, see [flake.parts](https://flake.parts) for more details. It makes handling
`system` and some other module-related things more convenient.
For the general layout of nixos system configurations and modules, please see the corresponding documentation.

The toplevel `flake.nix` contains the flake inputs as usual and only calls a file `flake-module.nix`.
This toplevel `flake-module.nix` imports further, more specialized `flake-module.nix` files from sub-directories.
Right now the only one is `nixos/flake-module.nix`. But if we start to ship our own software (or software versions
with specific build flags), this might become more.

### nixos
The `nixos` folder contains all machine configurations. It separates into two folders, `nixos/machines` and `nixos/roles`.
The corresponding `flake-module.nix` file automatically searches for `machines/<name>/configuration.nix`, evaluates
those as nixos configurations, and populates the flake.

#### machines
`nixos/machines` contains all machine specific configuration (in a sub-folder per machine), like hardware configuration, specific
network configuration, and service configuration that is too closely interwoven with the rest of that machine (for example,
mailserver configuration depends heavily on network settings). It also
contains the root configuration for that machine, called `configuration.nix`. This file usually only includes other modules.
These `configuration.nix` files are almost ordinary nixos configurations. The only difference is that they take the flake
inputs as an extra argument. This allows them to load modules from those flakes. For example, nyarlathotep loads the
simple-nixos-mailserver module that way.

#### roles
`nixos/roles` contains configuration that is potentially shared by some machines. It is expected that `nixos/roles/default.nix`
is imported (as `../../roles`) in every machine. Notable are the files `nixos/roles/admins.nix`, which contains the
common admin accounts for these machines, and `nixos/roles/nix_keys.nix`, which contains the additional trusted
keys for the nix store.

## sops

We are sharing secrets using [`sops`](https://github.com/getsops/sops) and [`sops-nix`](https://github.com/Mic92/sops-nix).
As of right now we only use `age` keys.
The machine keys are derived from their server ssh keys, which they generate at first boot.
To read out a machine's public key, run the following command on the corresponding machine.
```
$ nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
```
User keys are generated by the users.
New keys and machines need entries in the `.sops.yaml` file within the root directory of this repository.
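
For example, a user key can be generated with `age-keygen` (one possible invocation; the output file location is your
choice, not something this repository mandates):
```
$ nix-shell -p age --run 'age-keygen -o keys.txt'
```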

To make a secret available on a given machine, you need to configure the following:

```
sops.secrets.example-key = {
  sopsFile = "relative path to the file in the repo containing the secrets (optional, else sops.defaultSopsFile is used)";
  path = "optional path where the secret gets symlinked to, practical if some program expects a specific path";
  owner = user that owns the secret file: config.users.users.nerf.name (for example);
  group = same as user, just with groups: config.users.users.nerf.group;
  mode = "permissions in the usual octal notation: 0400 (for example)";
};
```

Afterwards the secret should be available in `/run/secrets/example-key`.
If the accessing process is not root, it must be a member of the group `config.users.groups.keys`;
for systemd services this can be achieved by setting `serviceConfig.SupplementaryGroups = [ config.users.groups.keys.name ];`
in the service configuration.
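
A hypothetical sketch of such a service (the service name and command are placeholders, and `config` and `pkgs` are
assumed to be bound by your module's arguments; the secret path refers to the `example-key` declared above):
```
systemd.services.example-service = {
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    DynamicUser = true;
    # let the non-root service read the sops secret
    SupplementaryGroups = [ config.users.groups.keys.name ];
    ExecStart = "${pkgs.coreutils}/bin/cat ${config.sops.secrets.example-key.path}";
  };
};
```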

## impermanence

These machines are set up with `"/"` as a tmpfs. This is there to keep the machines clean: no clutter in home
directories, no weird ad-hoc solutions of botching something into `/opt/` or the like. Everything will be
gone at reboot.

But there are some files that we want to survive reboots, for example logs or ssh keys. The solution to this is
to have a persistent storage mounted at `/persist` and automatically bind mount the paths of persistent things
to the right places. To set this up we are using the impermanence module. In our configuration it is loaded with
some default files to bind mount (ssh keys, machine-id, some nixos specific things) that we have on all machines.

If you keep your application data (as recommended) on a separate partition, chances are you don't need
to interact with this, as most configuration files will be in the nix store anyway. If the application wants these nix
store files in certain directories, you should use the `environment.etc` family of options (consult the nixos documentation
for this). This is for mutable files that are not core application data (like ssh keys; for a mailserver one could
think about the hash files (not the db files) of an alias map, if one doesn't want to manage that with
the nix store, things like that).

This should not be (but could be) used for large application databases. It would be more appropriate to mount
a separate filesystem for things like that. For small configuration files that are not in the nix store,
this might be the appropriate solution.

By default the storage is called `persist` and the default path for it is `/persist`. These can be changed
with the `impermanence.name` and `impermanence.storagePath` options. To add paths to this storage, you do the
following.

```
environment.persistence.${config.impermanence.name} = {
  directories = [
    "<your path to a directory to persist>"
  ];
  files = [
    "<your path to a file to persist>"
  ];
};
```
For this to work, `config` must be bound by the function arguments of your module, so the start of your module looks
something like this:
```
{ lib, pkgs, config, ... }:

<module code>
```

flake.lock
@@ -33,6 +33,21 @@
        "type": "indirect"
      }
    },
    "impermanence": {
      "locked": {
        "lastModified": 1694622745,
        "narHash": "sha256-z397+eDhKx9c2qNafL1xv75lC0Q4nOaFlhaU1TINqb8=",
        "owner": "nix-community",
        "repo": "impermanence",
        "rev": "e9643d08d0d193a2e074a19d4d90c67a874d932e",
        "type": "github"
      },
      "original": {
        "owner": "nix-community",
        "repo": "impermanence",
        "type": "github"
      }
    },
    "nixos-mailserver": {
      "inputs": {
        "blobs": "blobs",
@@ -123,6 +138,7 @@
    "root": {
      "inputs": {
        "flake-parts": "flake-parts",
        "impermanence": "impermanence",
        "nixos-mailserver": "nixos-mailserver",
        "nixpkgs": "nixpkgs",
        "sops-nix": "sops-nix"

flake.nix
@@ -14,6 +14,9 @@
      url = "github:Mic92/sops-nix";
      inputs.nixpkgs.follows = "nixpkgs";
    };
    impermanence = {
      url = "github:nix-community/impermanence";
    };
  };

  outputs = inputs@{ flake-parts, ... }:

nixos/flake-module.nix
@@ -12,6 +12,7 @@
        imports = [
          (import (./. + "/machines/${name}/configuration.nix") inputs)
          inputs.sops-nix.nixosModules.sops
          inputs.impermanence.nixosModules.impermanence
        ];
      };
    in lib.genAttrs machines makeSystem);

nixos/modules/impermanence.nix (new file)
@@ -0,0 +1,47 @@
{lib, config, ...} :

let
  inherit (lib)
    mkEnableOption
    mkIf
    mkOption
    types
    ;
  cfg = config.impermanence;
in

{
  imports = [ ];

  options.impermanence = {
    enable = mkEnableOption "impermanence";
    storagePath = mkOption {
      type = types.path;
      default = "/persist";
      description = "The path where persistent data is stored";
    };
    name = mkOption {
      type = types.str;
      default = "persist";
      description = "the name of the persistent data store";
    };
  };

  config = mkIf cfg.enable {
    environment.persistence.${cfg.name} = {
      persistentStoragePath = cfg.storagePath;
      directories = [
        "/var/log"
        "/var/lib/nixos"
      ];
      files = [
        "/etc/ssh/ssh_host_ed25519_key"
        "/etc/ssh/ssh_host_ed25519_key.pub"
        "/etc/ssh/ssh_host_rsa_key"
        "/etc/ssh/ssh_host_rsa_key.pub"
      ];
    };
    environment.etc.machine-id.source = "${cfg.storagePath}/machine-id";
  };

}

nixos/roles/admins.nix (new file)
@@ -0,0 +1,30 @@
{lib, ...} :
with lib;

let
  admins = {
    nerf = {
      hashedPassword =
        "$y$j9T$SJcjUIcs3JYuM5oyxfEQa/$tUBQT07FK4cb9xm.A6ZKVnFIPNOYMOKC6Dt6hadCuJ7";
      keys = [
        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEdA4LpEGUUmN8esFyrNZXFb2GiBID9/S6zzhcnofQuP nerf@nerflap2"
      ];
    };
  };

  mkAdmin = name :
    {hashedPassword, keys}: {
      "${name}" = {
        isNormalUser = true;
        createHome = true;
        extraGroups = [ "wheel" ];
        group = "users";
        home = "/home/${name}";
        openssh.authorizedKeys = { inherit keys; };
        inherit hashedPassword;
      };
    };

in {
  users.users = mkMerge (mapAttrsToList mkAdmin admins);
}

nixos/roles/default.nix
@@ -1,4 +1,59 @@
{pkgs, config, lib, modulesPath, ...} : {

  imports = [
    ./admins.nix
    ./nix_keys.nix
    (modulesPath + "/virtualisation/xen-domU.nix")
    ../modules/impermanence.nix
  ];
  nix = {
    extraOptions = ''
      experimental-features = nix-command flakes
      builders-use-substitutes = true
    '';
  };

  networking = {
    firewall = { # these should be default, but better make sure!
      enable = true;
      allowPing = true;
    };
    nftables.enable = true;
    useDHCP = false; # We don't speak DHCP and even if we would, we should enable it per interface
    # hosts = # TODO write something to autogenerate ip addresses!
  };

  users = {
    mutableUsers = false;
    users.root.hashedPassword = "!";
  };

  impermanence.enable = true;

  sops.age.sshKeyPaths = [ "/etc/ssh/ssh_host_ed25519_key" ];

  environment = {
    systemPackages = builtins.attrValues {
      inherit (pkgs)
        htop lsof tmux btop;
    };
  };

  services = {
    journald.extraConfig = "SystemMaxUse=5G";

    nginx = {
      recommendedOptimisation = true;
      recommendedGzipSettings = true;
      recommendedTlsSettings = true;
    };

    openssh = {
      enable = true;
      settings = {
        PermitRootLogin = "no";
        PasswordAuthentication = false;
      };
    };
  };
}

nixos/roles/nix_keys.nix (new file)
@@ -0,0 +1,6 @@
{
  imports = [ ];
  nix.settings.trusted-public-keys = [
    "nerflap2-1:pDZCg0oo9PxNQxwVSQSvycw7WXTl53PGvVeZWvxuqJc="
  ];
}