forked from Fachschaft/nixConfig

Compare commits: 6 commits, 10db771ee4...fb002f7c82

| Author | SHA1 | Date |
|---|---|---|
| | fb002f7c82 | |
| | 71c5d34cdb | |
| | 3d9fb42fd4 | |
| | 94966307f7 | |
| | 1639b071d8 | |
| | 88f6014fc4 | |

4 changed files with 17 additions and 279 deletions
README.md (231 changed lines)

@@ -1,240 +1,40 @@
# nixConfig

This repository contains the configuration of all our machines running NixOS.
## Build a machine

There are multiple ways to build and deploy a machine configuration. Which one is
most appropriate depends on the context and scenario. So first there is a general
explanation of how this works, and afterwards we discuss some scenarios.
If you run `nix flake show`, you should get an output similar to this:

```
$ nix flake show
git+file:///home/nerf/git/nixConfig?ref=refs%2fheads%2fnyarlathtop&rev=9d0eb749287d1e9e793811759dfa29469ab706dc
├───apps
│   └───x86_64-linux
├───checks
│   └───x86_64-linux
├───devShells
│   └───x86_64-linux
├───formatter
├───legacyPackages
│   └───x86_64-linux omitted (use '--legacy' to show)
├───nixosConfigurations
│   └───nyarlathotep: NixOS configuration
├───nixosModules
├───overlays
└───packages
    └───x86_64-linux
```
We can see there is an output called `nixosConfigurations.nyarlathotep`, which contains the configuration of the machine
called nyarlathotep. `nixosConfigurations` is special in the sense that `nixos-rebuild` will automatically look
for this key and make assumptions about its structure. The interesting part for us is the derivation `config.system.build.toplevel`.
Its closure contains the whole system, and the resulting derivation contains a script (called `bin/switch-to-configuration`)
that switches the current system to that derivation.

So what we want to achieve is to populate the nix store of the target machine with the closure of the derivation
`.#nixosConfigurations.<name>.config.system.build.toplevel` and run the resulting script on the target machine.
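To get a feeling for how large this toplevel closure is, you can inspect it locally. This is a sketch using the standard `nix path-info` flags, with `<name>` as a placeholder:

```
$ nix path-info --recursive --closure-size --human-readable \
    .#nixosConfigurations.<name>.config.system.build.toplevel
```

This lists every store path in the closure together with its closure size, which is roughly what will end up on the target machine.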
||||
### Local

Building the system configuration on the local computer and pushing it to the target server has multiple benefits.
For example, the server is not stressed with the load of evaluating the expression and building the closure, and it
does not need to fetch the build dependencies. You also get a local check that at least the nix syntax is correct.
And so on...
#### Build

If you have this repository in your current directory, run

```
$ nix build .#nixosConfigurations.<name>.config.system.build.toplevel
```

to build the system configuration of the machine `<name>`.

But you don't need to clone this repository; for more see the `nix flake --help` documentation about flake URLs.
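As a sketch of what a build without a local clone can look like, using the repository URL from the install section below (`<branchname>` and `<name>` remain placeholders):

```
$ nix build 'git+https://gitea.mathebau.de/Fachschaft/nixConfig?ref=<branchname>#nixosConfigurations.<name>.config.system.build.toplevel'
```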
||||
#### Copy

After we have built the derivation, we need to get the closure onto the target system. Luckily nix has tools to do that
via ssh. We could just run:

```
$ nix copy -s --to <however you setup your ssh stuff> .#nixosConfigurations.<name>.config.system.build.toplevel
```

This will evaluate the flake again to get the store path of the given derivation. If we want to avoid this,
we can supply the corresponding store path directly.

The `-s` is important: it makes the target machine substitute all derivations it can (by default from cache.nixos.org).
So you only upload configuration files and self-built things.

To be able to copy things to a machine, they need to be signed by someone trusted. Additional trusted nix keys are handled
in `./nixos/roles/nix_keys.nix`. So to get yourself trusted you either need to install one derivation from the machine itself,
or find someone who is already trusted to push your key.

For more information on signing and key creation see `nix store sign --help` and `nix key --help`.
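As a sketch of what key creation and signing can look like (the key name and file are illustrative; check the `--help` output cited above for details):

```
$ nix key generate-secret --key-name my-key-1 > my-key.sec
$ nix store sign --key-file my-key.sec --recursive \
    .#nixosConfigurations.<name>.config.system.build.toplevel
```

The public key corresponding to `my-key.sec` is what would then need to be added to `./nixos/roles/nix_keys.nix`.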
||||
#### Activate

Log into the remote machine and execute (with root privileges)

```
# /nix/store/<storepath>/bin/switch-to-configuration boot
```

That will set up a configuration switch at reboot. You can also switch the configuration live; for more
details consult the `--help` output of that script. The store path (or at least the hash of the derivation)
is exactly the same as it was on your machine.

If you have `nixos-rebuild` available on your system, it can automate these steps with the `--flake` and
`--target-host` parameters. But there are some pitfalls, so look at the `nixos-rebuild` documentation beforehand.
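A minimal sketch of such an invocation, assuming ssh access to the target as root (flag names as documented in `nixos-rebuild --help`; verify against your version):

```
$ nixos-rebuild boot --flake .#<name> --target-host root@<target> --use-substitutes
```

This combines the build, copy, and activate steps above into one command.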
||||
### On the machine

Clone this repository to `/etc/nixos/` and run `nixos-rebuild boot` or `nixos-rebuild switch`; that will select
the appropriate machine based on the hostname.

If the hostname is not correct, or you don't want to clone this flake, you can also use the `--flake` parameter.

In any case, to switch the system configuration you will need root privileges on the target machine.
||||
## Installing a new machine

You have written a configuration and now want to deploy it as a new machine. You need to get the built configuration onto the
`nixos-installer` machine (regarding this machine see issue [#10]). You can either use any of the
approaches above, or just continue, in which case the machine will build the configuration implicitly.
||||
### Disk layout

You will need to assemble the disk layout manually. We assume you do it below `/mnt`, as the nixos-install tools
use this as the default location (they have an option to change that; consult their `--help` pages).

This repository loads some default configuration that expects certain things. The hardware configuration of your machine should
reflect those:

- `"/"` is a tmpfs
- `"/persist"` is the place where we keep data that cannot be regenerated at every boot, so this should be a permanent disk
- `"/nix"` is the place the nix store resides; it is needed to boot the machine and should also be persistent
- `"/boot"` is the place for bootloader configuration and kernel, also persistent
- any additional data paths for your machine specific needs. Choose filesystems accordingly.
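A hardware configuration matching this layout could contain something like the following sketch (device uuids, subvolume names, and mount options are placeholders, not taken from this repository):

```
{
  fileSystems."/" = {
    device = "none";
    fsType = "tmpfs";
    options = [ "defaults" "mode=755" ];
  };
  fileSystems."/persist" = {
    device = "/dev/disk/by-uuid/<uuid>";
    fsType = "btrfs";
    options = [ "subvol=persist" ];
    neededForBoot = true;  # needed early, e.g. for sops ssh keys
  };
  fileSystems."/nix" = {
    device = "/dev/disk/by-uuid/<uuid>";
    fsType = "btrfs";
    options = [ "subvol=nix" ];
  };
  fileSystems."/boot" = {
    device = "/dev/disk/by-uuid/<boot-uuid>";
    fsType = "vfat";
  };
}
```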
My recommendation is to put `"/persist"` and `"/nix"` on a joint btrfs as subvolumes, and `"/boot"` on a separate disk (because grub
will give you a hard time if you do it as a subvolume or bind mount, even though that should be possible; it is an upstream problem).
For how to configure additional persistent data
to be stored in `"/persist"`, look at the impermanence section as soon as it is merged; before that, look at issue [#9].
I do not recommend this for actual high-access application data like databases, mailboxes and the like. You should
think of this as data that, if lost, can be regenerated with only little trouble, and that is read/written only a few times
during setup (like the server ssh keys, for example). The configuration also sets up some paths in `"/persist"` automatically;
again, look at the impermanence sections.
#### File system uuids

You might end up with a bit of a chicken-and-egg problem regarding filesystem uuids, because you need to set them in your system configuration.
There are two ways around that: either generate the filesystems, read out the uuids, and push them into the repository holding
the configuration you want to build, or generate the uuids first, have them in your configuration, and set them upon filesystem creation. Most
`mkfs` utilities have an option for that.
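A sketch of the second approach (`-U` is how, for example, `mkfs.ext4` and `mkfs.btrfs` accept a pre-generated uuid; check the man page of your `mkfs` tool):

```
$ uuidgen                          # generate a uuid, record it in the configuration
$ mkfs.btrfs -U <uuid-from-above> /dev/<disk>
```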
### Installing

Just run

```
nixos-install --flake 'git+https://gitea.mathebau.de/Fachschaft/nixConfig?ref=<branchname>#<name>'
```

where `<branchname>` is the branch you install from and `<name>` is the name of the configuration you build.
If the built system is already in the nix store, this will start the installation; otherwise it will first attempt to build
it. That should be the whole installation process; just reboot. The machine should be fully set up, with no additional user
or service setup needed after the reboot.
## How to write a new machine configuration

At best, take a first look at already existing configurations. But here are a few guidelines.
Make a new folder in `/nixos/machines`. The name of the folder should match the hostname of your
machine. The only technically required file in there is `configuration.nix`, so create it.

A good skeleton is probably:
```
flake-inputs:
{config, pkgs, lib, ... }: {

  imports = [
    ./hardware-configuration.nix
    ../../roles
    ./network.nix

    <your additional imports here>
  ];

  <your system config here>
  networking.hostName = "<your hostname>"; # this will hopefully disappear if I have time to refactor this.
  system.stateVersion = "<state version at time of install>";
}
```
The import of `../../roles` loads all the nice default setup that all these machines have in common. There the
impermanence configuration is loaded, as well as ssh, sops, shared user configuration and much more.
The other two imports are suggestions for how you could organize your configuration, but they are not enforced by anything.
In your hardware configuration you should basically only describe your filesystem layout and your hostPlatform; the bootloader
stuff is already taken care of by `../../roles`.

As of the moment of writing, `network.nix` should contain ip, nameserver and default gateway setup. Part of
this is constant across all systems and will undergo a refactor soon.

I recommend splitting your configuration into small files that you import. If something is machine specific (like
tied to your ip address or hostname), put it into the machine directory. If it is not, put it into `/nixos/roles/`; if it
is not machine specific but has options to set, put it into `/nixos/modules`.
## How this flake is organized

This flake uses `flake-parts`; see [flake.parts](https://flake.parts) for more details. It makes handling
`system` and some other module-related things more convenient.
For the general layout of nixos system configurations and modules, please see the corresponding documentation.

The toplevel `flake.nix` contains the flake inputs as usual and only calls a file `flake-module.nix`.
This toplevel `flake-module.nix` imports further, more specialized `flake-module.nix` files from sub-directories.
Right now the only one is `nixos/flake-module.nix`, but if we start to ship our own software (or software versions
with specific build flags), there might be more.
### nixos

The `nixos` folder contains all machine configurations. It is separated into two folders, `nixos/machines` and `nixos/roles`.
The corresponding `flake-module.nix` file automatically searches for `machines/<name>/configuration.nix`, evaluates
those as nixos configurations, and populates the flake.
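The resulting layout looks roughly like this (only files mentioned in this README are shown; a real machine directory may contain more):

```
nixos/
├── flake-module.nix
├── machines/
│   └── nyarlathotep/
│       ├── configuration.nix
│       ├── hardware-configuration.nix
│       └── network.nix
└── roles/
    ├── default.nix
    ├── admins.nix
    └── nix_keys.nix
```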
#### machines

`nixos/machines` contains all machine specific configuration (in a sub-folder per machine): hardware configuration, specific
network configuration, and service configurations that are too closely interwoven with the rest of that machine (for example
the mailserver configuration depends heavily on network settings). It also
contains the root configuration for that machine, called `configuration.nix`. This file usually only includes other modules.
These `configuration.nix` files are almost usual nix configurations. The only difference is that they take the flake inputs
as an extra argument. This allows them to load modules from these flakes. For example, nyarlathotep loads the simple-nixos-mailserver
module that way.
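A sketch of what that extra argument looks like in use (the exact attribute path of the mailserver module is an assumption here, not taken from this repository):

```
flake-inputs:
{config, pkgs, lib, ... }: {
  imports = [
    # a module shipped by another flake, loaded via the extra argument
    flake-inputs.simple-nixos-mailserver.nixosModule
  ];
}
```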
#### roles

`nixos/roles` contains configuration that is potentially shared by some machines. It is expected that `nixos/roles/default.nix`
is imported (as `../../roles`) in every machine. Notable are the files `nixos/roles/admins.nix`, which contains
common admin accounts for these machines, and `nixos/roles/nix_keys.nix`, which contains the additional trusted
keys for the nix store.
## sops

We are sharing secrets using [`sops`](https://github.com/getsops/sops) and [`sops-nix`](https://github.com/Mic92/sops-nix).
As of right now we only use `age` keys.
The machine keys are derived from their server ssh keys, which they generate at first boot.
To read out a machine's public key, run the following command on the corresponding machine:

```
$ nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
```

User keys are generated by the users.
New keys and machines need entries in the `.sops.yaml` file within the root directory of this repository.

To make a secret available on a given machine you need to configure the following keys:

```
sops.secrets.example-key = {
  sopsFile = "relative path to file in the repo containing the secrets (optional, else sops.defaultSopsFile is used)";
  path = "optional path where the secret gets symlinked to, practical if some program expects a specific path";
  owner = user that owns the secret file: config.users.users.nerf.name (for example);
  group = same as owner, just with groups: config.users.users.nerf.group;
  mode = "permission in the usual octal notation: 0400 (for example)";
};
```

Afterwards the secret should be available in `/run/secrets/example-key`.
If the accessing process is not root, it must be a member of the group `config.users.groups.keys`.
For systemd services this can be achieved by setting `serviceConfig.SupplementaryGroups = [ config.users.groups.keys.name ];`
in the service configuration.
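Put together, a minimal sketch could look like this (the service name, owner, and secrets file are illustrative, not taken from this repository):

```
{ config, pkgs, ... }: {
  sops.secrets.example-key = {
    sopsFile = ./secrets/example.yaml;  # hypothetical secrets file
    owner = config.users.users.nerf.name;
    mode = "0400";
  };

  # hypothetical service reading the secret as a non-root process
  systemd.services.example-service = {
    serviceConfig.SupplementaryGroups = [ config.users.groups.keys.name ];
    script = "cat /run/secrets/example-key";
  };
}
```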
## impermanence

@@ -278,3 +78,4 @@ something like this:

{lib, pkgs, config, ...} :
<module code >
```
@@ -1,30 +0,0 @@

{lib, ...} :
with lib;

let
  admins = {
    nerf = {
      hashedPassword =
        "$y$j9T$SJcjUIcs3JYuM5oyxfEQa/$tUBQT07FK4cb9xm.A6ZKVnFIPNOYMOKC6Dt6hadCuJ7";
      keys = [
        "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEdA4LpEGUUmN8esFyrNZXFb2GiBID9/S6zzhcnofQuP nerf@nerflap2"
      ];
    };
  };

  mkAdmin = name :
    {hashedPassword, keys}: {
      "${name}" = {
        isNormalUser = true;
        createHome = true;
        extraGroups = [ "wheel" ];
        group = "users";
        home = "/home/${name}";
        openssh.authorizedKeys = { inherit keys; };
        inherit hashedPassword;
      };
    };

in {
  users.users = mkMerge (mapAttrsToList mkAdmin admins);
}
@@ -1,9 +1,8 @@

-{pkgs, config, lib, modulesPath, ...} : {
+{pkgs, config, lib, ...} : {

  imports = [
    ./admins.nix
    ./nix_keys.nix
-   (modulesPath + "/virtualisation/xen-domU.nix")
    ../modules/impermanence.nix
  ];
  nix = {

@@ -25,35 +24,9 @@ networking = {

  users = {
    mutableUsers = false;
    users.root.hashedPassword = "!";
  };

  impermanence.enable = true;

  sops.age.sshKeyPaths = [ "/etc/ssh/ssh_host_ed25519_key" ];

  environment = {
    systemPackages = builtins.attrValues {
      inherit (pkgs)
        htop lsof tmux btop;
    };
  };

  services = {
    journald.extraConfig = "SystemMaxUse=5G";

    nginx = {
      recommendedOptimisation = true;
      recommendedGzipSettings = true;
      recommendedTlsSettings = true;
    };

    openssh = {
      enable = true;
      settings = {
        PermitRootLogin = "no";
        PasswordAuthentication = false;
      };
    };
  };
}
@@ -1,6 +0,0 @@

{
  imports = [ ];
  nix.settings.trusted-public-keys = [
    "nerflap2-1:pDZCg0oo9PxNQxwVSQSvycw7WXTl53PGvVeZWvxuqJc="
  ];
}