Compare commits

..

33 commits

Author SHA1 Message Date
dennis
554c5c89a8 Merge pull request 'nyarlathtop' (#14) from dennis/nixConfig:nyarlathtop into nyarlathtop
Reviewed-on: Fachschaft/nixConfig#14
2023-10-01 10:40:02 +00:00
f6091a935a
fixed ssh paths for impermanence 2023-09-30 15:07:12 +02:00
3b01487d1d
set up hostname for nyarlathotep 2023-09-29 13:11:20 +02:00
377ff0141e
changed to seperate boot partition 2023-09-29 01:47:01 +02:00
6e4469fa8f
disable root login 2023-09-29 01:13:30 +02:00
2ffe242e8f
changed nyarlathotep disk config for impermanence 2023-09-29 00:03:06 +02:00
889d0a8736
changed impermanence config for subvolumes 2023-09-28 23:34:34 +02:00
08f06f3a92
changed nyarlathotep disk layout for impermanence 2023-09-28 17:47:00 +02:00
4f29103fdb
[#9] first impermanence config support 2023-09-28 17:45:54 +02:00
977bfa7114
fixed a merge thingy in README 2023-09-25 22:03:19 +02:00
013ef7d979
Merge branch 'nyarlathtop' of ssh://gitea.mathebau.de:3022/Fachschaft/nixConfig into nyarlathtop 2023-09-25 21:58:29 +02:00
12a20c4c52
Merge branch 'nyarlathtop' of ssh://gitea.mathebau.de:3022/dennis/nixConfig into nyarlathtop 2023-09-25 21:57:04 +02:00
8d3731eeb3
added a comment regarding the use of pkgs.nixos 2023-09-25 21:54:47 +02:00
bc8b37f38d
refactored xen_guest.nix 2023-09-25 21:54:46 +02:00
72c98986a0
some documentation I wrote without proofreading at 2 in the morning 2023-09-25 21:54:43 +02:00
53787ba7bb
/var/mail is special OOOPS 2023-09-25 21:50:36 +02:00
cb771c4abb
fixed small error in trusted nix keys handling 2023-09-25 21:50:35 +02:00
ba8862cb0c
first running config (fingers crossed) 2023-09-25 21:50:35 +02:00
0c6bb20db2
updated dependencies 2023-09-25 21:50:35 +02:00
60885b4cb5
added actual hardware identifiers & atual network config 2023-09-25 21:50:07 +02:00
fe7ea8aee1
first working steps on nyarlathotep 2023-09-25 21:48:15 +02:00
a9a95f4ca3
added sensible credentials to nerf user 2023-09-25 21:48:15 +02:00
152debbb36
disable debug flag, as logs are getting to large 2023-09-25 18:40:13 +02:00
eefaddbaed
make /tmp/ a tmpfs 2023-09-25 16:05:41 +02:00
d89313e25d
refactored xen_guest.nix 2023-09-24 02:04:39 +02:00
e1912d8538
some documentation I wrote without proofreading at 2 in the morning 2023-09-24 01:50:41 +02:00
9d0eb74928
/var/mail is special OOOPS 2023-09-22 21:33:23 +02:00
23283f6141
fixed small error in trusted nix keys handling 2023-09-22 20:00:35 +02:00
fc1fb67061
first running config (fingers crossed) 2023-09-22 19:36:48 +02:00
10ec752fa6
updated dependencies 2023-09-22 15:32:16 +02:00
2b0eec7dbf
added actual hardware identifiers & atual network config 2023-09-22 15:10:57 +02:00
f9672df9cd
first working steps on nyarlathotep 2023-09-22 15:09:15 +02:00
4608d5a65f
added sensible credentials to nerf user 2023-09-22 15:09:01 +02:00
33 changed files with 380 additions and 1257 deletions

2
.gitignore vendored
View file

@ -2,4 +2,4 @@
# Ignore build outputs from performing a nix-build or `nix build` command
result
result-*
.pre-commit-config.yaml

View file

@ -1,33 +1,16 @@
keys:
- &nerf age1rasjnr2tlv9y70sj0z0hwpgpxdc974wzg5umtx2pnc6z0p05u3js6r8sln
- &gonne age1epz92k2rkp43hkrg3u0jgkzhnkwx8y43kag7rvfzwl9wcddelvusyetxl7
- &nyarlathotep age1s99d0vlj5qlm287n98jratql5fypvjrxxal0k5jl2aw9dcc8kyvqw5yyt4
- &bragi age1lqvgpmlemyg9095ujck64u59ma29656zs7a4yxgz4s6u5cld2ccss69jwe
- &lobon age12nz7dtc0m5wasxm4r9crtkgwnzvauyfp0xh0n8z8jld0arn9ea9qe0agvn
creation_rules:
- path_regex: nixos/machines/nyarlathotep/.*
- path_regex nixos/machines/nyarlathotep/.*
key_groups:
- age:
- *nerf
- *gonne
- *nyarlathotep
- path_regex: nixos/machines/bragi/.*
key_groups:
- age:
- *nerf
- *gonne
- *bragi
- path_regex: nixos/machines/lobon/.*
key_groups:
- age:
- *nerf
- *gonne
- *lobon
*nerf
*nyarlathotep
# this is the catchall clause if nothing above matches. Encrypt to users but not
# to machines
- key_groups:
- age:
- *nerf
- *gonne
*nerf

250
README.md
View file

@ -1,12 +1,11 @@
# nixConfig
This repository contains the configuration of all our machines running NixOS.
## Build a machine
There are multiple ways to build and deploy a machine configuration; which is the
most appropriate depends on the context and scenario. So first there is a general
explanation of how this works, and afterwards we will talk about some scenarios.
If you run `nix flake show`, you should get an output similar to this
```
$ nix flake show
git+file:///home/nerf/git/nixConfig?ref=refs%2fheads%2fnyarlathtop&rev=9d0eb749287d1e9e793811759dfa29469ab706dc
@ -26,7 +25,7 @@ git+file:///home/nerf/git/nixConfig?ref=refs%2fheads%2fnyarlathtop&rev=9d0eb7492
└───packages
└───x86_64-linux
```
we can see there is an output called `nixosConfigurations.nyarlathotep`, which contains the configuration of the machine
called nyarlathotep. `nixosConfigurations` is special in the sense that `nixos-rebuild` will automatically look
for this key and assume how it is structured. The interesting part for us is the derivation `config.system.build.toplevel`.
Its closure contains the whole system, and the resulting derivation is a script that changes the current system to
@ -37,19 +36,18 @@ So what we want to achieve is to populate the nix store of the target machine with
### Local
It has multiple benefits to build the system configuration on the local computer and push it to the target server.
For example, one doesn't stress the server with the load of evaluating the expression and building the closure. Also, the server
doesn't need to fetch the build dependencies this way, and one gets a local check that at least the nix syntax is correct.
And so on...
#### Build
If you have this repository locally in your current directory, you can just run
```
$ nix build .#nixosConfigurations.<name>.config.system.build.toplevel
```
to build the system configuration of the machine `<name>`.
But you don't need to clone this repository; for more on flake urls, see the `nix flake --help` documentation.
#### Copy
After we build the derivation we need to get the closure onto the target system. Luckily nix has tools to do that
@ -57,156 +55,56 @@ via ssh. We could just run:
```
$ nix copy -s --to <however you setup your ssh stuff> .#nixosConfigurations.<name>.config.system.build.toplevel
```
This will evaluate the flake again to get the store path of the given derivation. If we want to avoid this,
we might supply the corresponding store path directly instead of the derivation name.
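For example (a sketch; the store path placeholder stands for whatever path the build step printed):
```
$ nix copy -s --to <however you setup your ssh stuff> /nix/store/<hash>-nixos-system-<name>-<version>
```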
The `-s` is important: it makes the target machine substitute all derivations it can (by default from cache.nixos.org).
So you only upload configuration files and self-built things.
To be able to copy things to a machine they need to be signed by someone trusted. Additional trusted nix keys are handled
in `./nixos/roles/nix_keys.nix`. So to get yourself trusted you either need to install one derivation from the machine itself,
or find someone who is already trusted to push your key.
For more information on signing and key creation see `nix store sign --help` and `nix key --help`.
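A possible workflow is sketched below (the key name and file paths are only placeholders; double-check the exact flags against the `--help` output of your nix version):
```
$ nix key generate-secret --key-name <yourname>-1 > ~/nix-signing.key      # create a signing key pair
$ nix key convert-secret-to-public < ~/nix-signing.key                     # this public key goes into nix_keys.nix
$ nix store sign --key-file ~/nix-signing.key --recursive .#nixosConfigurations.<name>.config.system.build.toplevel
```
The resulting public key would then have to be added to `./nixos/roles/nix_keys.nix` by someone who is already trusted.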
#### Activate
Log into the remote machine and execute (with root privileges)
```
# /nix/store/<storepath>/bin/switch-to-configuration boot
```
That will set up a configuration switch at reboot. You can also switch the configuration live. For more
details consult the `--help` output of that script. The store path (or at least the hash of the derivation)
is exactly the same as it was on your machine.
If you have `nixos-rebuild` available on your system, it can automate these things with the `--flake` and
`--target-host` parameters. But there are some pitfalls, so look at the `nixos-rebuild` documentation beforehand.
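A sketch of such an invocation (user and host are placeholders; you still need the appropriate rights on the target machine):
```
$ nixos-rebuild boot --flake .#<name> --target-host <user>@<host> --use-remote-sudo
```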
### On the machine
Clone this repository to `/etc/nixos/` and run `nixos-rebuild boot` or `nixos-rebuild switch`; that will select
the appropriate machine based on hostname.
If the hostname is not correct, or you don't want to clone this flake, you can also use the `--flake` parameter.
In any case, to switch the system configuration you will need to have root privileges on the target machine.
## Installing a new machine
You have written a configuration and now want to deploy it as a new machine. You need to get the built configuration onto the
`nixos-installer` machine (regarding this machine see issue [#10]). You can either use any of the
versions above, or just continue, in which case the machine will build the configuration implicitly.
### Disk layout
You will need to assemble the disk layout manually; we assume you do it below `/mnt`, as the nixos-install tools
assume this as the default location (they have an option to change that; consult their `--help` pages).
This repository loads some default configuration that expects certain things. The hardware configuration of that machine should
reflect those:
- `"/"` is a tmpfs
- `"/persist"` is the place where we keep data that cannot be regenerated at any boot, so this should be a permanent disk
- `"/nix"` is the place the nix store resides; it is needed to boot the machine and should also be persistent
- `"/boot"` is the place for bootloader configuration and kernel, also persistent
- any additional data paths for your machine specific needs. Choose filesystems accordingly.
My recommendation is to put `"/persist"` and `"/nix"` on a joint btrfs as subvolumes and `"/boot"` on separate disks (because grub
will give you a hard time if you do it as a subvolume or bind mount (even though that should be possible but is an upstream problem)).
For how to configure additional persistent data
to be stored in `"/persist"`, look at the impermanence section as soon as it is merged. Before that, look at issue [#9].
I do not recommend this for actual high-access application data like databases, mailboxes and the like. You should
think about this as data that, if lost, can be regenerated with only little effort and is read/written only a few times
during setup (like the server ssh keys, for example). The configuration also sets up some paths for `"/persist"` automatically;
again, look at the impermanence sections.
#### File system uuids
You might end up with a bit of a chicken-and-egg problem regarding filesystem uuids, since you need to set them in your system configuration.
There are two ways around that: either generate the filesystems, read out the uuids, and push them into the repository holding
the configuration you want to build; or generate the uuids first, have them in your configuration, and set them upon filesystem creation. Most
`mkfs` utilities have an option for that, as sketched below.
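For example, something along these lines (device names and uuids are placeholders, and the exact flags may differ between `mkfs` implementations):
```
$ uuidgen                                         # generate one uuid per filesystem up front
$ mkfs.ext4  -U <boot-uuid> /dev/<boot-partition>
$ mkfs.btrfs -U <data-uuid> /dev/<data-partition>
```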
### Installing
Just run
```
nixos-install --flake 'git+https://gitea.mathebau.de/Fachschaft/nixConfig?ref=<branchname>#<name>'
```
where `<branchname>` is the branch you install from and `<name>` is the name of the configuration you build.
If the built system is already in the nix store, this will start the installation; otherwise it will first attempt to build
it. That should be the whole installation process: just reboot. The machine should be fully set up, with no additional user
or service setup needed after the reboot.
## How to write a new machine configuration
It is best to first take a look at already existing configurations, but here are a few guidelines.
Make a new folder in `/nixos/machines`. The name of the folder should match the hostname of your
machine. The only technically required file in there is `configuration.nix`. So create it.
A good skeleton is probably:
```
{config, pkgs, lib, flake-inputs, ... }: {
imports = [
./hardware-configuration.nix
../../roles
./network.nix
<your additional imports here>
];
<your system config here>
networking.hostName = "<your hostname>"; # this will hopefully disappear if I have time to refactor this.
system.stateVersion = "<state version at time of install>";
}
```
The import of `../../roles` loads all the nice default setup that all these machines have in common. There the
impermanence configuration is loaded as well as ssh, sops, shared user configuration and much more.
The other two imports are suggestions for how you should organize your configuration, but are not enforced by anything.
In your hardware
configuration you should basically only write your filesystem layout and your hostPlatform. The bootloader stuff
is already taken care of by `../../roles`.
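A minimal sketch of such a hardware configuration, mirroring the existing machines (the labels are placeholders for those of your disks):
```
{lib, ...}: {
  # "/" is a tmpfs, as expected by the shared roles
  fileSystems."/" = {
    device = "root";
    fsType = "tmpfs";
    options = ["size=1G" "mode=755"];
  };
  # persistent data and the nix store live on a joint btrfs as subvolumes
  fileSystems."/persist" = {
    device = "/dev/disk/by-label/nixos"; # placeholder label
    fsType = "btrfs";
    options = ["subvol=persist"];
    neededForBoot = true;
  };
  fileSystems."/nix" = {
    device = "/dev/disk/by-label/nixos"; # placeholder label
    fsType = "btrfs";
    options = ["subvol=nix"];
  };
  fileSystems."/boot" = {
    device = "/dev/disk/by-label/boot"; # placeholder label
    fsType = "ext4";
  };
  nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
}
```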
The `flake-inputs` argument is optional, but you can use it if you need to get a hold of the flake inputs;
otherwise this is a completely normal nixos system configuration module (with a lot of settings already imported
from `../../roles`).
At the moment of writing, `network.nix` should contain the ip, nameserver and default gateway setup. Parts of
this are constant across all systems and will undergo a refactor soon.
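A minimal sketch of such a `network.nix`, mirroring what the existing machines do (interface name and addresses are placeholders for your machine's values):
```
{
  networking = {
    interfaces.enX0.ipv4.addresses = [
      {
        address = "192.168.0.2"; # placeholder address
        prefixLength = 16;
      }
    ];
    defaultGateway = "192.168.0.155"; # placeholder gateway
    nameservers = ["130.83.22.63" "130.83.22.60" "130.83.56.60"];
  };
}
```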
I would recommend splitting your configuration into small files that you import. If something is machine specific (like
being tied to your ip address or hostname), put it into the machine directory. If it is not, put it into `/nixos/roles/`; if it
is not machine specific but has options to set, put it in `/nixos/modules`.
## How this flake is organized
This flake uses `flake-parts`; see [flake.parts](https://flake.parts) for more details. It makes handling
`system` and some other module-related things more convenient.
For the general layout of nixos system configuration and modules, please see the corresponding documentation.
The toplevel `flake.nix` contains the flake inputs as usual and only calls a file `flake-module.nix`.
This toplevel `flake-module.nix` imports further, more specialized `flake-module.nix` files from sub-directories.
Right now the only one is `nixos/flake-module.nix`. But if we start to ship our own software (or software versions
with specific build flags), there might be more.
### nixos
The `nixos` folder contains all machine configurations. It separates into two folders, `nixos/machines` and `nixos/roles`.
The corresponding `flake-module.nix` file automatically searches for `machines/<name>/configuration.nix`, evaluates
those as nixos configurations, and populates the flake.
#### machines
`nixos/machines` contains all machine specific configuration (in a sub-folder per machine): hardware configuration, specific
network configuration, and service configuration that is too closely interwoven with the rest of that machine (for example
the mailserver configuration depends heavily on network settings). It also
contains the root configuration for that machine, called `configuration.nix`. This file usually only includes other modules.
These `configuration.nix` files are almost usual nix configurations. The only difference is that they take the flake inputs
as an extra argument. This allows them to load modules from these flakes. For example, nyarlathotep loads the simple-nixos-mailserver
module that way.
#### roles
`nixos/roles` contains configuration that is potentially shared by some machines. It is expected that `nixos/roles/default.nix`
is imported as (`../../roles`) in every machine. Notable are the files `nixos/roles/admins.nix` which contains
common admin accounts for these machines and `nixos/roles/nix_keys.nix` which contains the additional trusted
keys for the nix store.
@ -216,96 +114,20 @@ keys for the nix store.
We are sharing secrets using [`sops`](https://github.com/getsops/sops) and [`sops-nix`](https://github.com/Mic92/sops-nix).
As of right now we only use `age` keys.
The machine keys are derived from their server ssh keys, which they generate at first boot.
To read out a machine's public key, run the following command on the corresponding machine.
```
$ nix-shell -p ssh-to-age --run 'cat /etc/ssh/ssh_host_ed25519_key.pub | ssh-to-age'
```
User keys are generated by the users.
New keys and machines need entries in the `.sops.yaml` file within the root directory of this repository.
To make a secret available on a given machine you need to configure the following keys:
```
sops.secrets.example-key = {
  sopsFile = "relative path to a file in the repo containing the secrets (optional, else sops.defaultSopsFile is used)";
  path = "optional path where the secret gets symlinked to, practical if some program expects a specific path";
  owner = user that owns the secret file: config.users.users.nerf.name (for example);
  group = same as owner, just with groups: config.users.users.nerf.group;
  mode = "permissions in the usual octal notation: 0400 (for example)";
};
```
Afterwards the secret should be available in `/run/secrets/example-key`.
If the accessing process is not root, it must be a member of the group `config.users.groups.keys`.
For systemd services this can be achieved by setting `serviceConfig.SupplementaryGroups = [ config.users.groups.keys.name ];`
in the service configuration, as sketched below.
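For illustration, a sketch (the service name is made up):
```
systemd.services.my-secret-consumer = {
  # hypothetical service that needs to read a secret from /run/secrets
  serviceConfig.SupplementaryGroups = [ config.users.groups.keys.name ];
};
```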
## impermanence
These machines are set up with `"/"` as a tmpfs. This is there to keep the machines clean. So no clutter in home
directories, no weird ad-hoc solutions of botching something into `/opt/` or something like this. All will be
gone at reboot.
But there are some files that we want to survive reboots, for example logs or ssh keys. The solution to this is
to have a persistent storage mounted at `/persist` and automatically bind mount the paths of persistent things
to the right places. To set this up we are using the impermanence module. In our configuration this is loaded with
some default files to bind mount (ssh keys, machine-id, some nixos specific things) that we have on all machines.
If you keep your application data (as recommended) on a separate partition, chances are you don't need
to interact with this, as most configuration files will be in the nix store anyway. If the application wants these nix
store files in certain directories, you should use the `environment.etc` family of options (consult the nixos documentation
for this). This is for mutable files that are not core application data (like ssh keys; for a mailserver one could
think about the hash files (not the db files) of an alias map, if one doesn't want to manage that with
the nix store, things like that).
This should not be (but could be) used for large application databases; it would be more appropriate to mount
a dedicated filesystem for things like that. For small configuration files that are not in the nix store,
this might be the appropriate solution.
By default the storage is called `persist` and the default path for it is `/persist`. These can be changed
with the `impermanence.name` and `impermanence.storagePath` options. To add paths to this storage you do the
following.
```
environment.persistence.${config.impermanence.name} = {
directories = [
"<your path to a directory to persist>"
];
files = [
"<your path to a file to persist>"
];
};
```
For this to work, `config` must be bound by the function arguments of your module. So the start of your module looks
something like this:
```
{lib, pkgs, config, ...} :
<module code >
```
# Contributing
Like with all FS projects, you are welcome to contribute. Work is usually done by the person that is most annoyed
by the circumstances or by the person that didn't run fast enough. So we are happy if we get help. That doesn't
mean that we don't need some level of quality; people after us need to work with it. It is live infrastructure
and downtime hurts someone (and at the wrong moment even really badly (Matheball ticket sales, for example)).
So here are some guidelines.
## Coding style and linting.
If you run `nix flake check`, there are automated checks in place; please make sure to pass them.
There is also a code autoformatter (`alejandra`) incorporated into those. If you want to run
it, you can do so via the development shell or by running `nix fmt`.
You can also install
the checks into your local git repository as pre-commit hooks, and set up a shell that has
even more tooling, by running `nix develop`. That will give you a bash in which you can run
all the checks manually with `pre-commit run -a`. This will also run the autoformatter.
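In short:
```
$ nix develop        # enter the development shell with the tooling
$ pre-commit run -a  # run all checks, including the autoformatter
```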
## Process for submitting changes
1. If it is something bigger, please open an issue first describing what and why you want to do something.
If it is just something small, skip this step.
2. Fork the repo and implement your changes in a branch on your fork. Afterwards open a pull request against
the main branch (possibly mentioning the issue).
- Your branch should be based on an up-to-date version of main; if it is not, consider rebasing.
3. You will need to find someone with the proper rights to approve your changes, but most of the time there will be requests
for changes first.

View file

@ -1,62 +1,24 @@
{inputs, ...}: {
{inputs, ...}:
{
# debug = true;
# We only define machine configs in this flake yet, so we only include
# the module that builds these. This file might get fuller if we need to
# build our own packages that are not flakes.
imports = [
./nixos/flake-module.nix
inputs.pre-commit-hooks.flakeModule
imports = [ ./nixos/flake-module.nix
# To import a flake module
# 1. Add foo to inputs
# 2. Add foo as a parameter to the outputs function
# 3. Add here: foo.flakeModule
];
systems = [ "x86_64-linux"];
perSystem = {
config,
pkgs,
system,
...
}: {
devShells.default = config.pre-commit.devShell;
pre-commit = let
generatedFiles = [
"hardware-configuration\\.nix"
];
in {
check.enable = true;
settings = {
hooks = {
nil.enable = true;
statix = {
enable = true;
settings = {
format = "stderr";
ignore = generatedFiles;
};
};
deadnix = {
enable = true;
excludes = generatedFiles;
};
alejandra.enable = true;
};
};
};
formatter = pkgs.alejandra;
# perSystem = { config, self', inputs', pkgs, system, ... }: {
# Per-system attributes can be defined here. The self' and inputs'
# module parameters provide easy access to attributes of the same
# system.
_module.args.pkgs = import inputs.nixpkgs {
inherit system;
config.permittedInsecurePackages = ["jitsi-meet-1.0.8043"];
};
};
# Equivalent to inputs'.nixpkgs.legacyPackages.hello;
# };
# flake = {
# The usual flake attributes can be defined here, including system-
# agnostic ones like nixosModule and system-enumerating ones, although

View file

@ -21,11 +21,11 @@
"nixpkgs-lib": "nixpkgs-lib"
},
"locked": {
"lastModified": 1727826117,
"narHash": "sha256-K5ZLCyfO/Zj9mPFldf3iwS6oZStJcU4tSpiXTMYaaL0=",
"lastModified": 1693611461,
"narHash": "sha256-aPODl8vAgGQ0ZYFIRisxYG5MOGSkIczvu2Cd8Gb9+1Y=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "3d04084d54bedc3d6b8b736c70ef449225c361b1",
"rev": "7f53fdb7bdc5bb237da7fefef12d099e4fd611ca",
"type": "github"
},
"original": {
@ -35,11 +35,11 @@
},
"impermanence": {
"locked": {
"lastModified": 1727649413,
"narHash": "sha256-FA53of86DjFdeQzRDVtvgWF9o52rWK70VHGx0Y8fElQ=",
"lastModified": 1694622745,
"narHash": "sha256-z397+eDhKx9c2qNafL1xv75lC0Q4nOaFlhaU1TINqb8=",
"owner": "nix-community",
"repo": "impermanence",
"rev": "d0b38e550039a72aff896ee65b0918e975e6d48e",
"rev": "e9643d08d0d193a2e074a19d4d90c67a874d932e",
"type": "github"
},
"original": {
@ -53,14 +53,16 @@
"blobs": "blobs",
"flake-compat": [],
"nixpkgs": [],
"nixpkgs-24_05": "nixpkgs-24_05"
"nixpkgs-22_11": "nixpkgs-22_11",
"nixpkgs-23_05": "nixpkgs-23_05",
"utils": "utils"
},
"locked": {
"lastModified": 1722877200,
"narHash": "sha256-qgKDNJXs+od+1UbRy62uk7dYal3h98I4WojfIqMoGcg=",
"lastModified": 1689976554,
"narHash": "sha256-uWJq3sIhkqfzPmfB2RWd5XFVooGFfSuJH9ER/r302xQ=",
"ref": "refs/heads/master",
"rev": "af7d3bf5daeba3fc28089b015c0dd43f06b176f2",
"revCount": 593,
"rev": "c63f6e7b053c18325194ff0e274dba44e8d2271e",
"revCount": 570,
"type": "git",
"url": "https://gitlab.com/simple-nixos-mailserver/nixos-mailserver.git"
},
@ -71,11 +73,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1728492678,
"narHash": "sha256-9UTxR8eukdg+XZeHgxW5hQA9fIKHsKCdOIUycTryeVw=",
"lastModified": 1695145219,
"narHash": "sha256-Eoe9IHbvmo5wEDeJXKFOpKUwxYJIOxKUesounVccNYk=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "5633bcff0c6162b9e4b5f1264264611e950c8ec7",
"rev": "5ba549eafcf3e33405e5f66decd1a72356632b96",
"type": "github"
},
"original": {
@ -85,77 +87,76 @@
"type": "github"
}
},
"nixpkgs-24_05": {
"nixpkgs-22_11": {
"locked": {
"lastModified": 1717144377,
"narHash": "sha256-F/TKWETwB5RaR8owkPPi+SPJh83AQsm6KrQAlJ8v/uA=",
"lastModified": 1669558522,
"narHash": "sha256-yqxn+wOiPqe6cxzOo4leeJOp1bXE/fjPEi/3F/bBHv8=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "805a384895c696f802a9bf5bf4720f37385df547",
"rev": "ce5fe99df1f15a09a91a86be9738d68fadfbad82",
"type": "github"
},
"original": {
"id": "nixpkgs",
"ref": "nixos-24.05",
"ref": "nixos-22.11",
"type": "indirect"
}
},
"nixpkgs-23_05": {
"locked": {
"lastModified": 1684782344,
"narHash": "sha256-SHN8hPYYSX0thDrMLMWPWYulK3YFgASOrCsIL3AJ78g=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "8966c43feba2c701ed624302b6a935f97bcbdf88",
"type": "github"
},
"original": {
"id": "nixpkgs",
"ref": "nixos-23.05",
"type": "indirect"
}
},
"nixpkgs-lib": {
"locked": {
"lastModified": 1727825735,
"narHash": "sha256-0xHYkMkeLVQAMa7gvkddbPqpxph+hDzdu1XdGPJR+Os=",
"type": "tarball",
"url": "https://github.com/NixOS/nixpkgs/archive/fb192fec7cc7a4c26d51779e9bab07ce6fa5597a.tar.gz"
"dir": "lib",
"lastModified": 1693471703,
"narHash": "sha256-0l03ZBL8P1P6z8MaSDS/MvuU8E75rVxe5eE1N6gxeTo=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "3e52e76b70d5508f3cec70b882a29199f4d1ee85",
"type": "github"
},
"original": {
"type": "tarball",
"url": "https://github.com/NixOS/nixpkgs/archive/fb192fec7cc7a4c26d51779e9bab07ce6fa5597a.tar.gz"
"dir": "lib",
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-stable": {
"locked": {
"lastModified": 1728156290,
"narHash": "sha256-uogSvuAp+1BYtdu6UWuObjHqSbBohpyARXDWqgI12Ss=",
"lastModified": 1694908564,
"narHash": "sha256-ducA98AuWWJu5oUElIzN24Q22WlO8bOfixGzBgzYdVc=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "17ae88b569bb15590549ff478bab6494dde4a907",
"rev": "596611941a74be176b98aeba9328aa9d01b8b322",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "release-24.05",
"ref": "release-23.05",
"repo": "nixpkgs",
"type": "github"
}
},
"pre-commit-hooks": {
"inputs": {
"flake-compat": [],
"gitignore": [],
"nixpkgs": [],
"nixpkgs-stable": []
},
"locked": {
"lastModified": 1728727368,
"narHash": "sha256-7FMyNISP7K6XDSIt1NJxkXZnEdV3HZUXvFoBaJ/qdOg=",
"owner": "cachix",
"repo": "pre-commit-hooks.nix",
"rev": "eb74e0be24a11a1531b5b8659535580554d30b28",
"type": "github"
},
"original": {
"owner": "cachix",
"repo": "pre-commit-hooks.nix",
"type": "github"
}
},
"root": {
"inputs": {
"flake-parts": "flake-parts",
"impermanence": "impermanence",
"nixos-mailserver": "nixos-mailserver",
"nixpkgs": "nixpkgs",
"pre-commit-hooks": "pre-commit-hooks",
"sops-nix": "sops-nix"
}
},
@ -167,11 +168,11 @@
"nixpkgs-stable": "nixpkgs-stable"
},
"locked": {
"lastModified": 1728345710,
"narHash": "sha256-lpunY1+bf90ts+sA2/FgxVNIegPDKCpEoWwOPu4ITTQ=",
"lastModified": 1695284550,
"narHash": "sha256-z9fz/wz9qo9XePEvdduf+sBNeoI9QG8NJKl5ssA8Xl4=",
"owner": "Mic92",
"repo": "sops-nix",
"rev": "06535d0e3d0201e6a8080dd32dbfde339b94f01b",
"rev": "2f375ed8702b0d8ee2430885059d5e7975e38f78",
"type": "github"
},
"original": {
@ -179,6 +180,21 @@
"repo": "sops-nix",
"type": "github"
}
},
"utils": {
"locked": {
"lastModified": 1605370193,
"narHash": "sha256-YyMTf3URDL/otKdKgtoMChu4vfVL3vCMkRqpGifhUn0=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "5021eac20303a61fafe17224c087f5519baed54d",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
}
},
"root": "root",

View file

@ -17,15 +17,6 @@
impermanence = {
url = "github:nix-community/impermanence";
};
pre-commit-hooks = {
url = "github:cachix/pre-commit-hooks.nix";
inputs = {
flake-compat.follows = "";
gitignore.follows = "";
nixpkgs-stable.follows = "";
nixpkgs.follows = "";
};
};
};
outputs = inputs@{ flake-parts, ... }:

View file

@ -1,30 +1,29 @@
# copied and adopted from maralorns config
# This automatically searches for nixos configs in ./machines/${name}/configuration.nix
# and exposes them as outputs.nixosConfigurations.${name}
{
withSystem,
lib,
inputs,
...
}: {
#
# a comment regarding pkgs.nixos vs lib.nixosSystem
# while lib.nixosSystem is the usual end-user way to evaluate nixos configurations
# in flakes, pkgs.nixos sets the package set to the packages it comes from.
# This spares us tracking our potential overlays and own package additions, by just
# using the right package set to begin with. Using lib.nixosSystem from the flake we would
# need to specify that again.
{ withSystem, lib, inputs, ... }: {
flake = {
nixosConfigurations = withSystem "x86_64-linux" ({pkgs, ...}: let
nixosConfigurations = withSystem "x86_64-linux" ({ pkgs, ... }:
let
machines = builtins.attrNames (builtins.readDir ./machines);
makeSystem = name: let
importedConfig = import (./. + "/machines/${name}/configuration.nix");
systemConfig =
if lib.isFunction importedConfig
then x: importedConfig (x // {flake-inputs = inputs;})
else importedConfig;
in
makeSystem = name:
pkgs.nixos {
imports = [
systemConfig
(import (./. + "/machines/${name}/configuration.nix") inputs)
inputs.sops-nix.nixosModules.sops
inputs.impermanence.nixosModules.impermanence
];
};
in
lib.genAttrs machines makeSystem);
in lib.genAttrs machines makeSystem);
};
}

View file

@ -1,39 +0,0 @@
backupKey: ENC[AES256_GCM,data:PBdeV6uQ/Jg9xk7HXylyDKCBdny/XRflF752arUZAnUvmVv4yiSwOY9ua3tH1BDpddiql1aNJqmfatZOB3JKB2mHnyeSt7L0B81zuIFpxJOdsnGACviH6sUfsC5ogkGRhLKynf5Ghz/6xanthyK6euIpAAu05wDWcseg4y8k5rdFhL7rasmOMi0oVLN54Psmyf9vahfX6BNGBHQA1qJyeaI5iDLI+6gh7dtOXjTd4pHX8T9PEYpGnOBMHvaaVA2r7z27iUJSKqzzSB9B/rm01tI6LTG/yfQU+TKlWFU2iIodCG7eJ2qe+exvxOlEj9At/UI/Kd+dNSHcffLDUxVcstthOOP0TUdHkPCTC8BEJtAAexUBqSv+LTOPfJeAbIw3QhEpfeyZOpw2FY1qstQ/G0+1sDF8762uJu9v1amx4+4e8NEiSa/dtdnQAKFiEswk+5nqRSKQJOH3w0tHr9NRhhS775lUtUX4DW0xN42QeABM0xg56qJPxPNif3K2ovPX3BTSSZnQ0EZzuxployu1MyJxpcBT+6qa8l/h,iv:ZdivBorDtIyBOs7XSg/DHjReG+T6/exeS8ziA7ms7FM=,tag:gcIXgpyd2UeQV3APqCCxMg==,type:str]
sops:
kms: []
gcp_kms: []
azure_kv: []
hc_vault: []
age:
- recipient: age1rasjnr2tlv9y70sj0z0hwpgpxdc974wzg5umtx2pnc6z0p05u3js6r8sln
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBaR2dRc3NPeUwwaHdCL25V
RHNaWU9xRUw5dDlaOG5hczVlNm5UR01QUEVNClJsVFRBWU85Z0JuV1l3MDdvd1F2
RS9CcXhuNEJWdEE1cktXYjF3RW9wUDQKLS0tIHk3MURmWlJNanVZaHlUR3R2UEZG
K2JxOHpNY2hsTysrWjNLajFKQkxuNHcKaFMvnDt9a3HsnbP1Q/i4ifRIXFcXYn8z
YyOho0hSmWZNhTbltmuVKjvCNgt9ONVRW93uRDDoju8Odps0qwwvuA==
-----END AGE ENCRYPTED FILE-----
- recipient: age1epz92k2rkp43hkrg3u0jgkzhnkwx8y43kag7rvfzwl9wcddelvusyetxl7
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBMM1NCbHdFZDJvYjJjcmZ6
bUFjSG5OUEdydS9pTkNHRjFKb3gvWll0Q0RVCk56ZnhDa0NGeUNhVVdDZENieDFW
Q0xSNXhYQXZSVnI3WlRzUjhxOXRyM2sKLS0tIGhnVWJaRG4vSGpUcnQ5SFVFT3VQ
YUFzTlNLSE9CbW9oYTFsY0tpTE4vZTQKjurd87tDH8z58pAGJyVXRAu8Q2+k7e4G
zOGZhm5DpSmFv2O2fqXgBg8nT5wrPKQDFvcDh1P+a0753tUTbUttIA==
-----END AGE ENCRYPTED FILE-----
- recipient: age1lqvgpmlemyg9095ujck64u59ma29656zs7a4yxgz4s6u5cld2ccss69jwe
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBlWmlwS0E5TytFdEpxN09U
Y3k0SDhnM2h5Rnh1bXQ2czA5bWt1Mkk3aUFFCmtwT2ZmN0IweGdOYURWNDVHcWtH
R3lRaFRkcWYzb2g4NWNFQU5WOXZZaGMKLS0tIHpWNnNvVUNucE5MQ1cxQWl6Qm1x
NUZDVnJORXF1NGlyNUkzOGl2REFHdmsK18k9UfOmtFSep6mZcSp6di7SjvrBXgGp
oWtLehp1UFEHCgaU5YxlYhtkrrOhb8ykFb1on+kmzrloaHqyvks7Aw==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2024-03-21T16:38:08Z"
mac: ENC[AES256_GCM,data:kEVWd988Ia6T8v3w0slQhM0lh78VhnP8qJNa6IZg0NF2B0JQbFRnQNbUfvG9Rf4mkAR/O9PD+r6HR+b3LCwzb/Ok/eD4/M3+oPaEx/JnoHrzF/1N29VEAvBHjQgw6DL05toqu5G03UDcDUFGc111AeRsexhONQRHJx3zqWyWGy4=,iv:T5Pkhl3vhSAIoKkC3r3VQn3tC4t04WxvAZDQ4PMvD84=,tag:h0/aB91SFr5q0Or5daxWUQ==,type:str]
pgp: []
unencrypted_suffix: _unencrypted
version: 3.8.1

View file

@ -1,22 +0,0 @@
{config, ...}: {
imports = [
./hardware-configuration.nix
../../roles
../../roles/hardware.nix
./network.nix
../../modules/borgbackup.nix
];
services.mathebau-borgbackup.enable = true;
# System configuration here
networking.hostName = "bragi";
system.stateVersion = "23.11";
sops.secrets.backupKey = {
sopsFile = ./backupKey.yaml;
owner = config.users.users.fsaccount.name;
inherit (config.users.users.fsaccount) group;
mode = "0400";
};
}

View file

@ -1,32 +0,0 @@
{lib, ...}: {
fileSystems."/" = {
device = "root";
fsType = "tmpfs";
options = ["size=2G" "mode=755"];
};
fileSystems."/persist" = {
device = "/dev/disk/by-label/nixos";
fsType = "btrfs";
options = ["subvol=persist"];
neededForBoot = true;
};
fileSystems."/boot" = {
device = "/dev/disk/by-label/boot";
fsType = "ext4";
};
fileSystems."/nix" = {
device = "/dev/disk/by-label/nixos";
fsType = "btrfs";
options = ["subvol=nix"];
};
fileSystems."/var/lib/backups" = {
device = "/dev/disk/by-label/backups";
fsType = "btrfs";
};
swapDevices = [{device = "/dev/disk/by-label/swap";}];
boot.loader.grub.device = "/dev/disk/by-id/wwn-0x5000c5003891662c";
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
}

View file

@ -1,16 +0,0 @@
# We should put that config somewhere in roles and give it a parameter or something;
# everyone gets the same nameserver, and the same prefixLength and address vs defaultGateway always
# depend on the same thing
{
networking = {
interfaces.enp0s25.ipv4.addresses = [
{
address = "192.168.1.11";
prefixLength = 24;
}
];
defaultGateway = "192.168.1.137";
# https://www.hrz.tu-darmstadt.de/services/it_services/nameserver_dns/index.de.jsp
nameservers = ["130.83.22.63" "130.83.22.60" "130.83.56.60"];
};
}

View file

@ -1,19 +0,0 @@
{
imports = [
./hardware-configuration.nix
../../modules/jitsi.nix
../../roles
../../roles/vm.nix
../../modules/vmNetwork.nix
];
services.mathebau-jitsi = {
enable = true;
hostName = "meet.mathebau.de";
};
# System configuration here
networking.hostName = "ghatanothoa";
vmNetwork.ipv4 = "192.168.0.25";
system.stateVersion = "23.11";
}

View file

@ -1,30 +0,0 @@
{lib, ...}: {
imports = [];
fileSystems."/" = {
device = "gha-root";
fsType = "tmpfs";
options = ["size=1G" "mode=755"];
};
fileSystems."/persist" = {
device = "/dev/disk/by-uuid/e0a160ef-7d46-4705-9152-a6b602898136";
fsType = "btrfs";
options = ["subvol=persist"];
neededForBoot = true;
};
fileSystems."/boot" = {
device = "/dev/disk/by-uuid/19da7f3a-69da-4fa8-bb68-b355d7697ba7";
fsType = "ext4";
};
fileSystems."/nix" = {
device = "/dev/disk/by-uuid/e0a160ef-7d46-4705-9152-a6b602898136";
fsType = "btrfs";
options = ["subvol=nix"];
};
swapDevices = [{device = "/dev/disk/by-uuid/e6e3ba6b-c9f5-4960-b56d-f49760d76a4a";}];
nix.settings.max-jobs = lib.mkDefault 4;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
}

View file

@ -1,39 +0,0 @@
allowlistPass: ENC[AES256_GCM,data:bb9jXSvWeDnZqqiY/IarwA==,iv:qeFAYvXYdh2uEleg8kpCd77u4PTbwM8ydEkbMhyPz1I=,tag:1/eysyZb2mJ0mYHXIrpihw==,type:str]
sops:
kms: []
gcp_kms: []
azure_kv: []
hc_vault: []
age:
- recipient: age1rasjnr2tlv9y70sj0z0hwpgpxdc974wzg5umtx2pnc6z0p05u3js6r8sln
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSAySVhjV0xXdGE2am85RVJh
NXJLRy92blkzeENuWHh3QSsxNHBXcUpibGxnCnVHUEVoYVgxbk5WSmxQRXNzMC9i
Y1g4MUFrNEVjVjJWM0xhU0JzTzNZTk0KLS0tIFIrdmhrbXFHb2VaQ1p2dDJMMmlR
Um5CcGlZanBBRzJKOVNZeWVPTmsrcVUK905uViHD7uZMVQHPfFraIHXYTHaT+ERl
ZvyRDdjjRCyxu0qcIpYVpPAmfGCo0++bXSRUX8rCp48YN20MbPNjgA==
-----END AGE ENCRYPTED FILE-----
- recipient: age1xv5rfxkxg9jyqx5jg2j82cxv7w7ep4a3795p4yl5fuqf38f3m3eqfnefju
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBLNkNpN2RlcHBuOUxoYmkx
QzdOM1E0cFBSc1I0NzVRbmhiUXhjM3dQOWhnCmlOQzJ3b2Q5NFJkb2haMDNGSFBv
SkdySWtRUzhic1FNeXhiUFBPRVNoWmcKLS0tIGNaVW5xUmxWOEtXVkRqVEJJSEVv
NFBWREFQbnFXclhiNW51M0ZsOEMxdnMKdOPVRbD42q7MRw1CX1M30Xdil7VFLDVD
G8j4sjxlDkcwQK/3WjZdBLXAzJcrvAp0okGzw8lymC812CXTSEfmxw==
-----END AGE ENCRYPTED FILE-----
- recipient: age12nz7dtc0m5wasxm4r9crtkgwnzvauyfp0xh0n8z8jld0arn9ea9qe0agvn
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBKVVN2THloaU1pVnhtWDhm
TWpPaHNLSXlud0RLU3ovS0s4REtUTzQwMHhZClF5OFZQVHB2VG9BeThSYzVSMUFJ
VDNkT0Y1Y3RUemkwSmxlM0drUlNDR1UKLS0tIDYrcVhXMWJxR2dhcXhjdTQ3MjV1
Y3lWbHdLOGRGamhRY0xoRnVJczc2aFUKWWAflRwoszNw5bEDTSaVI65FtQve/HrC
uY1JvYwXLq4m4hu76dyrplDpzb8ant/YAUXpG6F4U7nn9GiLBaoyUQ==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2024-03-31T14:34:54Z"
mac: ENC[AES256_GCM,data:sjWiO96NcFUT4L9mdBuQwt6Zl5cS16o73zes30SYJxzM1R3ZBIg9oOmhXxY9BC3yKjEb6bVuemj/bnnopSR/m3RPH7xfaYCBfz97Zgc4SGtoqLIra5OUCRpWnKSsD6Nf09Qss5Pbla9EIrI0kQt7fpf4iKLF7VJwrQryslnvfcM=,iv:ilnbLK6sttweEyqszVHxVnjbTq8jF5ZTO24OEIPMprE=,tag:3XgAlXMl/RIaUfkVwHJeBQ==,type:str]
pgp: []
unencrypted_suffix: _unencrypted
version: 3.8.1

View file

@ -1,39 +0,0 @@
backupKey: ENC[AES256_GCM,data:/PErHUVZDTyqK+GKI2inDoEBQpSmezeBTgXWnrthc8IPtUFn4Ur2CkDo+MqfiAlSn9vT2ksHmyS5qmoGANG01e1Cm50qpt/BdoC2hh15jOVuc0uUBNOq7f5YBVeYtbemwjPcmbF7dgUeRlEAvxhqtX3/ntzxSB1inew/SsEgPrU4Yl0FF+CHhqgbeB/NJOhQY29/3hBGwMksfTUDymUmX6pUgIN1M26crIKFCn5IyqAXl10F+zL4PThZPnhmks7Y8BsGUbKkiE6ghdaUjEjBjGOGgbaGAjolG+nJ17xyM1Pc2speT4E/3VgAC34dpaByveGcf2SfsXir0KavcI86mUkjzaNF9u7GjGO0Szn742/aqbdUoOkJl41unb0Enf2/D4Up3fy6LrUqVqrHIM4Dea9WLQd0poD0FWSN12IKh+ylkouMkmhwLXUXFzIHOePS92/MsPM+9fLhH4cU64qxr9UzmfYRnNBpAHrjlxdkK9WZ1Oj4mdtu6R20vYkYcMIQgU38FvSN6uWGvPxJj+Ij,iv:ghvgkC6qFO/0tvsc7igCoZy7am8eNsd21WYCSAKiZDs=,tag:MFnk/Nnw+cloN+x7sd4LLg==,type:str]
sops:
kms: []
gcp_kms: []
azure_kv: []
hc_vault: []
age:
- recipient: age1rasjnr2tlv9y70sj0z0hwpgpxdc974wzg5umtx2pnc6z0p05u3js6r8sln
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBaNVExTHQrY2h1M3RZOEdU
Wm5kdDBHZ1NmZGpQU0VOYWtjOTdBVURQZkJnCmZzWTEvSWxvMFk4NlFOVnBDbm9q
Tncva0VyMGVDL29ZZ0YxeGE3RFBUS3cKLS0tIDluK05NMUNNM3pEUmlCNE9BV3lT
L0dPYTBwbjJzUmJnYktiM2JBME5LM3cKvPwth4DxQgFYhvr9vJLfeaiNc+UfAo4c
RdXPLkwtq3vksrU1IR54tHcUJ0yZiZ1HxxGp3PCPaXXJiUykllnJPw==
-----END AGE ENCRYPTED FILE-----
- recipient: age1epz92k2rkp43hkrg3u0jgkzhnkwx8y43kag7rvfzwl9wcddelvusyetxl7
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBRdFV0OVA5VTBuOVhsL3lp
MFpDN2RHVDExck5vcWpDNDNPM2k3S1FqQUFFCjNreXdSbDFXOHJ4b21mNGlZb0xQ
YUh0WVNGN2o1aFVaaGpxbmk4aUQ4ZTAKLS0tIHhtci84Zk1zZlBOOHk4a3VKUlM1
MXNZbWdpVEJiTTlIRERLYzBlNWxBMlUK4Z8JLlN5FOegfdg5njhHjbCwAm/f+kJS
buOHGWzWirW0ZibOP+fikzJwdIzIsX8v8tGaV89nQwf0hrxK0748Hg==
-----END AGE ENCRYPTED FILE-----
- recipient: age12nz7dtc0m5wasxm4r9crtkgwnzvauyfp0xh0n8z8jld0arn9ea9qe0agvn
enc: |
-----BEGIN AGE ENCRYPTED FILE-----
YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBZa2pUZlZsVmhPU242d0Nj
bi9BSlJBU1Q1cFU4ZjA4NnlJNmdwaVFBc2xNCjJlSG5UaDFnSzFHZ01RVVNjOHY5
L1JVUit6SThvbGRIU0loNmtZanllNXcKLS0tIFhMR1pxRmlGQWFEQURiRFJoMWJZ
dlExV2xTVWR6bWI3VCtSdU81SmtqYncKLFQczlIj89vzlfgE33w6ktotYFdxaWr9
YyewbY8qZmOUGQ4xKlZmhojeMh/FEH8dGNEf1AxnKbuQdnW6lqGR/w==
-----END AGE ENCRYPTED FILE-----
lastmodified: "2024-03-31T16:01:00Z"
mac: ENC[AES256_GCM,data:AawTzIXyX+3FyFpw8pXFeVJJtXN7ZpTFnUqhedC2vcbbNUzMMt1X0SaxtNNJ5chZI/tYHn59FT6zznl1eO4Xn29Zc2Up4dkT1BE4yqkEG0hiCFXrXMz/PaHfROzBhIWCVyF4fYj6MZKg1iBBxhWRqhJlQ1q4UVkoaITRUKpFJgs=,iv:3lTPOQ8VjmP3WNGbFK2yLU4Ks1KviNS/l7TH4SnvSUs=,tag:KUbAU6+76/Uxj2Wn9EnqnA==,type:str]
pgp: []
unencrypted_suffix: _unencrypted
version: 3.8.1

View file

@ -1,36 +0,0 @@
{
imports = [
./hardware-configuration.nix
../../modules/mailman.nix
../../roles
../../roles/vm.nix
../../modules/vmNetwork.nix
];
# System configuration here
services.mathebau-mailman = {
enable = true;
hostName = "lists.mathebau.de";
siteOwner = "root@mathebau.de";
};
networking.hostName = "lobon";
vmNetwork.ipv4 = "192.168.0.22";
system.stateVersion = "23.11";
sops.secrets = {
allowlistPass = {
sopsFile = ./allowlistPass.yaml;
owner = "mailman";
group = "mailman";
mode = "0400";
};
backupKey = {
sopsFile = ./backupKey.yaml;
owner = "root";
group = "root";
mode = "0400";
};
};
}

View file

@ -1,30 +0,0 @@
{
lib,
pkgs,
...
}: {
imports = [];
fileSystems."/" = {
device = "root";
fsType = "tmpfs";
options = ["size=1G" "mode=755"];
};
fileSystems."/persist" = {
device = "/dev/disk/by-label/nixos";
fsType = "btrfs";
options = ["subvol=persist"];
neededForBoot = true;
};
fileSystems."/boot" = {
device = "/dev/disk/by-label/boot";
fsType = "ext4";
};
fileSystems."/nix" = {
device = "/dev/disk/by-label/nixos";
fsType = "btrfs";
options = ["subvol=nix"];
};
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
}

View file

@ -0,0 +1,16 @@
flake-inputs:
{config, pkgs, lib, ... }: {
imports = [
./hardware-configuration.nix
(import ./mail.nix flake-inputs)
../../roles
../../roles/xen_guest.nix
./network.nix
];
# System configuration here
networking.hostName = "nyarlathotep";
system.stateVersion = "23.11";
}

View file

@ -0,0 +1,35 @@
{config, lib, pkgs, modulesPath, ...}: {
imports = [ ];
fileSystems."/" = {
device = "nya-root";
fsType = "tmpfs";
options = [ "size=1G" "mode=755" ];
};
fileSystems."/persist" = {
device = "/dev/disk/by-uuid/a72da670-f631-49b1-bcb3-6d378cc1f2d0";
fsType = "btrfs";
options = [ "subvol=persist" ];
neededForBoot = true;
};
fileSystems."/boot" = {
device = "/dev/disk/by-uuid/75b01f48-e159-4d72-b049-54b7af072076";
fsType = "ext4";
};
fileSystems."/nix" = {
device = "/dev/disk/by-uuid/a72da670-f631-49b1-bcb3-6d378cc1f2d0";
fsType = "btrfs";
options = [ "subvol=nix" ];
};
fileSystems."/var/vmail" = {
device = "/dev/disk/by-uuid/23c44c93-5035-4e29-9e46-75c1c08f4cea";
fsType = "ext4";
};
swapDevices =
[{ device = "/dev/disk/by-uuid/8bc30d17-3c08-4648-ab18-8c723523be1a"; }];
nix.settings.max-jobs = lib.mkDefault 4;
nixpkgs.hostPlatform = lib.mkDefault "x86_64-linux";
}

View file

@ -0,0 +1,47 @@
flake-inputs:
{pkgs, config, lib, ...}: {
imports = [flake-inputs.nixos-mailserver.nixosModule];
mailserver = {
enable = true;
debug = false; # TODO disable
fqdn = "mathebau.de";
sendingFqdn = "fb04184.mathematik.tu-darmstadt.de";
domains = [
"mathebau.de"
"lists.mathebau.de"
];
# forwards = #TODO
# loginAccounts = #TODO
# extraVirtualAliases = # TODO # only for local things (maybe don't use?)
certificateDomains = ["imap.mathebau.de"];
# certificateScheme = "manual"; # Do we need CERTS? We don't want to run a webmailer YES IMAP!!
# certificateFile = #TODO
# keyFile = #TODO
enableSubmission = false; # no starttls smtp
# Fun dovecot stuff :
mailDirectory = "/var/vmail/vmail/"; # directory to store mail it was /var/mail/vmail but
# /var/mail ist special
hierarchySeparator = "/"; # seperator for imap mailboxes from client view
# Caching of search indices
indexDir = "/var/vmail/lib/dovecot/indices";
fullTextSearch = {
enforced = "body"; # only brute force headers if no search index is available
};
lmtpSaveToDetailMailbox = "no";
# no starttls
enableImap = false;
# TODO checkout redis `config.services.redis.servers.rspamd.`
# TODO
# borgbackup = {
# };
};
}

View file

@ -0,0 +1,15 @@
# We should put that config somewhere in roles and give it a parameter or something;
# everyone gets the same nameserver, and the same prefixLength and address vs defaultGateway always
# depend on the same thing
{
imports = [ ];
networking = {
interfaces.enX0.ipv4.addresses = [ {
address = "192.168.0.28";
prefixLength = 16;
} ];
defaultGateway = "192.168.0.155";
nameservers = ["130.83.2.22" "130.83.56.60" "130.83.22.60" "130.82.22.63"];
};
}

View file

@ -1,172 +0,0 @@
{
config,
lib,
pkgs,
...
}: let
inherit
(lib)
mkIf
mkEnableOption
;
cfg = config.services.mathebau-borgbackup;
in {
imports = [];
options.services.mathebau-borgbackup = {
enable = mkEnableOption "mathebau borgbackup service";
};
config = mkIf cfg.enable {
services.borgbackup = {
# repos are made available at ssh://borg@hostname and served according to the presented ssh-key
# If you think about adding keys of nix machines:
# Congratulations, you are the first person to make backups from a nixos machine.
# You won the task of automating this endeavor, so in the future we don't need to hand-copy any
# ssh keys anymore.
repos = {
aphoom-zhah = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA8pI6uinXezAMH4vG2yEbu/yOYU5vXcsZN74tYgV+Wj Aphoom-Zhah Backup"
];
path = "/var/lib/backups/aphoom-zhah";
# subrepos are allowed because each vm creates at least one repo below this filepath and yibb-tstll even more
allowSubRepos = true;
};
azathoth = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGBEwllQ77ktoirXX6dJ6ET8TfK4lzq0aaq+X4rrX2Vk Azathoth Backup"
];
path = "/var/lib/backups/azathoth";
allowSubRepos = true;
};
cthulhu = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMSJl1MvabUADTdOCgufsBzn1tIIpxMq4iDcYZsaW1lV Cthulhu Backup"
];
path = "/var/lib/backups/cthulhu";
allowSubRepos = true;
};
dagon = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJaTBennwqT9eB43gVD1nM1os3dMPZ8RWwIKPEjqMK5V Dagon Backup"
];
path = "/var/lib/backups/dagon";
allowSubRepos = true;
};
eihort = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHLoDxtY4Tp6NKxLt9oHmWT6w4UpU6eA1TnPU2Ut83BN Eihort Backup"
];
path = "/var/lib/backups/eihort";
allowSubRepos = true;
};
fsaccount = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG+Y7fQTYdIWHehrKdk92CaJ0AisEux4OrS4nIyMstU4 FS Account Backup"
];
path = "/var/lib/backups/fsaccount";
allowSubRepos = true;
};
hastur = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeDvTyOUdIPARatX0PPhHgrV1gjERWLt2Twa8E2GETb Hastur Backupsystem"
];
path = "/var/lib/backups/hastur";
allowSubRepos = true;
};
ithaqua = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPJmBf8cz3FTDdeuxWbp1MO2yPT5rvH8ZIGUzfogjpXi Ithaqua Backup"
];
path = "/var/lib/backups/ithaqua";
allowSubRepos = true;
};
lobon = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICEptjf1UWRlo6DG9alAIRwkSDUAVHwDKkHC6/DeYKzi Lobon Backup"
];
path = "/var/lib/backups/lobon";
allowSubRepos = true;
};
sanctamariamaterdei = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH9Le5OI4ympQ0mQKYHmxgxGF598rzpD5VVpWK1mGfd8 Sanctamariamaterdei Backupsystem"
];
path = "/var/lib/backups/sanctamariamaterdei";
allowSubRepos = true;
};
tsathoggua = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKS9/1lFOhv+3sNuGcysM3TYh2xRrjMeAZX3K7CBx0QW Tsathoggua Backup"
];
path = "/var/lib/backups/tsathoggua";
allowSubRepos = true;
};
uvhash = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB8DjIqgFgmYhQnTLpbqL0r7xBPb8TPy6SO5RhQ31OGj Uvhash Backup"
];
path = "/var/lib/backups/uvhash";
allowSubRepos = true;
};
yibb-tstll = {
authorizedKeysAppendOnly = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINlnGOV58Ks9lu+WTI4F7QAHtDrJq2jY8ZocITZG8K0+ Yibb-Tstll Backup"
];
path = "/var/lib/backups/yibb-tstll";
allowSubRepos = true;
};
};
# Configure backup of files on the department's fs account:
# This job first copies the files to the local account 'fsaccount' in tmpfs
# and then takes a regular backup of the mirrored folder.
# See also https://borgbackup.readthedocs.io/en/stable/deployment/pull-backup.html
# which does not work due to missing permissions.
jobs.fsaccount = {
preHook = ''
mkdir -p /home/fsaccount/sicherung # Create if it does not exist
${pkgs.rsync}/bin/rsync --rsh='ssh -i /run/secrets/backupKey' --recursive --delete fachschaft@gw1.mathematik.tu-darmstadt.de:/home/fachschaft/* /home/fsaccount/sicherung
'';
paths = "/home/fsaccount/sicherung";
encryption.mode = "none"; # Otherwise the key is next to the backup or we have human interaction.
environment = {
BORG_RSH = "ssh -i /run/secrets/backupKey";
# “Borg ensures that backups are not created on random drives that just happen to contain a Borg repository.”
# https://borgbackup.readthedocs.io/en/stable/deployment/automated-local.html
# We don't want this in order to not need to persist borg cache and simplify new deployments.
BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK = "yes";
};
repo = "borg@localhost:fsaccount";
startAt = "daily";
user = "fsaccount";
group = "users";
readWritePaths = ["/home/fsaccount"];
};
};
# Extra user for FS account backup
users.users = {
fsaccount = {
description = "FS Account backup";
isSystemUser = true;
home = "/home/fsaccount";
createHome = true;
group = "users";
};
};
environment.persistence.${config.impermanence.name} = {
users.fsaccount.files = [
{
file = ".ssh/known_hosts";
parentDirectory = {
mode = "u=rwx,g=,o=";
user = "fsaccount";
group = "users";
};
}
];
};
};
}

View file

@ -1,17 +1,16 @@
{
lib,
config,
...
}: let
inherit
(lib)
{lib, config, ...} :
let
inherit (lib)
mkEnableOption
mkIf
mkOption
types
;
cfg = config.impermanence;
in {
in
{
imports = [ ];
options.impermanence = {
@ -44,4 +43,5 @@ in {
};
environment.etc.machine-id.source = "${cfg.storagePath}/machine-id";
};
}

View file

@ -1,61 +0,0 @@
{
config,
lib,
modulesPath,
...
}: let
inherit
(lib)
mkIf
mkEnableOption
mkOption
head
;
inherit (lib.types) str;
cfg = config.services.mathebau-jitsi;
in {
imports = [(modulesPath + "/services/web-apps/jitsi-meet.nix")];
options.services.mathebau-jitsi = {
enable = mkEnableOption "mathebau jitsi service";
hostName = mkOption {
type = str;
};
localAddress = mkOption {
type = str;
default = (head config.networking.interfaces.enX0.ipv4.addresses).address;
};
};
config = mkIf cfg.enable {
services = {
jitsi-meet = {
enable = true;
config = {
defaultLang = "de";
};
inherit (cfg) hostName;
};
jitsi-videobridge = {
openFirewall = true;
nat = {
publicAddress = "130.83.2.184";
inherit (cfg) localAddress;
};
};
#We are behind a reverse proxy that handles TLS
nginx.virtualHosts."${cfg.hostName}" = {
enableACME = false;
forceSSL = false;
};
};
environment.persistence.${config.impermanence.name} = {
directories = [
"/var/lib/jitsi-meet"
"/var/lib/prosody"
];
};
#The network ports for HTTP(S) are not opened automatically
networking.firewall.allowedTCPPorts = [80 443];
};
}

View file

@ -1,128 +0,0 @@
# Adapted and simplified from https://nixos.wiki/wiki/Mailman
{
config,
lib,
pkgs,
...
}: let
inherit
(lib)
mkIf
mkEnableOption
mkOption
;
inherit (lib.types) nonEmptyStr;
cfg = config.services.mathebau-mailman;
in {
options.services.mathebau-mailman = {
enable = mkEnableOption "mathebau mailman service";
hostName = mkOption {
type = nonEmptyStr;
};
siteOwner = mkOption {
type = nonEmptyStr;
};
};
config = mkIf cfg.enable {
services = {
postfix = {
enable = true;
relayDomains = ["hash:/var/lib/mailman/data/postfix_domains"];
config = {
transport_maps = ["hash:/var/lib/mailman/data/postfix_lmtp"];
local_recipient_maps = ["hash:/var/lib/mailman/data/postfix_lmtp"];
proxy_interfaces = "130.83.2.184";
smtputf8_enable = "no"; # HRZ does not know SMTPUTF8
};
relayHost = "192.168.0.24"; # Relay to eihort which relays to HRZ (see https://www.hrz.tu-darmstadt.de/services/it_services/email_infrastruktur/index.de.jsp)
};
mailman = {
enable = true;
inherit (cfg) siteOwner;
hyperkitty.enable = true;
webHosts = [cfg.hostName];
serve.enable = true; #
# Don't include confirmation tokens in reply addresses, because we would need to send them to HRZ otherwise.
settings.mta.verp_confirmations = "no";
};
};
environment.persistence.${config.impermanence.name} = {
directories = [
"/var/lib/mailman"
"/var/lib/mailman-web"
];
files = ["/root/.ssh/known_hosts"]; # for the backup server bragi
};
networking.firewall.allowedTCPPorts = [25 80];
# Update HRZ allowlist
# For account details see https://www-cgi.hrz.tu-darmstadt.de/mail/
# It will stop working if no valid TUIDs are associated with our domain.
systemd.timers."mailAllowlist" = {
wantedBy = ["timers.target"];
timerConfig = {
OnBootSec = "5m"; # Run every 5 minutes
OnUnitActiveSec = "5m";
RandomizedDelaySec = "2m"; # prevent overload on regular intervals
Unit = "mailAllowlist.service";
};
};
systemd.services."mailAllowlist" = {
description = "Allowlist update: Post the mail addresses used by mailman to the HRZ allowllist";
script = ''
# Get the mail addresses' local-part
cut -d '@' -f 1 /var/lib/mailman/data/postfix_lmtp | grep -v '#' | grep "\S" > /tmp/addresses
# Post local-parts to HRZ
${pkgs.curl}/bin/curl https://www-cgi.hrz.tu-darmstadt.de/mail/whitelist-update.php -F emaildomain=${cfg.hostName} -F password=$(cat /run/secrets/allowlistPass) -F emailliste=@/tmp/addresses -F meldungen=voll
# Cleanup
rm /tmp/addresses
'';
serviceConfig = {
Type = "oneshot";
User = "mailman";
NoNewPrivileges = true;
# See https://www.man7.org/linux/man-pages/man5/systemd.exec.5.html
PrivateTmp = true;
ProtectHome = true;
ReadOnlyPaths = "/";
ReadWritePaths = "/tmp";
InaccessiblePaths = "-/lost+found";
PrivateDevices = true;
PrivateUsers = true;
ProtectHostname = true;
ProtectClock = true;
ProtectKernelTunables = true;
ProtectKernelModules = true;
ProtectKernelLogs = true;
ProtectControlGroups = true;
LockPersonality = true;
MemoryDenyWriteExecute = true;
RestrictRealtime = true;
RestrictSUIDSGID = true;
};
};
# Backups
services.borgbackup.jobs.mailman = {
paths = [
"/var/lib/mailman/data"
"/var/lib/mailman-web"
];
encryption.mode = "none"; # Otherwise the key is next to the backup or we have human interaction.
environment = {
BORG_RSH = "ssh -i /run/secrets/backupKey";
# “Borg ensures that backups are not created on random drives that just happen to contain a Borg repository.”
# https://borgbackup.readthedocs.io/en/stable/deployment/automated-local.html
# We don't want this in order to not need to persist borg cache and simplify new deployments.
BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK = "yes";
};
repo = "borg@192.168.1.11:lobon"; # TODO for https://gitea.mathebau.de/Fachschaft/nixConfig/issues/33
startAt = "daily";
user = "root";
group = "root";
};
};
}

View file

@ -1,48 +0,0 @@
{
lib,
config,
...
}: let
inherit
(lib)
mkOption
types
last
init
;
inherit
(lib.strings)
splitString
concatStringsSep
toInt
;
cfg = config.vmNetwork;
in {
imports = [];
options.vmNetwork = {
ipv4 = mkOption {
type = types.str;
description = "the ipv4 adress of this machine";
};
};
config = {
networking = {
interfaces.enX0.ipv4.addresses = [
{
address = cfg.ipv4;
prefixLength = 16;
}
];
defaultGateway = let
addr = splitString "." cfg.ipv4;
addrInit = init addr;
addrLastInt = builtins.toString (toInt (last addr) + 127);
in
concatStringsSep "." (addrInit ++ [addrLastInt]);
# https://www.hrz.tu-darmstadt.de/services/it_services/nameserver_dns/index.de.jsp
nameservers = ["130.83.22.63" "130.83.22.60" "130.83.56.60"];
};
};
}

View file

@ -1,24 +1,19 @@
{lib, ...} :
with lib; let
with lib;
let
admins = {
nerf = {
hashedPassword = "$y$j9T$SJcjUIcs3JYuM5oyxfEQa/$tUBQT07FK4cb9xm.A6ZKVnFIPNOYMOKC6Dt6hadCuJ7";
hashedPassword =
"$y$j9T$SJcjUIcs3JYuM5oyxfEQa/$tUBQT07FK4cb9xm.A6ZKVnFIPNOYMOKC6Dt6hadCuJ7";
keys = [
"ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEdA4LpEGUUmN8esFyrNZXFb2GiBID9/S6zzhcnofQuP nerf@nerflap2"
];
};
gonne = {
hashedPassword = "$6$EtGpHEcFkOi0yUWp$slXf0CvIUrhdqaoCrQ5YwtYu2IVuE1RGGst4fnDPRLWVm.lYx0ruvSAF2/vw/sLbW37ORJjlb0NHQ.kSG7cVY/";
keys = [
"sk-ssh-ed25519@openssh.com AAAAGnNrLXNzaC1lZDI1NTE5QG9wZW5zc2guY29tAAAAIAhwkSDISCWLN2GhHfxdZsVkK4J7JoEcPwtNbAesb+BZAAAABHNzaDo= Gonne"
];
};
};
mkAdmin = name: {
hashedPassword,
keys,
}: {
mkAdmin = name :
{hashedPassword, keys}: {
"${name}" = {
isNormalUser = true;
createHome = true;
@ -29,6 +24,7 @@ with lib; let
inherit hashedPassword;
};
};
in {
users.users = mkMerge (mapAttrsToList mkAdmin admins);
}

View file

@ -1,15 +1,10 @@
{
pkgs,
lib,
...
}: {
{pkgs, config, lib, ...} : {
imports = [
./admins.nix
./nix_keys.nix
./prometheusNodeExporter.nix
../modules/impermanence.nix
];
nix = {
extraOptions = ''
experimental-features = nix-command flakes
@ -18,8 +13,7 @@
};
networking = {
firewall = {
# these should be default, but better make sure!
firewall = { # these should be default, but better make sure!
enable = true;
allowPing = true;
};
@ -39,13 +33,8 @@
environment = {
systemPackages = builtins.attrValues {
inherit
(pkgs)
htop
lsof
tmux
btop
;
inherit (pkgs)
htop lsof tmux btop;
};
};
@ -65,7 +54,5 @@
PasswordAuthentication = false;
};
};
#Prevent clock drift due to interaction problem with xen hardware clock
timesyncd.enable = lib.mkForce true;
};
}

View file

@ -1,6 +0,0 @@
{
# Use grub as bootloader.
# Systemd-boot does not support our legacy BIOS hardware,
# but only runs on UEFI systems.
boot.loader.grub.enable = true;
}

View file

@ -2,6 +2,5 @@
imports = [ ];
nix.settings.trusted-public-keys = [
"nerflap2-1:pDZCg0oo9PxNQxwVSQSvycw7WXTl53PGvVeZWvxuqJc="
"gonne.mathebau.de-1:FsXFyFiBFE/JxC9MCkt/WuiXjx5dkRI9RXj0FxOQrV0="
];
}

View file

@ -1,39 +0,0 @@
{config, ...}: {
imports = [];
services.prometheus.exporters.node = {
enable = true;
port = 9100;
# Aligned with https://git.rwth-aachen.de/fsdmath/server/prometheus/-/blob/main/node_exporter/etc/default/prometheus-node-exporter
# It was compiled along the following steps:
# 1. Does the current Debian release support the collector?
# 2. Is the collector deprecated in the latest release?
# 3. Could you probably use the collected metrics for monitoring or are they useless because they make no sense in our context
# (e.g. power adapter inside a VM, use fibre port connection)?
disabledCollectors = [
"arp"
"bcache"
"btrfs"
"dmi"
"fibrechannel"
"infiniband"
"nvme"
"powersupplyclass"
"rapl"
"selinux"
"tapestats"
"thermal_zone"
"udp_queues"
"xfs"
"zfs"
];
enabledCollectors = [
"buddyinfo"
"ksmd"
"logind"
"mountstats"
"processes"
];
};
networking.firewall.allowedTCPPorts = [9100];
environment.persistence.${config.impermanence.name}.directories = ["/var/lib/${config.services.prometheus.stateDir}"];
}

View file

@ -1,5 +0,0 @@
{modulesPath, ...}: {
imports = [
(modulesPath + "/virtualisation/xen-domU.nix")
];
}

16
nixos/roles/xen_guest.nix Normal file
View file

@ -0,0 +1,16 @@
{...}: {
imports = [ ];
boot = {
loader.grub = {
device = "nodev";
enable = true;
};
initrd = {
availableKernelModules = [ "ata_piix" "sr_mod" "xen_blkfront" ];
kernelModules = [ ];
};
extraModulePackages = [ ];
tmp.useTmpfs = true;
};
}