Enable prometheus node exporter by default #19

Merged
nerf merged 5 commits from Gonne/nixConfig:prometheusNodeExporter into main 2023-11-06 14:30:33 +00:00
2 changed files with 41 additions and 0 deletions

View file

@@ -3,6 +3,7 @@
imports = [
  ./admins.nix
  ./nix_keys.nix
  ./prometheusNodeExporter.nix
  (modulesPath + "/virtualisation/xen-domU.nix")
  ../modules/impermanence.nix
];

View file

@@ -0,0 +1,40 @@
{config, ...}:
Gonne marked this conversation as resolved

At least there should be persistence for `/var/lib/${config.services.prometheus.stateDir}` (maybe the config option name is slightly wrong; better look it up).
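For reference, a persistence entry of that shape under the repo's impermanence module would look roughly like the sketch below (it is also the line the module below ends up with); whether `services.prometheus.stateDir` is the right option to interpolate is exactly the part worth double-checking.

```nix
# Sketch only: persist the Prometheus state directory across reboots via the
# impermanence module used in this repo. Verify that services.prometheus.stateDir
# is really the option that names this directory.
environment.persistence.${config.impermanence.name}.directories = [
  "/var/lib/${config.services.prometheus.stateDir}"
];
```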
{
  imports = [ ];
  services.prometheus.exporters.node = {
    enable = true;
    port = 9100;
    # Aligned with https://git.rwth-aachen.de/fsdmath/server/prometheus/-/blob/main/node_exporter/etc/default/prometheus-node-exporter
    # The list was compiled with the following criteria in mind:
Maybe we should rethink this list. Even if we end up with the same list, it deserves a less shitty justification.

Initially the list was created with the following three aspects in mind:

1. Does the current Debian release support the collector (because some weren't supported)?
2. Is the collector deprecated in the latest release?
3. Can the collected metrics plausibly be used for monitoring in a useful manner, or are they useless because they make no sense in our context (e.g. a power adapter inside a VM, or fibre channel ports)?

Because the scraped metrics are not saved on the monitored host itself but on the monitoring host (Cthugha), and only for a limited time span, space allocation should not be a problem. You could restrict the collectors to only the ones actually used, but rolling out those configurations on all hosts (especially those running without Nix) is a pain (and would be necessary every time you want to use a currently unexported metric), and the export itself does no harm, so I think the current set of exporters is fine.
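As a rough illustration of why the collector lists matter on non-Nix hosts too: the NixOS exporter module turns these lists into node_exporter's `--collector.<name>` and `--no-collector.<name>` flags, which is what the Debian defaults file linked in the comment sets by hand. A hedged sketch of that mapping (illustrative, not the actual module source):

```nix
# Illustrative only: how the enabled/disabled collector lists map onto
# node_exporter command-line flags. `cfg` stands for the exporter settings.
{ lib, cfg }:
lib.concatStringsSep " " (
  map (c: "--collector.${c}") cfg.enabledCollectors
  ++ map (c: "--no-collector.${c}") cfg.disabledCollectors
)
# e.g. enabledCollectors = [ "systemd" ] and disabledCollectors = [ "arp" ]
# yield "--collector.systemd --no-collector.arp".
```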
Does cthugha need support for them? Because if not, 1.) is not the right constraint for Nix machines.

Also, if that is the real justification, then that is what the comment should say, not that they kinda looked nice.

It depends on what you mean by "support for them":

1. If you mean support as in can handle the additional data (which I think is the intended meaning), then the answer is no, cthugha does not need additional support for those metrics and can handle them automatically.
2. If you mean support as in can use that new data automatically in a useful manner, then the answer is yes, cthugha needs to be explicitly configured. In that case, the Prometheus instance needs some new (alerting) rules which use those newly exported metrics.

The justification in the comment could indeed be better, but it is not as wrong as your answer might suggest. The initial selection of collectors is somewhat arbitrary. In fact I'm not using all exported metrics for alerting, but only a (small to medium sized) subset of the available ones. Some metrics are actively used in alerting (e.g. `collector.systemd`, `collector.time` and `collector.textfile`), while others could be used in a nice way in future setups (e.g. `collector.mdadm`, `collector.cpufreq` and `collector.nfsd`). The latter examples enable the export of metrics about software RAIDs (`mdadm`) and NFS servers (`nfsd`), which are definitely not used in our infrastructure at the moment (at least to my knowledge) but could well appear in a future use case. Explicitly disabled are only exporters which are not useful at all: because of the VM environment, the constant resolution of ARP information within our internal network (monitoring ARP tables in a constant server network doesn't make any sense to me), the absence of the required hardware, or because of performance problems (`udp_queues`).

I would suggest keeping the list roughly symmetric across hosts, but adding the new collectors not available in Debian Bullseye. Shrinking the list of available metrics (in comparison to other hosts) could lead to unexpected results like alerts not firing for a specific host because the metrics used from that host were not available (absence of metrics -> empty dataset for that host -> no data to compare the rule against -> no alert firing in case of a problem).
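To make the last point concrete, here is a hedged sketch of what such an alerting rule could look like if the Prometheus side were managed through NixOS (Cthugha actually runs Debian, so this is purely illustrative). If a host never exports `node_systemd_unit_state`, the expression below returns an empty result for that host and the alert can never fire there.

```nix
# Hypothetical alerting rule; the option and the metric exist, but the rule
# itself is only an illustration and not part of this PR.
services.prometheus.rules = [
  ''
    groups:
      - name: node
        rules:
          - alert: SystemdUnitFailed
            expr: node_systemd_unit_state{state="failed"} == 1
            for: 5m
            annotations:
              summary: "systemd unit {{ $labels.name }} failed on {{ $labels.instance }}"
  ''
];
```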
With "support for them" I meant whether this applies only to the host with the exporter or also to cthugha (which is a Debian machine, right?).

> Does the current Debian release support the collector (because some weren't supported)?

So you are saying it is advisable to have the same list of exporters on all machines rather than enabling exporters selectively?
    # 1. Does the current Debian release support the collector?
    # 2. Is the collector deprecated in the latest release?
    # 3. Can the collected metrics plausibly be used for monitoring, or are they
    #    useless because they make no sense in our context
    #    (e.g. a power adapter inside a VM, or fibre channel ports)?
    disabledCollectors = [
      "arp"
      "bcache"
      "btrfs"
      "dmi"
      "fibrechannel"
      "infiniband"
      "nvme"
      "powersupplyclass"
      "rapl"
      "selinux"
      "tapestats"
      "thermal_zone"
      "udp_queues"
      "xfs"
      "zfs"
    ];
    enabledCollectors = [
      "buddyinfo"
      "ksmd"
      "logind"
      "mountstats"
      "processes"
    ];
  };
  networking.firewall.allowedTCPPorts = [ 9100 ];
  environment.persistence.${config.impermanence.name}.directories = [ "/var/lib/${config.services.prometheus.stateDir}" ];
}
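A quick way to sanity-check the exporter settings is a NixOS VM test along the lines of the sketch below. It enables the exporter inline rather than importing ./prometheusNodeExporter.nix (which would also pull in the repo's impermanence options); the test name and the grep pattern are assumptions, not part of the PR.

```nix
# Hypothetical smoke test: boot a VM with the node exporter enabled and check
# that it answers on port 9100 with some metrics.
{ pkgs, ... }:
pkgs.nixosTest {
  name = "node-exporter-smoke";
  nodes.machine = { ... }: {
    services.prometheus.exporters.node = {
      enable = true;
      port = 9100;
    };
    networking.firewall.allowedTCPPorts = [ 9100 ];
  };
  testScript = ''
    machine.wait_for_unit("prometheus-node-exporter.service")
    machine.wait_for_open_port(9100)
    machine.succeed(
        "${pkgs.curl}/bin/curl -sf http://localhost:9100/metrics | grep node_exporter_build_info"
    )
  '';
}
```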