Enable prometheus node exporter by default #19
@@ -3,6 +3,7 @@
   imports = [
     ./admins.nix
     ./nix_keys.nix
+    ./prometheusNodeExporter.nix
     (modulesPath + "/virtualisation/xen-domU.nix")
     ../modules/impermanence.nix
   ];
@@ -55,12 +56,5 @@ services = {
       PasswordAuthentication = false;
     };
   };
 
-  # Prometheus Monitoring
-  prometheus.exporters.node = {
-    enable = true;
-    port = 9100;
-  };
-
-  networking.firewall.allowedTCPPorts = [ 9100 ];
 }

nixos/roles/prometheusNodeExporter.nix (new file)
@@ -0,0 +1,36 @@
+{
nerf commented:
At least there should be persistence for `/var/lib/${config.services.prometheus.stateDir}`
(maybe the config name is slightly wrong; better look it up).
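A minimal sketch of such a persistence entry, assuming the repo's `../modules/impermanence.nix` exposes the upstream impermanence `environment.persistence` interface and a `/persist` mount point (both are assumptions; the node exporter itself is stateless, so this only matters on the host that runs the Prometheus server):

```nix
{ config, ... }:
{
  # Sketch only: "/persist" and the impermanence interface are assumptions.
  # services.prometheus.stateDir is interpreted relative to /var/lib
  # (default "prometheus2").
  environment.persistence."/persist".directories = [
    "/var/lib/${config.services.prometheus.stateDir}"
  ];
}
```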
+  imports = [ ];
+
+  services.prometheus.exporters.node = {
+    enable = true;
+    port = 9100;
+
+    # Aligned with https://git.rwth-aachen.de/fsdmath/server/prometheus/-/blob/main/node_exporter/etc/default/prometheus-node-exporter
+    # The original reasons for these lists are unknown, but along the lines of
+    # “This looks useless for VMs, but that seems nice.”
nerf commented:
Maybe we should rethink this list. Even if we end up with the same list, it should have a less shitty justification.

dsimon commented:
Initially the list was created with the following three aspects in mind:
1. Does the current Debian release support the collector (because some weren't supported)?
2. Is the collector deprecated in the latest release?
3. Could the collected metrics plausibly be used for monitoring in a useful manner? Or are they useless because they make no sense in our context (e.g. a power adapter inside a VM, or a fibre port connection)?
Because the scraped metrics are not saved on the monitored host itself but on the monitoring host (Cthugha) for a limited time span, space allocation should not be considered a problem. You could restrict the collectors to only the ones actually used, but because rolling out those configurations on all hosts (especially those running without Nix) is a pain (and would be necessary every time you want to use a currently unexported metric), and the export itself does no harm, I think the current set of collectors is fine.

nerf commented:
Does Cthugha need support for them? Because if not, 1.) is not the right constraint for Nix machines. Also, if that is the real justification, the comment should say so, and not that they kinda looked nice.

dsimon commented:
It depends on what you mean by "support for them":
1. If you mean support as in "can handle the additional data" (which I think is the intended meaning), then no: Cthugha does not need additional support for those metrics and handles them automatically.
2. If you mean support as in "can use that new data automatically in a useful manner", then yes: Cthugha needs to be explicitly configured. In that case the Prometheus instance needs some new (alerting) rules which use the newly exported metrics.
The justification in the comment could indeed be better, but it is not as wrong as your answer might suggest. The initial selection of collectors is somewhat arbitrary. In fact I'm not using all exported metrics for alerting, but only a (small to medium sized) subset of the available ones. Some metrics are actively used in alerting (e.g. `collector.systemd`, `collector.time` and `collector.textfile`), while others could be used in a nice way in future setups (e.g. `collector.mdadm`, `collector.cpufreq` and `collector.nfsd`). The latter examples enable the export of metrics about software RAIDs (`mdadm`) and NFS servers (`nfsd`), which are definitely not used in our infrastructure at the moment (at least to my knowledge), but probably could be in a future use case. Explicitly disabled are only collectors which are not useful at all: because of the VM environment, because of the constant resolution of ARP information within our internal network (monitoring ARP tables in a static server network doesn't make any sense to me), because of the absence of the required hardware, or because of performance problems (`udp_queues`).
I would suggest keeping the list roughly the same across hosts, but adding the new collectors not yet available in Debian Bullseye. Shrinking the list of available metrics (in comparison to other hosts) could lead to unexpected results, like alerts not firing for a specific host because the metrics used from that host were not available (absence of metrics -> empty dataset for that host -> no data to evaluate the rule against -> no alert firing in case of a problem).

nerf commented:
With "support for them" I meant whether this only applies to the host with the exporter, or also to Cthugha (which is a Debian machine, right?).
> Does the current Debian release support the collector (because some weren't supported)?
So you say it is advisable to have the same list of collectors on all machines, and not to enable collectors selectively?
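dsimon's point about absent metrics can be illustrated with a toy threshold rule; this is a hypothetical Python sketch, not Prometheus code, and the host names and values are made up:

```python
def firing(series, threshold):
    # A rule like `metric > threshold` only evaluates over hosts that
    # actually report the metric; a missing series produces no alert.
    return [host for host, value in series.items() if value > threshold]

# "hostC" exports no such metric, so it can never trigger this rule,
# even if it is the one actually in trouble.
series = {"hostA": 95.0, "hostB": 50.0}
alerts = firing(series, 90.0)  # ["hostA"]
```

This is why disabling a collector on one host while alerting rules assume it everywhere fails silently rather than loudly.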
+    disabledCollectors = [
+      "arp"
+      "bcache"
+      "btrfs"
+      "dmi"
+      "fibrechannel"
+      "infiniband"
+      "nfs"
+      "nvme"
+      "powersupplyclass"
+      "rapl"
+      "selinux"
+      "tapestats"
+      "thermal_zone"
+      "udp_queues"
+      "xfs"
+      "zfs"
+    ];
+
+    enabledCollectors = [
+      "buddyinfo"
+      "ksmd"
+      "logind"
+      "mountstats"
+      "processes"
+    ];
+  };
+
+  networking.firewall.allowedTCPPorts = [ 9100 ];
+}
Review comment: This is the closing bracket of the services record; there should be less whitespace, right?
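For reference, the NixOS module turns `enabledCollectors`/`disabledCollectors` into node_exporter's `--collector.<name>` and `--no-collector.<name>` command-line flags; a small Python sketch of that mapping (hypothetical helper, not code from nixpkgs):

```python
def collector_flags(enabled, disabled):
    # node_exporter enables and disables collectors via
    # --collector.<name> and --no-collector.<name> flags.
    overlap = set(enabled) & set(disabled)
    if overlap:
        raise ValueError(f"collectors both enabled and disabled: {overlap}")
    return ([f"--collector.{c}" for c in enabled]
            + [f"--no-collector.{c}" for c in disabled])

flags = collector_flags(["buddyinfo", "processes"], ["arp", "zfs"])
# ["--collector.buddyinfo", "--collector.processes",
#  "--no-collector.arp", "--no-collector.zfs"]
```

The explicit overlap check mirrors the fact that a collector name should appear in at most one of the two lists.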