Enable prometheus node exporter by default #19
No description provided.
Before merging, we should look through the list of exporters and choose the ones we like. CC @dsimon
Changed title from Enable prometheus node exporter by default to WIP: Enable prometheus node exporter by default
b275efa6df to 559c5a47ad
If we want to start a discussion, we should open an issue.
Solved by offline discussion.
1b1bf736db to 1de0d32860
Changed title from WIP: Enable prometheus node exporter by default to Enable prometheus node exporter by default
I don't know much about Prometheus and which options should be set and which not. I would love it if @dsimon could leave a comment here about that.
@@ -56,3 +57,3 @@
};
};
};
};
This is the closing bracket of the services record; there should be less whitespace here, right?
@@ -0,0 +1,36 @@
{
At least there should be persistence for /var/lib/${config.services.prometheus.stateDir} (maybe the config name is slightly wrong; better look it up).
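For illustration, a minimal sketch of what such a persistence entry could look like, assuming an impermanence-style persistence list (the option path and the "/persist" mount point are only placeholders; the actual mechanism in this repo may differ):

```nix
{ config, ... }:
{
  # Sketch only: persist the Prometheus data directory across reboots.
  # services.prometheus.stateDir is relative to /var/lib, hence the interpolation.
  environment.persistence."/persist".directories = [
    "/var/lib/${config.services.prometheus.stateDir}"
  ];
}
```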
@@ -0,0 +5,4 @@
port = 9100;
# Aligned with https://git.rwth-aachen.de/fsdmath/server/prometheus/-/blob/main/node_exporter/etc/default/prometheus-node-exporter
# Original reasons for these lists are unknown, but along the lines of
# “This looks useless for VMs, but that seems nice.”
Maybe we should rethink this list, even if we end up with the same list but with a less shitty justification.
Initially the list was created with the following three aspects in mind:
1. Because the scraped metrics are not saved on the monitored host itself but on the monitoring host (Cthugha), and only for a limited time span, space allocation should not be considered a problem. You could restrict the collectors to only the ones actually used, but rolling out those configurations on all hosts (especially those running without Nix) is a pain, and it would be necessary every time you want to use a currently unexported metric. Since the export itself doesn't do any harm, I think the current set of exporters is fine.
Does Cthugha need support for them? Because if not, 1.) is not the right constraint for Nix machines.
Also, if that is the real justification, that should be what the comment says, not that they kinda looked nice.
It depends on what you mean by "support for them":
Because
The justification in the comment could actually be better, but it is not as wrong as your answer could suggest. The initial selection of collectors is somewhat arbitrary. In fact I'm not using all exported metrics for alerting, but only a (small to medium sized) subset of the available ones. Some metrics are actively used in alerting (e.g. collector.systemd, collector.time and collector.textfile), while others could be used in a nice way in future setups (e.g. collector.mdadm, collector.cpufreq and collector.nfsd). The latter examples enable the export of metrics about software RAIDs (mdadm) and NFS servers (nfsd), which are definitely not used at the moment in our infrastructure (at least to my knowledge), but could probably be used in future use cases. Explicitly disabled are only exporters which are not useful at all: because of the VM environment, because of the constant resolution of ARP information within our internal network (monitoring ARP tables in a constant server network doesn't make any sense to me), because of the absence of required hardware, or because of performance problems (udp_queue).

I would suggest keeping the list roughly symmetric, but adding the new collectors not available in Debian Bullseye. Shrinking the list of available metrics (in comparison to other hosts) could lead to unexpected results, like alerts not firing for a specific host because the used metrics from that host were not available (absence of metrics -> empty dataset for that host -> no data to compare the rule against -> no alert firing in case of a problem).
With “support for them” I meant whether this only applies to the host with the exporter, or also to Cthugha (which is a Debian machine, right?).
So you are saying it is advisable to have the same list of exporters on all machines rather than enabling exporters selectively?
Side note: The list here currently disables collectors explicitly enabled in [1], like nfs. Because NFS is not used at the moment (neither the protocol nor the exported metric), this should not be a problem. But at least the new list is not a super-set of the original list.

[1] https://git.rwth-aachen.de/fsdmath/server/prometheus/-/blob/main/node_exporter/etc/default/prometheus-node-exporter
@@ -39,6 +39,7 @@ config = mkIf cfg.enable {
"/etc/ssh/ssh_host_ed25519_key.pub"
"/etc/ssh/ssh_host_rsa_key"
"/etc/ssh/ssh_host_rsa_key.pub"
"/var/lib/${config.services.prometheus.stateDir}"
I think that should go together with the prometheus config and not here, because if we were to comment out the prometheus config, we would also want this to no longer be persistent. So I think it should be reachable from the same import.
Maybe there are better reasons to do it like this; if so, please convince me.
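A sketch of what that co-location could look like, assuming an impermanence-style persistence option (the option path and the "/persist" mount point are placeholders for whatever mechanism this repo actually uses):

```nix
{ config, ... }:
{
  # Sketch: the service config and its persistence entry live in one module,
  # so commenting out this import removes both together.
  services.prometheus.enable = true;
  environment.persistence."/persist".directories = [
    "/var/lib/${config.services.prometheus.stateDir}"
  ];
}
```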
a140b4d9ec to 3da04e80ae
3da04e80ae to 2c2b24d0a9
Fixed.