High label cardinality issue #53
Comments
Unfortunately, the nature of this exporter is such that there will inevitably be a lot of series when you have a lot of projects. That said, thinking off the top of my head, there are a few things we might consider doing in the exporter, the first of which would be to only emit series with a non-zero value instead of initialising every possible label combination.

You may also want to consider dropping the metric, or a subset of its series, on the Prometheus side with `metric_relabel_configs`.
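For reference, dropping the metric at scrape time looks roughly like the following Prometheus configuration sketch; the job name, target, and port are assumptions, not taken from this thread:

```yaml
scrape_configs:
  - job_name: dependency-track-exporter          # job name is an assumption
    static_configs:
      - targets: ["dependency-track-exporter:9916"]  # target/port are assumptions
    metric_relabel_configs:
      # Drop the high-cardinality metric before it is ingested into the TSDB.
      # Tighten the regex to drop only a subset of series if preferred.
      - source_labels: [__name__]
        regex: dependency_track_project_policy_violations
        action: drop
```

Because `metric_relabel_configs` runs after the scrape but before ingestion, the exporter itself needs no changes for this approach.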
I like the first option. It is unlikely that all the projects will have a value other than 0 at the same time, so it would solve the high number of combinations. To give you more detail about my setup: I have 720 projects and 0 violations, so only exporting non-zero values would not create a single series, and therefore no issues with high cardinality. Regarding the caveats of this approach, we had a similar issue in the past and solved it with a PromQL expression in our alerting rules. It is only used when we want to alert whenever a new vulnerability is found (the same works for policy violations). PS: Unfortunately I cannot find the article where I got this solution, so I can't link it.
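The commenter's actual expression was not preserved in this copy of the thread, but one common PromQL pattern for "alert when a new series appears" (which fits the use case of alerting on a newly found vulnerability when zero-valued series are never exported) is an `unless ... offset` comparison. The alert name and the one-hour window below are illustrative assumptions:

```yaml
groups:
  - name: dependency-track
    rules:
      - alert: NewPolicyViolation   # alert name is illustrative
        # Fires for any violations series that exists now but did not exist
        # one hour ago, i.e. a series that has newly appeared.
        expr: |
          dependency_track_project_policy_violations
            unless
          dependency_track_project_policy_violations offset 1h
```

This is a sketch of one possible workaround, not necessarily the solution referenced above.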
Presumably you have 0 violations because you don't have any policies? If that's the case, then dropping the metric with `metric_relabel_configs` would work for you. It pains me a bit to implement the first option, simply because it goes against the typical best-practice advice, and it often results in missed alerts because people aren't aware of the issues with series that appear and disappear.
I see. Unfortunately, we don't "have access" to the Prometheus configuration, since it was designed not to allow exceptions like this one. That said, I understand your point and appreciate the quick support. Feel free to close the issue if you like.
What would be the impact, in light of this thread, if the project tags were added as a label to the other metrics? At the moment they are only available on the project info metric. We use the tags to understand which teams own which projects in Dependency-Track. We need to report on the security footprint of each team, so we need to be able to report on only a subset of projects, which is where tags come in.
You should be able to join the tags from the project info metric onto the other metrics with a PromQL vector match, rather than adding them as labels to every metric.
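Such a join can be sketched as follows; the info metric name (`dependency_track_project_info`) and the `uuid` and `tags` label names are assumptions based on the metrics discussed in this thread, and the `team-a` tag value is hypothetical:

```promql
# Attach the tags label from the info metric to each violations series,
# restricted to projects whose tags match a given team.
dependency_track_project_policy_violations
  * on (uuid) group_left (tags)
dependency_track_project_info{tags=~".*team-a.*"}
```

The `group_left (tags)` modifier copies the `tags` label from the (one) matching info series onto each violations series, so per-team filtering stays in the query rather than in the exporter.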
fix: high cardinality policy_violations metrics

The exporter initialises all possible labels of the dependency_track_project_policy_violations metric, which adds 72 series for each project. jetstack#53 has details of the rationale, but there are use cases where it is desirable to only have non-zero values of the metric available. This change adds a new argument that enables only non-zero value metrics to be returned. The flag defaults to the current behaviour, so it is an opt-in feature.

* docs: add flag to readme
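The filtering the PR describes can be sketched in Go as follows. The `series` type, field names, and flag name are hypothetical stand-ins for the exporter's internals, not its actual code:

```go
package main

import "fmt"

// series models one exported label combination (hypothetical shape).
type series struct {
	project, violationType string
	value                  float64
}

// filterNonZero drops zero-valued series when onlyNonZero is set,
// mirroring the opt-in flag: the default (false) keeps every
// combination, preserving the exporter's current behaviour.
func filterNonZero(all []series, onlyNonZero bool) []series {
	if !onlyNonZero {
		return all
	}
	out := make([]series, 0, len(all))
	for _, s := range all {
		if s.value != 0 {
			out = append(out, s) // keep only series that carry a signal
		}
	}
	return out
}

func main() {
	all := []series{
		{"app-a", "license", 0},
		{"app-a", "security", 2},
		{"app-b", "operational", 0},
	}
	fmt.Println(len(filterNonZero(all, false))) // 3
	fmt.Println(len(filterNonZero(all, true)))  // 1
}
```

With 720 projects and no violations, as in the setup described earlier in the thread, the opt-in path would emit zero series instead of 72 per project.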
After a successful deployment of dependency-track-exporter, I started receiving alerts in our infrastructure because the exporter generates labels with high cardinality. This is a known Prometheus performance issue, as described in the article "Cardinality is key" by Robust Perception.

After a deep investigation, I found that the offending metric is `dependency_track_project_policy_violations`, which has a `uuid` label that can explode the number of combinations. I would suggest dropping the `uuid` label, since it doesn't bring benefits in this case as we already have the project name. Unfortunately, I'm not a good Go developer, but I would be happy to help in any other way.