Support meters with the same name but different set of tag keys in PrometheusMeterRegistry #877
Comments
This actually is enforced by the underlying Prometheus client that Micrometer is delegating to. It's possible for distinct processes to send metrics with different sets of tag keys, but Prometheus does not allow it from a single process. All Micrometer is doing is giving you a more readable error message about it before the Prometheus client blows up. |
So basically this is expected? Kafka metrics as configured by … |
@flozano Try the latest version if you haven’t done so yet, as there were updates to the Kafka binder recently. |
If the meter tags are different, could we …? Also, am I looking at the right Prometheus client implementation? |
Hi @jdbranham, have you been able to fix or work around this? I have a similar problem: if I annotate the controller method with just @Timed() with no parameters, there is no error. However, if I use @Timed with a name argument, like this:
... then it errors when someOperation() is called. Hypothesis: both Actuator and Prometheus are trying to use the annotation to create a metric (each with its own set of tags), but Prometheus requires that the name be unique. Stack trace:
I've also tried to upgrade to a newer version of micrometer (1.4.1), but the result is the same. |
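For reference, a minimal sketch of the kind of setup described above, assuming a Spring @RestController with Micrometer's @Timed annotation; the paths, method names, and the metric name here are illustrative:

import io.micrometer.core.annotation.Timed;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
class SomeController {

    // No name argument: the default request metric is used and no error occurs.
    @Timed
    @GetMapping("/plain")
    String plainOperation() {
        return "ok";
    }

    // Custom name: in the scenario above, calling this endpoint is what triggers
    // the "collector already registered" error.
    @Timed(value = "some.operation.timer")
    @GetMapping("/some-operation")
    String someOperation() {
        return "ok";
    }
}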
@flozano @izeye I think this issue still exists. I think this is because some metrics exist at multiple levels with different tags, for example … Here is the above metric in JConsole: Sometimes this causes an uncaught exception originating here (note: 1.5 branch), other times it is silent. Sometimes the metrics in question show up on my Prometheus endpoint, sometimes they don't. This is a bit vague as I don't fully understand why; I'm guessing the one that wins the above race shows up and the other doesn't, and the outcome may differ depending on the order in which metrics get registered. When I do see an exception, this is it:
I'm not sure what to do for now. As @jkschneider said, it's not really a Micrometer problem; it's more that the metrics are not really compatible with Prometheus, or perhaps they are not compatible when auto-registered as they are now. If anyone has ideas, let me know. I'm using this via Spring Kafka's … |
A bit more information: after I looked to tweak … If I remove … |
Okay, I am running into this issue as well. I copied and pasted the registry and modified the offending code so that it creates a new collector instead. I have written a functional test that proves that #877 (comment) is no longer true. I have also written an integration test that proves that Prometheus can handle scraping metrics that have the same name but varying labels. I feel like we could easily replace Lines 404 to 424 in c9ae058
with:

private MicrometerCollector collectorByName(Meter.Id id) {
    return collectorMap.compute(
            getConventionName(id),
            (name, existingCollector) -> {
                List<String> tagKeys = getConventionTags(id).stream().map(Tag::getKey).collect(toList());
                // Reuse the existing collector only if it was registered with the same tag keys.
                if (existingCollector != null && existingCollector.getTagKeys().equals(tagKeys)) {
                    return existingCollector;
                }
                // Otherwise create and register a fresh collector for this name.
                return new MicrometerCollector(id, config().namingConvention(), prometheusConfig)
                        .register(registry);
            });
}

And eliminate the source of this developer friction when using this registry. |
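For anyone wanting to reproduce the friction this change targets, here is a minimal sketch against the public Micrometer API (the metric name and tags are illustrative):

import io.micrometer.core.instrument.Counter;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

public class SameNameDifferentTagKeys {

    public static void main(String[] args) {
        PrometheusMeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);

        // First registration wins and defines the tag keys for "my.counter".
        Counter.builder("my.counter").tag("hi", "world").register(registry).increment(2);

        // Same name, different tag key: depending on the Micrometer version this either
        // throws an IllegalArgumentException or is silently dropped from the scrape.
        Counter.builder("my.counter").tag("hello", "world").register(registry).increment(3);

        System.out.println(registry.scrape());
    }
}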
Hi, I have encountered the issue because I add some tags (like the error name) based on the result of an operation.
I have checked the Prometheus documentation and there is no information about such a requirement - it looks like a metric name can be associated with a varying number of tags/labels:
Source: https://prometheus.io/docs/concepts/data_model/ It looks like Micrometer's integration is too strict. |
Thanks for checking the documentation! Unfortunately, the underlying Prometheus client isn't implemented to that specification. See https://github.com/prometheus/client_java/blob/master/simpleclient/src/main/java/io/prometheus/client/SimpleCollector.java#L63 So you could open a ticket with that repo to try to get it improved, or if you can think of a good way to work around that limitation that would also be welcome. |
@dnowak-wrldrmt How we dealt with this in, for example, Spring Web MVC metrics is always having an exception tag, even when its value is just a placeholder for the no-exception case.

I'm reopening this for some more consideration as this keeps coming up, and we've made recent changes to not throw an exception anymore by default in these cases. The current state is still that the scrape will not contain metrics with different labels, though. I'm reopening to see if we can make that happen somehow.

After some investigation into this, I will echo what @checketts said in #877 (comment): the Prometheus Java client does not seem to support this by design. That being said, it is indeed true that the Prometheus server can and will scrape metrics with different labels, with no problem as far as I can tell.

@fieldju thank you for the tests. That was useful to look at. The modifications, however, cause meters to end up as different metrics with the same name, which violates the Prometheus scrape format, even though the Prometheus server seems to have no problem scraping it. Specifically, your modifications result in a scrape like:
The above violates the following snippets from the Prometheus scrape format:
So my understanding is that the scrape should be like:
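For illustration only (a hand-written sketch, not actual client output, reusing the my_counter example from the pure-client test further down), that would mean a single HELP/TYPE group with every sample of that name beneath it, even though the label sets differ:

# HELP my_counter unhelpful string
# TYPE my_counter counter
my_counter{hi="world"} 2.0
my_counter{hello="world"} 3.0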
Trying to register things with the Prometheus client so that this happens results in the aforementioned exception. We need to find a way to properly register them, which may not currently exist. In that case, we would need to see if the Prometheus Java client is willing to make changes to support this, as suggested by @checketts. For example, here is a pure Prometheus Java client unit test that fails trying to do what this issue asks to support:

import java.io.IOException;
import java.io.StringWriter;

import org.junit.jupiter.api.Test;

import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.Counter;
import io.prometheus.client.exporter.common.TextFormat;

@Test
void promClientSameMetricNameDifferentLabels() throws IOException {
CollectorRegistry registry = new CollectorRegistry();
Counter counter = Counter.build().name("my_counter").labelNames("hi").help("unhelpful string")
.register(registry);
Counter counter1 = Counter.build().name("my_counter").labelNames("hello").help("unhelpful string")
.register(registry); // fails with "Collector already registered that provides name: my_counter"
counter.labels("world").inc(2);
counter1.labels("world").inc(3);
StringWriter writer = new StringWriter();
TextFormat.write004(writer, registry.metricFamilySamples());
System.out.println(writer.toString());
} |
I've opened prometheus/client_java#696 in the repository for the official Java client for Prometheus to get some input from the Prometheus experts on this. |
Just want to chime in that the result of effectively just "dropping" later attempts at creating any meter with the same name but different tags (when using PrometheusMeterRegistry), without any warning or anything in the logs, has been very frustrating, costing me several hours of debugging and then googling before finding these issues. The documentation says abundantly clearly that meters are unique as long as the combination of name and tags is different. It quickly became obvious that changing the name of the second set of meters with different tags "fixed" the problem, but this is not what the docs state. |
Note that the observed behavior is that when adding new meters with the same name but differing in the keys (number or name) of the tags, the later registrations are ignored. When only the values of the tags change, it works correctly. Therefore, a "fix" is to make both sets of meters (having the same name) also have the same set of tag keys, where the irrelevant tags of each subset are set to a fixed dummy value (or maybe even better, the empty string). |
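A minimal sketch of that workaround, assuming two call sites that want to share the same meter name (the meter name, tag keys, and values are illustrative):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

class RequestMetrics {

    private final MeterRegistry registry;

    RequestMetrics(MeterRegistry registry) {
        this.registry = registry;
    }

    // Both registrations use the same tag keys ("outcome" and "error"),
    // so the Prometheus registry treats them as one metric family.
    void recordSuccess() {
        Counter.builder("app.requests")
                .tag("outcome", "success")
                .tag("error", "")          // irrelevant here, padded with an empty string
                .register(registry)
                .increment();
    }

    void recordFailure(String errorName) {
        Counter.builder("app.requests")
                .tag("outcome", "failure")
                .tag("error", errorName)
                .register(registry)
                .increment();
    }
}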
This issue keeps coming back, not only in Kafka but, for example, in the Micrometer Apache client interceptor (async instrumentation) + executor (sync instrumentation): if you use both, you get slightly different sets of tags (the sync version includes "outcome", the async version does not), and your async metrics are gone. |
Thanks @stolsvik! With your input I at least have a workaround. |
Have you resolved this problem? |
How? By forcing all tags to be the same? |
In connection with PrometheusMeterRegistry and the new KafkaConsumerMetrics (thanks to @jkschneider) I have a problem that metrics won't register correctly because of PrometheusMeterRegistry complaining about metrics with the same name but a different set of tag keys. It claims that metrics of the same name are required in Prometheus to have the same set of tags.

I believe the error thrown at https://github.com/micrometer-metrics/micrometer/blob/master/implementations/micrometer-registry-prometheus/src/main/java/io/micrometer/prometheus/PrometheusMeterRegistry.java#L357 is wrong. Prometheus perfectly accepts metrics of the same name with disjunct sets of tags (the metric name itself is just a tag, __name__). We have metrics like up{application="xyz", job="abc"} and up{application="abc", job="123", host="prometheus"} coexisting in our production Prometheus instance without any issue whatsoever.

Example of exception text, stacktrace below:

Context of variables

Stacktrace: