implement mvp for span derived primary tags #11358
Conversation
public List<String> getTraceStatsAdditionalTags() {
  return tryMakeImmutableList(configProvider.getList(TRACE_STATS_ADDITIONAL_TAGS));
}
Instead of a List, where something like region,tenant_id and tenant_id,region would be seen as different values and duplicates would be preserved, we could define this as a Set, assuming order and duplicates do not matter. For example:

public Set<String> getTraceStatsAdditionalTags() {
  return tryMakeImmutableSet(configProvider.getList(TRACE_STATS_ADDITIONAL_TAGS));
}

similar to https://github.com/DataDog/dd-trace-java/blob/master/internal-api/src/main/java/datadog/trace/api/Config.java#L5107. Claude recommended the following change as well to dd-trace-core/src/main/java/datadog/trace/common/metrics/ConflatingMetricsAggregator.java, so that additionalTagKeys stays iterable while guaranteeing no duplicates, no config-order sensitivity, and stable payload/key ordering:
List<String> orderedAdditionalTagKeys = new ArrayList<>(additionalTagKeys);
Collections.sort(orderedAdditionalTagKeys);
this.additionalTagKeys = Collections.unmodifiableList(orderedAdditionalTagKeys);
and corresponding tests that check that duplicate keys, or the same keys re-ordered, produce the expected result 🤔
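A minimal sketch of what those tests could look like, written as plain JUnit for illustration (dd-trace-java's suites are typically Groovy/Spock); the normalize helper below is a stand-in for the Config + aggregator plumbing, not the real code:

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.TreeSet;
import org.junit.jupiter.api.Test;

class AdditionalTagKeysTest {
  // Stand-in for the config -> ConflatingMetricsAggregator plumbing:
  // dedupe via a Set, then sort for a stable, config-order-insensitive result.
  private static List<String> normalize(List<String> raw) {
    return Collections.unmodifiableList(new ArrayList<>(new TreeSet<>(raw)));
  }

  @Test
  void duplicateKeysAreCollapsed() {
    assertEquals(
        Arrays.asList("region", "tenant_id"),
        normalize(Arrays.asList("region", "tenant_id", "region")));
  }

  @Test
  void keyOrderInConfigDoesNotMatter() {
    assertEquals(
        normalize(Arrays.asList("region", "tenant_id")),
        normalize(Arrays.asList("tenant_id", "region")));
  }
}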
final boolean hasGrpcStatusCode = key.getGrpcStatusCode() != null;
final int mapSize =
-    15
+    16
A future improvement could be to make this conditional, like hasServiceSource and hasHttpMethod below. Otherwise, most customers will always get an empty additional-tags entry and thus an unnecessarily large payload.
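For illustration, a rough sketch of that future shape, assuming a hasAdditionalTags flag and a getAdditionalTags() accessor on the key (both hypothetical names):

// Sketch: only reserve a map slot when the key actually carries additional tags,
// following the same conditional pattern as hasHttpMethod / hasGrpcStatusCode.
final boolean hasAdditionalTags = !key.getAdditionalTags().isEmpty(); // assumed accessor
final int mapSize = 15 + (hasAdditionalTags ? 1 : 0); // plus the other conditional slots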
    boolean includeEndpointInMetrics) {
  this.ignoredResources = ignoredResources;
  this.additionalTagKeys =
      additionalTagKeys == null ? Collections.emptyList() : additionalTagKeys;
We could also add a MAX_ADDITIONAL_TAG_KEYS value limiting the number of tag keys customers can set, to prevent "infinite" tag keys and possible exploitation (not sure exploitation is actually feasible here, but a limit plus a warning once you hit it seems safer).
We only allow 4 unique primary tag keys in the UI, so if a user sets more than that, they are aggregating their stats on an additional dimension only for that value to get discarded in our stats pipeline.
IIRC we do increase the number of keys a customer can set, but it's rare; a max value of 10 makes sense here. It protects against the extreme while leaving some room for customers with an increased key count.
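A sketch of what that cap could look like, under the assumed name MAX_ADDITIONAL_TAG_KEYS and building on the sorted-list suggestion above:

private static final int MAX_ADDITIONAL_TAG_KEYS = 10;

// Sketch: dedupe + sort for stability, then truncate with a warning.
List<String> orderedKeys = new ArrayList<>(new TreeSet<>(additionalTagKeys));
if (orderedKeys.size() > MAX_ADDITIONAL_TAG_KEYS) {
  log.warn(
      "{} additional stats tag keys configured; only the first {} (sorted) will be used",
      orderedKeys.size(),
      MAX_ADDITIONAL_TAG_KEYS);
  orderedKeys = new ArrayList<>(orderedKeys.subList(0, MAX_ADDITIONAL_TAG_KEYS));
}
this.additionalTagKeys = Collections.unmodifiableList(orderedKeys);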
dougqh left a comment:
Given that we've already had issues with cardinality in this code, for me a minimal viable solution needs to include a more robust answer to tag cardinality.
There is a size cap right now, but when we hit it, we silently drop stats, which leads to a bad user experience.
I think we need to sort those issues out; otherwise, adding more dimensions will just exacerbate the problem.
    ADDITIONAL_TAG_VALUES_CACHE = DDCaches.newFixedSizeCache(64);
private static final Function<
        String, Pair<DDCache<String, UTF8BytesString>, Function<String, UTF8BytesString>>>
    ADDITIONAL_TAG_VALUES_CACHE_ADDER =
In hindsight, I wish we hadn't chosen to concatenate key + ":" + value in the payload. For keeping memory down, separate fields would probably have been better.
  return Collections.emptyList();
}

private List<UTF8BytesString> getAdditionalTags(CoreSpan<?> span) {
We already have high allocation from creating MetricKey-s just to perform Map look-ups.
Producing an intermediate list with UTF8BytesStrings is going to compound that problem significantly.
I copied how we interact with peer tags, to keep behavior consistent. What do you propose I do differently?
key ->
    Pair.of(
        DDCaches.newFixedSizeCache(512),
        value -> UTF8BytesString.create(key + ":" + value));
I think, as is, this degenerates into hot allocation when a tag has high cardinality. In a high-cardinality situation, we need to avoid the concatenation before it happens.
Right now, we're still paying the allocation cost and undoing the benefit of the cache.
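One way to read that suggestion, sketched with assumed names (limiter, valueCache, adder): gate on a cheap cardinality check keyed by the raw value, so rejected values never reach the cache loader that builds key + ":" + value:

// Sketch: admit the raw value first; the concatenating loader only runs for
// admitted cache misses, so high-cardinality tags stop allocating once capped.
UTF8BytesString tagValue = null;
if (limiter.admit(value)) { // cheap bounded check, no string building
  tagValue = valueCache.computeIfAbsent(value, adder);
}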
Just to verify, I ran this through Claude as well...
Vulnerabilities / Misconfiguration risk
- No per-value cardinality cap (acknowledged in the PR description). The MAX_ADDITIONAL_TAG_KEYS = 10 cap protects against the number of configured keys; it does nothing about the number of unique values per key. A single misconfiguration — DD_TRACE_STATS_ADDITIONAL_TAGS=user_id (or request_id, or trace_id) — gives the customer an unbounded MetricKey set, unbounded keys-map growth, and an oversized payload to the agent. The PR explicitly defers cardinality protection ("requires an HLL/bounded-set estimator"), which is reasonable for an MVP — but in its current form this config is a foot-gun and should not be enabled by default, which it isn't. Two suggestions:
  - Add a log.warn (one-shot, at startup) when additionalTagKeys is non-empty, calling out the cardinality risk and pointing at the future DD_TRACE_STATS_ADDITIONAL_TAGS_CARDINALITY_LIMIT config.
  - Consider a stop-gap "max unique MetricKeys before we stop adding" guard rather than waiting for the full HLL implementation.
- Tag values are not length-bounded before being interned and serialized. UTF8BytesString.create(key + ":" + value) will faithfully serialize whatever the customer set on the span tag. If application code stuffs a JSON body or a stack trace into a tag named in DD_TRACE_STATS_ADDITIONAL_TAGS, the per-value cache entry, the MetricKey, and the msgpack payload all carry it. This is pre-existing for peer tags too, but is amplified here because the user controls which keys feed this path. A value.length() < N guard with a one-shot warn would be cheap insurance.
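Both suggestions are cheap to sketch; the names and the 200-char bound below are assumptions, not settled values:

private static final int MAX_TAG_VALUE_LENGTH = 200; // assumed bound

// One-shot startup warning when the feature is enabled at all.
if (!additionalTagKeys.isEmpty()) {
  log.warn(
      "Additional stats tags enabled for {}; high-cardinality values will multiply "
          + "the number of stats aggregates and the payload size",
      additionalTagKeys);
}

// Per-value length guard, applied before interning/serializing a value.
if (value.length() > MAX_TAG_VALUE_LENGTH) {
  return null; // skip oversized values (a one-shot warn could fire here too)
}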
Overhead
- Cache footprint scales with MAX_ADDITIONAL_TAG_KEYS × per-key cache size. ADDITIONAL_TAG_VALUES_CACHE is newFixedSizeCache(64) (plenty), but each entry holds an inner newFixedSizeCache(512). With 10 configured keys that's up to 10 × 512 = 5,120 cached UTF8BytesString entries plus the lambda closures, vs. the existing peer-tags cache of similar size. Acceptable, but worth a comment near ADDITIONAL_TAG_VALUES_CACHE explaining the bound (mirroring the existing comment on PEER_TAGS_CACHE).
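The suggested comment could look something like this (figures taken from the discussion above):

// Bound: the outer cache holds one entry per configured key (<= MAX_ADDITIONAL_TAG_KEYS,
// i.e. <= 10), so 64 slots is generous; each inner cache interns up to 512
// "key:value" UTF8BytesStrings, for a worst case of ~5,120 retained entries.
// Mirrors the sizing comment on PEER_TAGS_CACHE.
ADDITIONAL_TAG_VALUES_CACHE = DDCaches.newFixedSizeCache(64);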
Also worth noting that the cache is still unbounded in terms of bytes; that has bitten us with SQL statements previously.
And I guess part of what Claude is pointing out is that tags may also contain complicated objects; we might need to filter those out, otherwise allocation could explode if misconfigured. Although that was definitely a pre-existing issue.
Added basic cardinality control, and will add a char limit to the string values; would that address all your concerns here?
I'm experimenting with a revised approach.
I think the simplest solution might be to combine the cardinality tracking and the caching into a single class.
In hindsight, I'd argue that DDCache isn't a great fit for metrics. DDCache works well with low cardinality, but doesn't work well in situations where cardinality can be high.
In a strict sense, that's okay because the GC will be able to reclaim the memory, so memory isn't unbounded. However, a solution that deals with high cardinality more gracefully would be preferable.
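A rough sketch of that combined shape (all names assumed; the backing store here is a plain bounded map purely for illustration, and could be swapped for something lighter, as discussed below):

// One object per configured tag key that both interns "key:value" strings and
// enforces a value-cardinality cap, replacing a separate Map<String, Set<String>>.
final class TagValueInterner {
  private final String key;
  private final int limit;
  private final ConcurrentHashMap<String, UTF8BytesString> interned = new ConcurrentHashMap<>();

  TagValueInterner(String key, int limit) {
    this.key = key;
    this.limit = limit;
  }

  // Returns the interned "key:value", or null once the cap is hit
  // (best-effort under concurrency; a sketch, not a hardened implementation).
  UTF8BytesString intern(String value) {
    UTF8BytesString existing = interned.get(value);
    if (existing != null) {
      return existing; // seen before: no concatenation, no allocation
    }
    if (interned.size() >= limit) {
      return null; // capped: bail out before building the string
    }
    return interned.computeIfAbsent(value, v -> UTF8BytesString.create(key + ":" + v));
  }
}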
private final int limitPerTag;
private final HealthMetrics healthMetrics;
private final ConcurrentHashMap<String, Set<String>> seenValuesPerTag = new ConcurrentHashMap<>();
I think we're going to want a lighter-weight collection for this purpose; ConcurrentHashMap is very allocation-heavy.
As mentioned elsewhere, I think a better approach would be an AdditionalTagCardinalityLimiter per tag. Then we don't need the extra allocation heft of the ConcurrentHashMap.
I'm also debating whether we can do something cheaper for the Set, but I think we could leave that for later.
I'm also hoping to get away from the extra coordination overhead of ConcurrentHashMap by moving more of the metrics work onto a background thread.
What do you propose we use instead? An array of length 100 would work, but then our lookup time is less efficient.
Not necessarily.
Pointer chasing actually isn't great on modern hardware, and arrays benefit from cache prefetching, so a small array often beats a linked list.
Admittedly, I need to benchmark a bit to determine what's best here.
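For concreteness, a sketch of what an array-backed per-tag limiter might look like (names assumed; thread-safety and eviction deliberately ignored):

final class AdditionalTagCardinalityLimiter {
  private final String[] seen; // small, fixed budget of admitted values
  private int size;

  AdditionalTagCardinalityLimiter(int limit) {
    this.seen = new String[limit];
  }

  // Returns true while the value fits in the cardinality budget.
  boolean admit(String value) {
    // Linear scan: cache-friendly for small budgets and allocates nothing,
    // unlike a hash map's per-entry nodes.
    for (int i = 0; i < size; i++) {
      if (seen[i].equals(value)) {
        return true;
      }
    }
    if (size < seen.length) {
      seen[size++] = value;
      return true;
    }
    return false; // budget exhausted
  }
}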
private final Set<String> ignoredResources;
private final List<String> additionalTagKeys;
private final AdditionalTagsCardinalityLimiter cardinalityLimiter;
Rather than a single AdditionalTagsCardinalityLimiter that handles all the extra tags, I think a better design would be a handler per configured tag.
That would avoid the Map-of-Set design, which is adding undue allocation.
Although for the most part, I think we just need to take a step back and rework this code before we add more to it. Rather than continue to review this PR, I'd like to work on solutions to these issues more directly.
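Wiring-wise, the per-tag handlers could be index-aligned with the sorted key list, so admission needs no map lookup at all (a sketch, reusing the assumed limiter class above):

// Sketch: one limiter per configured key, index-aligned with additionalTagKeys.
private final AdditionalTagCardinalityLimiter[] limiters;

// In the constructor:
this.limiters = new AdditionalTagCardinalityLimiter[additionalTagKeys.size()];
for (int i = 0; i < limiters.length; i++) {
  limiters[i] = new AdditionalTagCardinalityLimiter(limitPerTag);
}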
What Does This Do
Adds an MVP implementation of span-derived primary tags for client-side stats in the Java tracer (spec). Users can configure a list of span tag keys via DD_TRACE_STATS_ADDITIONAL_TAGS; the tracer extracts the matching values from each eligible span, adds them as an additional aggregation dimension on MetricKey, and emits them in the v0.6/stats payload under the new AdditionalMetricTags field (a msgpack array of "key:value" strings, mirroring PeerTags).

Motivation
Customers want to aggregate APM stats by custom dimensions beyond the standard env primary tag (e.g. region, tenant_id) for more granular per-segment visibility. Previously this was only available via the deprecated agent-side config; this PR moves it tracer-side per the v1.3.0 spec.

Behavior summary
- Configured via DD_TRACE_STATS_ADDITIONAL_TAGS=region,tenant_id (env var), dd.trace.stats.additional.tags=region,tenant_id (system property), or trace.stats.additional.tags=region,tenant_id in dd-java-tracer.properties; a comma-separated list of tag keys.
- Values are read via span.unsafeGetTag(...); only present values are included. Trace/chunk-level tags are visible through the same call (see the sketch after this list).
- Spans with region:us-east-1 aggregate separately from region:eu-west-1; spans without the configured tag aggregate together (empty list).
- Extracted values are emitted as AdditionalMetricTags (repeated string, each entry "key:value") in the per-metric msgpack map. The field is only included when at least one tag was extracted, so the 99% of customers who don't configure the feature pay zero payload overhead.
- Configured keys are capped at MAX_ADDITIONAL_TAG_KEYS = 10. The backend stats pipeline accepts only 4 primary tag dimensions by default (occasionally up to ~10 for elevated quotas); going beyond that aggregates stats on dimensions that get dropped downstream while inflating tracer-side cardinality. Extras are dropped post-sort with a log.warn.
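A simplified sketch of the extraction step described above; the real logic lives in ConflatingMetricsAggregator, and the value caching discussed in the review is elided here:

private List<UTF8BytesString> getAdditionalTags(CoreSpan<?> span) {
  List<UTF8BytesString> result = null;
  for (String key : additionalTagKeys) {
    Object value = span.unsafeGetTag(key); // sees span- and chunk-level tags
    if (value != null) {
      if (result == null) {
        result = new ArrayList<>(additionalTagKeys.size());
      }
      result.add(UTF8BytesString.create(key + ":" + value)); // value cache elided
    }
  }
  // Spans without any configured tag all share the empty list and aggregate together.
  return result == null ? Collections.emptyList() : result;
}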
Additional Notes

This is an MVP. Out of scope for this PR:

- Per-value cardinality limits (the blocked_by_tracer overflow behavior from the spec). The cap above is on the number of configured keys, not on the number of unique values per key. Cardinality protection will land separately, as it requires an HLL/bounded-set estimator and a DD_TRACE_STATS_ADDITIONAL_TAGS_CARDINALITY_LIMIT config.

Contributor Checklist
Jira ticket: [PROJ-IDENT]