[crashtracker] Don't send the same stack trace twice #1005
Conversation
Benchmarks Comparison
Benchmark execution time: 2025-05-28 18:17:38. Comparing candidate commit ded14a7 in PR branch.
Found 0 performance improvements and 0 performance regressions! Performance is the same for 52 metrics, 2 unstable metrics.
Candidate benchmark details omitted; baseline omitted due to size.
Not sure about the CI errors, but the PR looks good and improves the message format 🚀
5a33e7f to f7e240e (Compare):
…been swapped (#1042) This check is not 100% deterministic, but between the time the mock returns the response and we swap the atomic pointer holding the agent_info, we only need to perform a few operations. We wait for a maximum of 1s before failing the test, which should give way more time than necessary.
* build: bump prost-build. This updates prost-build across our crates.
* build: update sidecar's console-subscriber. In order to update prost-types & co after updating prost-build.
We need a `ddcommon/fips` and a `trace_utils/fips` feature. This functionality is written specifically for use in the datadog-lambda-extension. Other use cases should probably validate carefully that the right dependencies are being used in their builds.
#934) # What does this PR do? Removes the enum for stable config and defers the responsibility of checking which config params are allowed to the libraries themselves --- Co-authored-by: paullegranddc <[email protected]>
) **What does this PR do?** This PR applies a suggestion from #963 (comment) to set the `start_time` after obtaining the old profile. This will hopefully make it more clear that the `start_time` is used for the old profile, not the new one. **Motivation:** Make code more readable. **Additional Notes:** N/A **How to test the change?** Existing test coverage should be enough to validate this change (for instance, the Ruby profiler test suite explicitly tests the timestamps on profiles).
- The MAX_PAYLOAD_SIZE used in trace_utils::coalesce_send_data() was 50 MB. The agent drops payloads greater than 25 MB and returns a 413, so payloads were potentially being combined into one that would result in an error and a drop.
- In the sidecar's trace_flusher, payloads were being dropped if the queue's size exceeded the min drop size. The correct behavior is to check whether the payload being enqueued exceeds the min drop size and log an error.
- When the sidecar's trace_flusher dropped a payload that was too large, it was still adding that payload's size to the queue size. This could have led to subsequent payloads being dropped due to an incorrectly large queue size until a time-based flush was done.
- A bug was discovered in the test helper function poll_for_mock_hits(), where we were incorrectly always returning true when the expected hits were 0.
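A minimal sketch of the corrected enqueue check described above; the names (`MIN_DROP_SIZE`, `TraceQueue`) are hypothetical stand-ins, not the actual sidecar trace_flusher types:

```rust
// Illustrative only: check the payload being enqueued, not the accumulated
// queue size, and never count dropped payloads toward the queue size.
const MIN_DROP_SIZE: usize = 25 * 1024 * 1024; // agent rejects payloads over 25 MB

struct TraceQueue {
    queued_bytes: usize,
    payloads: Vec<Vec<u8>>,
}

impl TraceQueue {
    fn enqueue(&mut self, payload: Vec<u8>) {
        if payload.len() > MIN_DROP_SIZE {
            eprintln!("dropping {}-byte payload: exceeds drop threshold", payload.len());
            // Do not add the dropped payload's size to queued_bytes; doing so
            // would cause later payloads to be dropped for a size the queue
            // never actually reached, until a time-based flush resets it.
            return;
        }
        self.queued_bytes += payload.len();
        self.payloads.push(payload);
    }
}
```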
…eserialization (#992) # What does this PR do? This PR adds the following features:
* A public API to send trace chunks through the trace exporter. This function takes a `Vec<Vec<Span<T>>>` so we can rapidly iterate in ddtrace-rs on which data structure we want to use to pass spans.
* A public API to wait for the /info endpoint to be ready. This is needed for deterministic tests with the test agent, because otherwise it's possible to not have stats during the first submission.
* A way to specify env vars when instantiating an APM test agent. We need to skip some meta keys in dd-trace-rs using the `SNAPSHOT_IGNORED_ATTRS` env var, and to recreate the snapshots when running outside of CI with the `SNAPSHOT_CI` env var.
# Motivation Working on integrating the trace-exporter in dd-trace-rs
Co-authored-by: Daniel Schwartz-Narbonne <[email protected]>
* Add changes to integrate profiling-ffi by source in other projects.
* Add "lib" to crate-type.
* Fix blazesym version in crashtracker.
* Modify builder to explicitly pass the "crate-type" during compilation.
* Modify build script to explicitly pass the crate-type so as to avoid LTO issues.
* Modify Windows build scripts to use rustc to avoid LTO issues.
…RemoteConfigProduct (#1054)
… dispatch (#1061) # What does this PR do? Allows replacing the refcounted type used in tinybytes by using a custom dispatch table for clone and drop.
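The dispatch-table idea can be sketched roughly as below; the types (`BytesVTable`, `RawBytes`) are illustrative, not the actual tinybytes internals:

```rust
// Rough sketch: the buffer carries function pointers for clone and drop
// instead of hard-coding a single refcounted owner type.
struct BytesVTable {
    clone: unsafe fn(*const ()) -> *const (),
    drop: unsafe fn(*const ()),
}

struct RawBytes {
    data: *const u8,
    len: usize,
    owner: *const (),
    vtable: &'static BytesVTable,
}

impl Clone for RawBytes {
    fn clone(&self) -> Self {
        RawBytes {
            data: self.data,
            len: self.len,
            // Delegate ownership duplication to the table entry.
            owner: unsafe { (self.vtable.clone)(self.owner) },
            vtable: self.vtable,
        }
    }
}

impl Drop for RawBytes {
    fn drop(&mut self) {
        // Delegate release to the table entry.
        unsafe { (self.vtable.drop)(self.owner) }
    }
}
```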
* style: allow clippy::large_enum_variant
* style: fix clippy::double_ended_iterator_last
* style: fix clippy::io_other_error
…ishing (#1067) **What does this PR do?** This PR bootstraps a new GitHub workflow to publish the `libdatadog` Ruby gem directly from CI using [trusted publishing](https://guides.rubygems.org/trusted-publishing/). It's not fully set up yet, as I'm having trouble testing the authentication part, I suspect because I'm working off a branch and because the workflow does not yet exist on main. Note also that I've set up a `publish-ruby` environment in https://github.com/datadog/libdatadog/settings/environments which can be used to control who can run this action. **Motivation:** Replace our current release approach, which requires manually accessing authentication keys, with one that's fully automated and verified. **Additional Notes:** N/A **How to test the change?** As I said above, this is not yet fully wired up. Also, I've deliberately disabled the step where we would upload actual packages until we can fully validate that everything is working fine.
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files
@@ Coverage Diff @@
## main #1005 +/- ##
=======================================
Coverage 70.94% 70.94%
=======================================
Files 323 323
Lines 49501 49498 -3
=======================================
- Hits 35117 35115 -2
+ Misses 14384 14383 -1
Artifact Size Benchmark Report
aarch64-alpine-linux-musl
aarch64-unknown-linux-gnu
libdatadog-x64-windows
libdatadog-x86-windows
x86_64-alpine-linux-musl
x86_64-unknown-linux-gnu
What does this PR do?
Currently we send the stack trace in the log message, and again as part of the crash_info payload. This wastes bytes for no reason. Instead, just send it only in the crash_info payload.
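As a rough illustration of the intent (hypothetical types, not the actual crashtracker API), the report carries the frames only once, in the structured payload:

```rust
// Hypothetical sketch: the log message no longer embeds the stack trace;
// only the crash_info payload carries it.
struct CrashInfo {
    message: String,
    stacktrace: Vec<String>,
}

fn build_crash_info(signal: &str, frames: Vec<String>) -> CrashInfo {
    CrashInfo {
        // Before this change, the message also included a rendering of `frames`.
        message: format!("Process crashed with signal {signal}"),
        // The stack trace is sent once, here.
        stacktrace: frames,
    }
}
```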
Motivation
Let's not waste bytes.
Additional Notes
This will likely require synchronization with the backend.
How to test the change?
Describe here in detail how the change can be validated.