fix(deps): update all non-major gomod dependencies #925
Conversation
ℹ Artifact update notice
In order to perform the update(s) described in the table above, Renovate also ran commands against the following files:
go.mod
libs/common/go.mod
libs/decaying-lru/go.mod
libs/hwauthz/go.mod
libs/hwdb/go.mod
libs/hwes/go.mod
libs/hwlocale/go.mod
libs/hwtesting/go.mod
libs/hwutil/go.mod
libs/telemetry/go.mod
services/property-svc/go.mod
services/tasks-svc/go.mod
services/updates-svc/go.mod
services/user-svc/go.mod
spicedb/migrations/go.mod
This PR contains the following updates:
github.com/BurntSushi/toml: v1.4.0 -> v1.5.0
github.com/Mariscal6/testcontainers-spicedb-go: v0.1.0 -> v0.4.0
github.com/alecthomas/kong: v1.6.0 -> v1.12.1
github.com/authzed/authzed-go: v1.2.0 -> v1.6.0
github.com/coreos/go-oidc: v2.2.1+incompatible -> v2.4.0+incompatible
github.com/dapr/dapr: v1.14.4 -> v1.16.1
Further updates whose package names did not survive extraction: v1.11.0 -> v1.13.0, v10.23.0 -> v10.28.0, v4.18.1 -> v4.19.0, v1.0.1 -> v1.1.0, v2.2.0 -> v2.3.2, v5.7.2 -> v5.7.6, v2.4.1 -> v2.6.0, v4.3.0 -> v4.9.0, v1.20.5 -> v1.23.2, v9.7.0 -> v9.16.0, v1.33.0 -> v1.34.0, v0.3.2 -> v0.3.4, v1.10.0 -> v1.11.1, v0.35.0 -> v0.39.0, v0.34.0 -> v0.39.0 (twice), v0.58.0 -> v0.63.0, v1.33.0 -> v1.38.0 (six times), v0.33.0 -> v0.46.0, v0.25.0 -> v0.32.0, v0.21.0 -> v0.30.0, v1.69.2 -> v1.76.0, v1.36.1 -> v1.36.10
Warning
Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
Release Notes
BurntSushi/toml (github.com/BurntSushi/toml)
v1.5.0 (Compare Source)
Mostly some small bugfixes, with a few small new features:
Add Position.Col to mark the column where an error occurred (#410)
Print more detailed errors in the tomlv CLI
Ensure ParseError.Message is always set (#411)
Allow custom string types as map keys (#414; sketched below)
Mark meta keys as decoded when using Unmarshaler interface (#426)
Fix encoding when nested inline table ends with map (#438)
Fix encoding of several layers of embedded structs (#430)
Fix ErrorWithPosition panic when there is no newline in the TOML document (#433)
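As an illustration of the custom-string-key decoding added in v1.5.0 (#414), here is a minimal sketch; the Env type and the TOML document are invented for this example:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// Env is a custom string type; as of toml v1.5.0 it can be used
// directly as a map key when decoding (#414).
type Env string

func main() {
	var endpoints map[Env]string
	// Hypothetical document; each bare key decodes into an Env value.
	doc := `
production = "https://api.example.com"
staging    = "https://staging.example.com"
`
	if _, err := toml.Decode(doc, &endpoints); err != nil {
		panic(err)
	}
	fmt.Println(endpoints[Env("production")])
}
```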
Mariscal6/testcontainers-spicedb-go (github.com/Mariscal6/testcontainers-spicedb-go)
v0.4.0 (Compare Source)
What's Changed
Full Changelog: Mariscal6/testcontainers-spicedb-go@v0.3.0...v0.4.0
v0.3.0 (Compare Source)
What's Changed
Full Changelog: Mariscal6/testcontainers-spicedb-go@v0.2.0...v0.3.0
v0.2.0 (Compare Source)
What's Changed
New Contributors
Full Changelog: Mariscal6/testcontainers-spicedb-go@v0.1.0...v0.2.0
alecthomas/kong (github.com/alecthomas/kong)
v1.12.1 (Compare Source)
v1.12.0 (Compare Source)
v1.11.0 (Compare Source)
v1.10.0 (Compare Source)
v1.9.0 (Compare Source)
v1.8.1 (Compare Source)
v1.8.0 (Compare Source)
v1.7.0 (Compare Source)
v1.6.1 (Compare Source)
authzed/authzed-go (github.com/authzed/authzed-go)
v1.6.0 (Compare Source)
Highlights
Bring in v1.45.4 backwards-compatible changes for SpiceDB
What's Changed
d27fc02 by @josephschorr in #350
Full Changelog: authzed/authzed-go@v1.5.0...v1.6.0
v1.5.0 (Compare Source)
What's New
What's Changed
Full Changelog: authzed/authzed-go@v1.4.1...v1.5.0
v1.4.1 (Compare Source)
What's Changed
New Contributors
Full Changelog: authzed/authzed-go@v1.4.0...v1.4.1
v1.4.0 (Compare Source)
Highlights
What's Changed
New Contributors
Full Changelog: authzed/authzed-go@v1.3.0...v1.4.0
v1.3.0 (Compare Source)
What's Changed
Full Changelog: authzed/authzed-go@v1.2.1...v1.3.0
v1.2.1 (Compare Source)
What's Changed
Full Changelog: authzed/authzed-go@v1.2.0...v1.2.1
coreos/go-oidc (github.com/coreos/go-oidc)
v2.4.0+incompatible (Compare Source)
v2.3.0+incompatible (Compare Source)
dapr/dapr (github.com/dapr/dapr)
v1.16.1: Dapr Runtime v1.16.1 (Compare Source)
Dapr 1.16.1
This update includes bug fixes:
Actor Initialization Timing Fix
Problem
When running Dapr with an --app-port specified but no application listening on that port (either because no server existed or because server startup was delayed), the actor runtime would initialize immediately, before the app channel was ready. This created a race condition in which actors tried to communicate with an application that wasn't available yet, resulting in repeated error logs.
Impact
This created a poor user experience, with confusing error messages, when users specified an --app-port but had no application listening on that port.
Root cause
The actor runtime initialization was occurring before the application channel was ready, creating a race condition where actors attempted to communicate with an unavailable application.
Solution
Defer actor runtime initialization until the application channel is ready. The runtime now logs clear "waiting for application to listen on port XXXX" messages instead of confusing error logs.
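A minimal sketch of this kind of gating, assuming a plain TCP readiness probe; the function name, polling interval, and log wording are illustrative rather than Dapr's actual implementation:

```go
package actors

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

// waitForAppPort blocks until something accepts TCP connections on the
// configured app port, so actor initialization can be deferred until the
// app channel is actually ready.
func waitForAppPort(ctx context.Context, port int) error {
	addr := fmt.Sprintf("127.0.0.1:%d", port)
	for {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil // ready: safe to initialize the actor runtime
		}
		log.Printf("waiting for application to listen on port %d", port)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second):
		}
	}
}
```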
Sidecar Injector Crash with Disabled Scheduler
Problem
The sidecar injector crashes with an error (dapr-scheduler-server StatefulSet not found) when the scheduler is disabled via the Helm chart (global.scheduler.enabled: false).
Impact
The crash prevents the sidecar injector from functioning correctly when the scheduler is disabled, disrupting deployments.
Root cause
A previous change caused the dapr-scheduler-server StatefulSet to be removed when the scheduler was disabled, instead of scaling it to 0 as originally intended. The injector, hardcoded to check for the StatefulSet in the injector.go file, fails when it is not found.
Solution
Revert the behavior to scale the dapr-scheduler-server StatefulSet to 0 when the scheduler is disabled, instead of removing it, as implemented in the Helm chart.
Workflow actor reminders stopped after Application Health check transition
Problem
Application Health checks transitioning from unhealthy to healthy were incorrectly configuring the scheduler clients to stop watching for actor reminder jobs.
Impact
The misconfiguration in the scheduler clients caused workflows to stop executing, because reminders no longer fired.
Root cause
On an Application Health change, daprd could trigger an actor-types update with an empty slice, which caused a scheduler client reconfiguration. Because there were no changes to the actor types, daprd never received a new version of the placement table, so the scheduler clients were left misconfigured: when daprd sends an actor-types update to the placement server it wipes the known actor types from the scheduler client, and because no acknowledgement with a new table version ever arrived from placement, the scheduler client was never updated back with the actor types.
Solution
Prevent any changes to the hosted actor types if the input slice is empty.
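A hedged sketch of that guard; the schedulerClient type and its fields are invented for illustration:

```go
package scheduler

import "sync"

type schedulerClient struct {
	mu               sync.Mutex
	hostedActorTypes []string
}

// setHostedActorTypes updates the actor types this daprd instance hosts.
// An empty slice is ignored: wiping the known types would leave the client
// misconfigured, because no new placement table version follows to restore them.
func (c *schedulerClient) setHostedActorTypes(types []string) {
	if len(types) == 0 {
		return
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	c.hostedActorTypes = types
}
```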
Fix Scheduler Etcd client port networking in standalone mode
Problem
The Scheduler Etcd client port is not available when running in Dapr CLI standalone mode.
Impact
Cannot perform Scheduler Etcd admin operations in Dapr CLI standalone mode.
Root cause
The Scheduler Etcd client port only listens on localhost.
Solution
The Scheduler Etcd client listen address is now configurable via the --scheduler-etcd-client-listen-address CLI flag, meaning the port can be exposed when running in standalone mode.
Fix Helm chart not honoring --etcd-embed argument
Problem
The Scheduler would always treat --etcd-embed as true, even when set to false in the context of the Helm chart.
Impact
Cannot use external etcd addresses since Scheduler would always assume embedded etcd is used.
Root cause
The Helm template format passed the boolean argument as a separate argument rather than inline.
Solution
The template format string was fixed to allow .etcdEmbed to be set to false.
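For intuition, the same separate-vs-inline distinction exists in Go's standard flag package (this sketch uses stdlib flags, not the Helm template): a boolean flag does not consume a space-separated value, so only the inline form actually turns it off:

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	etcdEmbed := flag.Bool("etcd-embed", true, "run embedded etcd")
	flag.Parse()
	// Invoked as: prog --etcd-embed false
	// The bare --etcd-embed sets the flag to true and "false" is left as a
	// positional argument; only prog --etcd-embed=false disables the flag.
	fmt.Println("etcd-embed:", *etcdEmbed, "positional args:", flag.Args())
}
```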
Component initialization timeout check before using reporter
Problem
The component init timeout was checked only after the component reporter had already been used.
Impact
This misalignment could lead to false positives: dapr could report success even though it later returned an error due to the timeout check.
Solution
Move the timeout check so that it runs immediately after the actual component initialization and before the component reporter.
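A hedged sketch of the reordering, with invented names for the component interface and reporter:

```go
package components

import (
	"context"
	"time"
)

type Component interface {
	Init(ctx context.Context) error
}

// initComponent checks the timeout immediately after Init and only then
// invokes the reporter, so a timed-out init is no longer reported as success.
func initComponent(ctx context.Context, c Component, timeout time.Duration, report func(error)) error {
	initCtx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	err := c.Init(initCtx)
	if err == nil && initCtx.Err() != nil {
		err = initCtx.Err() // init finished only after the deadline passed
	}
	report(err)
	return err
}
```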
Fix Regression in pubsub.kafka Avro Message Publication
Problem
The pubsub.kafka component failed to publish Avro messages in Dapr 1.16, breaking existing workflows.
Impact
Avro messages could not be published correctly, causing failures in Kafka message pipelines and potential data loss or dead-lettering issues.
Root cause
The Kafka pubsub component did not correctly create codecs in the SchemaRegistryClient. Additionally, the goavro library had a bug converting default null values that broke legitimate schemas.
Solution
Enabled codec creation in the Kafka SchemaRegistryClient and upgraded github.com/linkedin/goavro/v2 from v2.13.1 to v2.14.0 to fix null value handling. The metadata options useAvroJson and excludeHeaderMetaRegex were validated to ensure correct message encoding and dead-letter handling. Manual tests confirmed that Avro and JSON message publication works as expected.
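A minimal sketch of codec creation with goavro v2 for a schema that uses a default-null union field; the schema is invented, but it has the shape of legitimate schema that the pre-v2.14.0 default-null bug broke:

```go
package main

import (
	"fmt"

	"github.com/linkedin/goavro/v2"
)

func main() {
	// Hypothetical schema with a union field defaulting to null, the kind
	// of default that goavro v2.13.x mishandled (fixed in v2.14.0).
	schema := `{
	  "type": "record",
	  "name": "Order",
	  "fields": [
	    {"name": "id", "type": "string"},
	    {"name": "note", "type": ["null", "string"], "default": null}
	  ]
	}`
	codec, err := goavro.NewCodec(schema)
	if err != nil {
		panic(err)
	}
	native := map[string]interface{}{"id": "42", "note": nil}
	binary, err := codec.BinaryFromNative(nil, native)
	if err != nil {
		panic(err)
	}
	fmt.Printf("encoded %d bytes\n", len(binary))
}
```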
Ensure Files are Closed Before Reading in SFTP Component
Problem
Some SFTP servers require files to be closed before they become available for reading. Without closing, read operations could fail or return incomplete data.
Impact
SFTP file reads could fail or return incomplete data on certain servers, causing downstream processing issues.
Root cause
The SFTP component did not explicitly close files after writing, which some servers require to make files readable.
Solution
Updated the SFTP component to close files after writing, ensuring they are available for reading on all supported servers.
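A generic sketch of the write-close-read sequence using github.com/pkg/sftp, assuming an already-connected client; whether the Dapr component follows this exact call order is an implementation detail:

```go
package sftpfix

import (
	"io"

	"github.com/pkg/sftp"
)

// writeThenRead explicitly closes the file after writing before opening it
// again, since some SFTP servers only make a file readable once it is closed.
func writeThenRead(client *sftp.Client, path string, data []byte) ([]byte, error) {
	f, err := client.Create(path)
	if err != nil {
		return nil, err
	}
	if _, err := f.Write(data); err != nil {
		f.Close()
		return nil, err
	}
	if err := f.Close(); err != nil { // the fix: close before reading
		return nil, err
	}
	r, err := client.Open(path)
	if err != nil {
		return nil, err
	}
	defer r.Close()
	return io.ReadAll(r)
}
```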
Fix AWS Secrets Manager YAML Metadata Parsing
Problem
The AWS Secrets Manager component failed to correctly parse YAML metadata, causing boolean fields like multipleKeyValuesPerSecret to be misinterpreted.
Impact
Incorrect metadata parsing could lead to misconfiguration, preventing secrets from being retrieved or handled properly.
Root cause
The component used a JSON marshal/unmarshal approach in getSecretManagerMetadata, which did not handle string-to-boolean conversion correctly for YAML metadata.
Solution
Replaced JSON marshal/unmarshal with kitmd.DecodeMetadata to correctly parse YAML metadata and convert string fields to their proper types, ensuring multipleKeyValuesPerSecret works as expected.
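For intuition, a self-contained sketch of the failure mode: YAML metadata values arrive as strings, and a JSON round-trip cannot place the string "true" into a bool field, while an explicit conversion (the kind of work a decode-metadata helper performs) succeeds. The struct below is a stand-in, not the component's real metadata type:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

type smMeta struct {
	MultipleKeyValuesPerSecret bool `json:"multipleKeyValuesPerSecret"`
}

func main() {
	// Metadata values arrive from YAML as strings.
	raw := map[string]string{"multipleKeyValuesPerSecret": "true"}

	// Old approach: JSON round-trip. Fails, because "true" is a string, not a bool.
	b, _ := json.Marshal(raw)
	var viaJSON smMeta
	fmt.Println("json round-trip error:", json.Unmarshal(b, &viaJSON))

	// A decode-metadata-style helper converts the string explicitly.
	var viaDecode smMeta
	if v, err := strconv.ParseBool(raw["multipleKeyValuesPerSecret"]); err == nil {
		viaDecode.MultipleKeyValuesPerSecret = v
	}
	fmt.Println("decoded value:", viaDecode.MultipleKeyValuesPerSecret)
}
```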
Reuse Kafka Clients in AWS v2 Migration
Problem
After migrating to the AWS v2 Kafka client, a new client was created for every message published, causing inefficiency and unnecessary resource usage.
Impact
Frequent client creation led to performance degradation, increased connection overhead, and potential resource exhaustion during high-throughput message publishing.
Root cause
The AWS v2 client integration did not implement client reuse, resulting in a new client being instantiated for each publish operation.
Solution
Updated the Kafka component to reuse clients instead of creating a new one for each message, improving performance and resource efficiency.
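A hedged sketch of the reuse pattern with invented types; the actual component is wired differently:

```go
package kafka

import "sync"

// kafkaPublisher caches a single client instead of constructing one per
// Publish call, which is the fix described above.
type kafkaPublisher struct {
	once   sync.Once
	client *kafkaClient
	err    error
}

type kafkaClient struct{ /* connection state elided */ }

func newKafkaClient() (*kafkaClient, error) { return &kafkaClient{}, nil }

func (k *kafkaClient) send(topic string, msg []byte) error { return nil }

func (p *kafkaPublisher) Publish(topic string, msg []byte) error {
	// Lazily create the client exactly once and reuse it afterwards.
	p.once.Do(func() { p.client, p.err = newKafkaClient() })
	if p.err != nil {
		return p.err
	}
	return p.client.send(topic, msg)
}
```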
Fix Kafka AWS Authentication Configuration Bug
Problem
The Kafka AWS authentication configuration was not initialized correctly, causing authentication failures.
Impact
Kafka components using AWS authentication could fail to connect, preventing message publishing and consumption.
Root cause
A bug in the Kafka AWS auth config initialization prevented proper setup of authentication parameters.
Solution
Fixed the initialization logic in the Kafka AWS auth configuration to ensure proper authentication and connectivity.
Enhanced debug logs for placement server
Problem
Users experiencing issues with the Placement server don't get enough information from the debug logs to troubleshoot or to understand what state the Placement server is in.
Impact
Inability to troubleshoot the Placement server.
Solution
Add more debug logs to get more detailed information about placement server dissemination logic.
Workflow actors never registered again after a failed actor registration in the GetWorkItems connection callback
Problem
Workflow workers connect to dapr, but the workflow actors are never registered, so existing workflows do not execute and new workflows cannot be scheduled.
Impact
The Workflows API becomes unavailable.
Root cause
When the durabletask-go library executed the "on GetWorkItems connection" callback and that callback failed to actually register the actors and returned an error, the "on GetWorkItems disconnect" callback was not invoked. As a result, the sidecar never tried to register the actors again, because the workflow engine kept a counter that had been incremented by 1 but never got decremented.
Solution
Refactor durabletask-go to guarantee that the "on disconnect" callback is always invoked once the "on connect" callback has been invoked.
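A hedged sketch of that guarantee, with invented names; the real change lives in durabletask-go's stream handling:

```go
package workitems

// serveWorkItems guarantees that once onConnect has been invoked,
// onDisconnect is invoked too, even when onConnect itself fails, so the
// engine's connection counter can never be left permanently incremented.
func serveWorkItems(onConnect func() error, onDisconnect func(), stream func() error) error {
	err := onConnect()
	defer onDisconnect() // previously skipped when onConnect returned an error
	if err != nil {
		return err
	}
	return stream()
}
```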
Enabled DynamoDB as workflow statestore
Problem
AWS DynamoDB could not be used as a state store for workflows.
Impact
Users trying to configure AWS DynamoDB for workflows would face hang-ups in their workflow execution.
Root cause
The AWS DynamoDB component did not handle binary data types correctly, saving them in the underlying component as strings.
Solution
The component was modified to save the binary data in the correct DynamoDB Binary type.
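A hedged sketch using the AWS SDK for Go v2 attribute-value types, contrasting the old string coercion with the native Binary type:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

func main() {
	payload := []byte{0x01, 0x02, 0xff}

	// Before the fix: binary data was coerced into a String attribute,
	// which mishandles non-UTF-8 bytes.
	asString := &types.AttributeValueMemberS{Value: string(payload)}

	// After the fix: stored using DynamoDB's native Binary type.
	asBinary := &types.AttributeValueMemberB{Value: payload}

	fmt.Printf("S: %q\nB: %v\n", asString.Value, asBinary.Value)
}
```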
v1.16.0: Dapr Runtime v1.16.0 (Compare Source)
Dapr 1.16
We're excited to announce the release of Dapr 1.16! It brings improvements to workflow performance, resolved issues, and new features, and we recommend you upgrade.
Thanks to all the new and existing contributors who helped make this release happen.
If you're new to Dapr, visit the getting started page and familiarize yourself with Dapr.
The docs have been updated with all the new features and changes of this release. To get started with the new capabilities introduced in this release, see the Concepts and Developing applications sections.
Go to the upgrading Dapr section for steps to upgrade to version 1.16.
Acknowledgements
Thanks to everyone who made this release possible!
@abossard, @acroca, @adam6878, @aladd04, @alicejgibbons, @antontroshin, @artur-ciocanu, @artursouza, @bibryam, @cicoyle, @ConstantinChirila, @cwalsh2189, @dani-maarouf, @Dzvezdana, @Eileen-Yu, @elena-kolevska, @elKei24, @ericsyh, @famarting, @filintod, @fvandillen, @Gallardot, @giterinhub, @hhunter-ms, @iddeepak, @inishchith, @javier-aliaga, @jev-e, @jjcollinge, @jmenziessmith, @JoshVanL, @kaibocai, @kaspernissen, @KentHsu, @knotseaborg, @ManuInNZ, @marcduiker, @mathieu-benoit, @mcruzdev, @middt, @mikeee, @msfussell, @MyMirelHub, @nelson-parente, @ngruson, @nmalocic, @olitomlinson, @osouzaelias, @passuied, @pnagaraj80, @salaboy, @sicoyle, @siri-varma, @swatimodi-scout, @theonefx, @thrubovc, @tmiddlet2666, @TomasEkeli, @tscolari, @twinguy, @vil02, @WhitWaldo, @willvelida, @yaron2
Highlights
These are the v1.16 release highlights:
Multi-application workflows
The Workflow API now supports multi-application workflows, enabling you to orchestrate complex business processes that span across multiple applications. This allows a workflow to call activities or start child workflows in different applications, distributing the workflow execution while maintaining the security, reliability, and durability guarantees of Dapr's workflow engine.
By using multi-app workflows, you can design distributed business processes, such as cross-application order processing, complex approval chains that involve multiple workflows and activities, and AI/ML pipelines that coordinate between LLM services and GPU-intensive workloads. Workflow durability and consistency are maintained across application boundaries, ensuring your distributed workflows remain resilient even when individual applications experience temporary failures.
Multi-application workflows are supported in the Java and Go SDKs as of this release. Learn more in the multi-app workflows documentation.
Workflow performance
Dapr continues to invest in the Workflows API building block, with this release focusing on performance and stabilization, particularly when using Workflows in production at scale. These enhancements make Dapr Workflows more robust and performant for high-throughput workloads with high-concurrency requirements.
Key Improvements:
These improvements mean that Dapr handles larger Workflow throughput, consumes less memory and CPU overall, and consumes those resources more steadily.
The following table shows results of performance testing of Workflows on Dapr v1.15 and v1.16.
Configuration
📅 Schedule: Branch creation - Between 06:00 PM and 09:59 PM, only on Friday ( * 18-21 * * 5 ) in timezone Europe/Berlin, Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR is behind base branch, or you tick the rebase/retry checkbox.
👻 Immortal: This PR will be recreated if closed unmerged. Get config help if that's undesired.
This PR was generated by Mend Renovate. View the repository job log.