llo/datasource: consistent cache expiry calculation, reduce lock contention (#21224)
Conversation
Pull request overview
This PR is a follow-up to PR #21189 that improves the LLO datasource observation cache implementation. It focuses on two specific optimizations: ensuring consistent expiry time calculation across batch operations and reducing lock contention during cache reads.
Changes:
- Modified `AddMany` to calculate the expiry time once before the loop, ensuring all items in a batch have identical expiry timestamps
- Refactored the `Get` and `GetMany` methods to defer Prometheus metric updates until after releasing read locks, reducing lock contention
- Added comprehensive concurrency tests for `AddMany` and combined `AddMany`/`GetMany` operations
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| core/services/llo/observation/cache.go | Refactored Add/AddMany to calculate expiry before locking; moved metric updates outside read locks in Get/GetMany |
| core/services/llo/observation/cache_test.go | Added TestCache_ConcurrentAddMany and TestCache_ConcurrentAddManyGetMany to verify thread safety |
Follow-up to #21189.