Conversation

Member

@toddbaert toddbaert commented Oct 7, 2025

Adds "debounce" hook.

This is a utility "meta" hook that can be used to debounce or rate-limit other hooks based on various parameters.
This can be especially useful for UI frameworks and SDKs that frequently re-render and re-evaluate flags (React, Angular, etc.).

The hook maintains a simple expiring cache with a fixed max size and keeps a record of recent evaluations based on a user-defined key-generation function (keySupplier).

Simply wrap your hook with the debounce hook by passing it as a constructor arg, and then configure the remaining options.
In the example below, we wrap a logging hook so that it only logs a maximum of once a minute for each flag key, no matter how many times that flag is evaluated.

const debounceHook = new DebounceHook<string>(loggingHook, {
  debounceTime: 60_000,             // how long to wait before the hook can fire again
  maxCacheItems: 100,               // max number of items to keep in the cache; if exceeded, the oldest item is dropped
});

// add the hook globally
OpenFeature.addHooks(debounceHook);

// or at a specific client
client.addHooks(debounceHook);

⚠️ Initially I implemented this with an LRU cache, but after Gemini pointed out some non-LRU behavior in my cache, I realized we don't actually want an LRU cache here, so I renamed it. We explicitly don't want to update recency when items are retrieved (something fundamental to LRUs), because then a flag evaluated over and over with the same cache key would NEVER fire the hook again; we simply want to rate-limit hook side effects, not prevent them forever.
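For illustration, the non-LRU behavior described above could be sketched roughly like this (ExpiringCache and its members are hypothetical names, not the actual implementation): reads don't refresh an entry's age, so even a hot key expires on schedule and the wrapped hook eventually fires again.

```typescript
// Hypothetical sketch of a fixed-size expiring cache that is deliberately
// NOT an LRU: get() never refreshes recency, and eviction drops the oldest
// entry by insertion order when the size cap is hit.
class ExpiringCache<V> {
  private entries = new Map<string, { value: V; expiresAt: number }>();

  constructor(
    private readonly ttlMs: number,
    private readonly maxItems: number,
  ) {}

  get(key: string): V | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() >= entry.expiresAt) {
      this.entries.delete(key); // expired: drop it so the hook can fire again
      return undefined;
    }
    return entry.value; // note: no recency update here
  }

  set(key: string, value: V): void {
    // Map preserves insertion order, so the first key is the oldest.
    if (!this.entries.has(key) && this.entries.size >= this.maxItems) {
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```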

@toddbaert toddbaert requested review from a team as code owners October 7, 2025 19:35
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new debounce hook, which is a great utility. The implementation is mostly solid, but I've found a critical issue in the LRU cache implementation that could lead to incorrect cache eviction. I've also identified a few configuration issues with file paths that might break the build or IDE features. Additionally, there are some minor improvements for documentation, tests, and robustness that I've suggested. Overall, this is a great addition once these issues are addressed.

@toddbaert toddbaert marked this pull request as draft October 7, 2025 21:14
Signed-off-by: Todd Baert <[email protected]>
@toddbaert toddbaert force-pushed the feat/add-debounce-hook branch from 47789bc to 5aa11ce Compare October 8, 2025 01:11
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@toddbaert
Member Author

/gemini review

@toddbaert toddbaert marked this pull request as ready for review October 8, 2025 01:14
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new debounce hook, which is a great utility for rate-limiting hook executions in environments with frequent re-renders. The implementation is solid, using a fixed-size expiring cache to manage debouncing. I've identified a few areas for improvement, mainly related to configuration paths and documentation clarity, including a couple of high-severity path issues in project.json and tsconfig.base.json that could affect the monorepo tooling. I've also suggested improvements to a test case to make it clearer and more accurate. Overall, this is a well-executed feature.

@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@toddbaert toddbaert force-pushed the feat/add-debounce-hook branch from 415055a to d073cf7 Compare October 8, 2025 02:07
@toddbaert toddbaert force-pushed the feat/add-debounce-hook branch from d073cf7 to e2586a1 Compare October 8, 2025 02:09
Member

@guidobrei guidobrei left a comment


Nice job. Only 2 minor comments.

toddbaert and others added 5 commits October 8, 2025 08:54
Co-authored-by: Michael Beemer <[email protected]>
Signed-off-by: Todd Baert <[email protected]>
Signed-off-by: Todd Baert <[email protected]>
Signed-off-by: Todd Baert <[email protected]>
Signed-off-by: Todd Baert <[email protected]>
Member

@lukas-reining lukas-reining left a comment


This looks good, I left some nits you can ignore.
I have a question regarding hook data that I would like to clarify before approving :)

  }

  before(hookContext: HookContext, hookHints?: HookHints) {
    this.maybeSkipAndCache(
Member


I think we need to consider hookContext.hookData here. As the stages are cached individually, the hook data would be missing if before is evaluated from cache while finally relies on it and is not cached yet.
The new OTEL Hooks in #1372 (which should even work in the web-sdk now, which we might not want to debounce but nevermind :D) rely on the Hook data, and an example like: https://github.com/open-feature/js-sdk/blob/main/packages/web/test/hooks-data.spec.ts#L45 would suffer from it.

I think that would especially apply in cases where we have thrashing on the cache, since then there could be very large TTL gaps between the stages, resulting in a case where before is cached and after is not, calling after with empty hook data.

I see two options:

  1. Caching them all together instead of individually could fix this? Then we would not care about hook data in debounced cases.
  2. We could cache the hook data and see if the former stage has cached hook data. But I think this could become problematic, as there might not be just one hook data set: different combinations of hook data from (non-)cached before, error, and after stages can all contribute to the hook data observed in the finally stage.

In the end, I think I would prefer option 1, as I think the set of stages should always be the same when cached, which it "nearly" is already due to similar (but not identical) TTLs when there are no evictions.

Does that make sense or am I getting it wrong @toddbaert?

Member Author

@toddbaert toddbaert Oct 8, 2025


Good point.

I think I also agree with 1...

@beeme1mr ?

Member Author


I've changed the options so there's only a single cacheKeySupplier function. The cached entry now has a result for each stage; this is needed because we want all stages to expire at the same time, as a single entry, so they can't become disjoint.
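For the record, the "single entry per key, one result per stage" idea could be sketched roughly like this (Stage, DebounceEntry, and shouldRun are illustrative names, not the actual implementation). Because all four stage records live in one cache entry with one expiry, the stages can never expire at different times and drift apart.

```typescript
// Hypothetical sketch: one cache entry per key holds the debounce state for
// every stage, and a single expiresAt applies to the whole entry.
type Stage = 'before' | 'after' | 'error' | 'finally';

interface DebounceEntry {
  expiresAt: number;
  // which stages have already run within the current debounce window
  ran: Partial<Record<Stage, boolean>>;
}

function shouldRun(
  cache: Map<string, DebounceEntry>,
  key: string,
  stage: Stage,
  ttlMs: number,
  now: number = Date.now(),
): boolean {
  const entry = cache.get(key);
  if (!entry || now >= entry.expiresAt) {
    // new window: all stages share this one entry and expire together
    const ran: Partial<Record<Stage, boolean>> = {};
    ran[stage] = true;
    cache.set(key, { expiresAt: now + ttlMs, ran });
    return true;
  }
  if (entry.ran[stage]) return false; // debounced within the current window
  entry.ran[stage] = true;
  return true;
}
```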

/**
 * Function to generate the cache key for the before stage of the wrapped hook.
 * If the cache key is found in the cache, the hook stage will not run.
 * If not defined, the DebounceHook will no-op for this stage (inner hook will always run for this stage).
Member


Why don't we default to using the 'flagKey'? It seems like that would be the most common use case. It looks like the hook doesn't do anything if you don't set the beforeCacheKeySupplier, but it's optional. I think we should make it either required or set a default. My preference would be to set a default.

Member Author


I think we should make it either required or set a default. My preference would be to set a default.

We can do this, but the awkward part is that then it would debounce every stage, and if you want to NOT debounce one, you'd have to explicitly assign undefined.

Why don't we default to using the 'flagKey'?

I think flag key is almost certainly NOT what you'd want for after/finally hooks - you would almost certainly want to involve the value/variant as well... but then we have to ask which of those we should use, considering variant isn't always present. All these questions are why I didn't provide a default.
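For example, a user-supplied key function for after/finally that involves the variant (falling back to the serialized value when no variant is present) might look like this. The minimal shapes below are stand-ins for the real SDK types, and afterKeySupplier is a hypothetical name, not part of the package.

```typescript
// Stand-in for the SDK's evaluation details, reduced to the fields used here.
interface MinimalDetails {
  value: unknown;
  variant?: string;
}

// Key the after/finally stages on flag key plus the result, so a changed
// evaluation result is debounced separately from the previous one.
const afterKeySupplier = (flagKey: string, details: MinimalDetails): string =>
  `${flagKey}:${details.variant ?? JSON.stringify(details.value)}`;
```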

Member Author


This comment has implications here.

Member Author


Especially in light of @lukas-reining 's comment above, I'm starting to agree with a single supplier, and providing a default just based on flag-key.

Member Author


I've done this... () => flagKey is now the default.

Member


If we want to target the web only, please make that very clear at the top of the file.

We could consider making this work on the server and client, like @lukas-reining did for the telemetry hooks. Since this hook is so flexible, I could see some use cases on the server, specifically when running it with a logging hook.

Member Author


Ya I see no reason why this can't be for both, I can make that change.

Member


This is great!
We need to cache the return value of the before hook too then. But if we cache all stages together as proposed for the other issue it does not matter anymore :)

Member Author

@toddbaert toddbaert Oct 8, 2025


I don't think this is actually possible.

The problem is that server hooks are possibly async, so to cache the return value of the before hook (and also to generally handle async errors), the inner hook has to be awaited (forcing the debounce hook to be async) - but the web doesn't support async hooks. It's basically the same reason we don't have a shared hook implementation in the web/server SDKs (only a BaseHook type): the async support of the server is incompatible with the web hook. This isn't a problem for the OTel hooks because they are fire-and-forget, so nothing is actually awaited.

I COULD have a shared base version of this package and server/web versions that share most code, but like I said... I think the same thing preventing us from having a single Hook in both SDKs prevents us from creating a single version of this hook for both SDKs.

Open to suggestions here, or to be told I'm wrong.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Todd Baert <[email protected]>
toddbaert and others added 5 commits October 8, 2025 11:59
@toddbaert
Member Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

Thank you for adding the debounce hook. This is a great utility for performance-sensitive applications. I've reviewed the code and found a few issues, including some critical bugs in the caching logic and a few configuration problems. I've also suggested a refactoring to simplify the implementation and improve maintainability. Please take a look at my comments.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Todd Baert <[email protected]>
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
toddbaert and others added 2 commits October 8, 2025 17:01
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Todd Baert <[email protected]>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Todd Baert <[email protected]>
@open-feature open-feature deleted a comment from gemini-code-assist bot Oct 8, 2025
@toddbaert
Member Author

@beeme1mr @lukas-reining please re-review.
