feat: add debounce hook #1389
Conversation
Code Review
This pull request introduces a new debounce hook, which is a great utility. The implementation is mostly solid, but I've found a critical issue in the LRU cache implementation that could lead to incorrect cache eviction. I've also identified a few configuration issues with file paths that might break the build or IDE features. Additionally, there are some minor improvements for documentation, tests, and robustness that I've suggested. Overall, this is a great addition once these issues are addressed.
/gemini review
Code Review
This pull request introduces a new debounce hook, which is a great utility for rate-limiting hook executions in environments with frequent re-renders. The implementation is solid, using a fixed-size expiring cache to manage debouncing. I've identified a few areas for improvement, mainly related to configuration paths and documentation clarity, including a couple of high-severity path issues in project.json and tsconfig.base.json that could affect the monorepo tooling. I've also suggested improvements to a test case to make it clearer and more accurate. Overall, this is a well-executed feature.
Nice job. Only 2 minor comments.
This looks good, I left some nits you can ignore.
I have a question regarding hook data that I would like to clarify before approving :)
}

before(hookContext: HookContext, hookHints?: HookHints) {
  this.maybeSkipAndCache(
I think we need to consider hookContext.hookData here. As the stages are cached individually, the hook data would be missing if before is evaluated from cache while finally relies on it and is not cached yet.
The new OTEL Hooks in #1372 (which should even work in the web-sdk now, which we might not want to debounce but nevermind :D) rely on the Hook data, and an example like: https://github.com/open-feature/js-sdk/blob/main/packages/web/test/hooks-data.spec.ts#L45 would suffer from it.
I think that would especially apply in cases where we have thrashing on the cache, as then there could be very large gaps in TTL between the stages, resulting in a case where before is cached and after is not, calling after with empty hook data.
I see two options:
- Caching them all together instead of individually could fix this? Then we would not care about hook data in debounced cases.
- We could cache the hook data and check whether the former stage has cached hook data. But I think this could become problematic, as there might not be only one hook data set, because of the different combinations of hook data from (non-)cached before, error and after stages that can all contribute to the hook data observed in the finally stage.
In the end, I think I would prefer option 1, as I think the set of stages should always be the same when cached, which it mostly "nearly" is due to similar (not the same) TTLs if there are no evictions.
Does that make sense or am I getting it wrong @toddbaert?
I've changed the options so there's only a single cacheKeySupplier function. The cached entry now has a result for each stage - this is needed because we want all stages to expire at the same time as a single entry to prevent disjunction.
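For illustration, a single-key entry holding per-stage results might look roughly like the sketch below; the type and field names are assumptions made up for this example, not the PR's actual code.

// Illustrative only: names and shape are assumed, not taken from the PR.
type Stage = 'before' | 'after' | 'error' | 'finally';

// One entry per cache key; all stages live (and expire) together, so a key can
// never end up with "before" debounced while "finally" is not.
type DebounceCacheEntry = {
  // Result (or a marker that the stage already ran) recorded per stage
  // within the current TTL window.
  results: Partial<Record<Stage, unknown>>;
};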
/**
 * Function to generate the cache key for the before stage of the wrapped hook.
 * If the cache key is found in the cache, the hook stage will not run.
 * If not defined, the DebounceHook will no-op for this stage (inner hook will always run for this stage).
Why don't we default to using the 'flagKey'? It seems like that would be the most common use case. It looks like the hook doesn't do anything if you don't set the beforeCacheKeySupplier, but it's optional. I think we should make it either required or set a default. My preference would be to set a default.
I think we should make it either required or set a default. My preference would be to set a default.
We can do this, but the awkward part is that then it would debounce every stage, and if you want to NOT debounce one, you'd have to explicitly assign undefined.
Why don't we default to using the 'flagKey'?
I think flag key is almost certainly NOT what you'd want for after/finally hooks - you would almost certainly want to involve the value/variant as well... but then we have to ask which of those we should use, considering variant isn't always present. All these questions are why I didn't provide a default.
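For instance (purely as a sketch; the supplier's actual signature in this PR may differ), an after/finally key could combine the flag key with the variant, falling back to the serialized value when no variant is present:

// Hypothetical supplier: argument shapes are assumptions, not the PR's API.
const afterKeySupplier = (
  hookContext: { flagKey: string },
  details?: { variant?: string; value?: unknown },
) => `${hookContext.flagKey}:${details?.variant ?? JSON.stringify(details?.value)}`;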
This comment has implications here.
Especially in light of @lukas-reining's comment above, I'm starting to agree with a single supplier, and providing a default just based on flag-key.
I've done this... () => flagKey is now the default.
If we want to target the web only, please make that very clear at the top of the file.
We could consider making this work on the server and client, like @lukas-reining did for the telemetry hooks. Since this hook is so flexible, I could see some use cases on the server, specifically when running it with a logging hook.
Ya I see no reason why this can't be for both, I can make that change.
This is great!
We need to cache the return value of the before hook too, then. But if we cache all stages together as proposed for the other issue, it does not matter anymore :)
I don't think this is actually possible.
The problem is that server hooks are possibly async, so to cache the return value of the before hook (and also to generally handle async errors) the inner hook has to be awaited (forcing the debounce hook to be async) - but the web doesn't support async hooks. It's basically the same reason we don't have a shared hook implementation in the web/server SDKs (only a BaseHook type) - the async support of the server is incompatible with the web hook. This isn't a problem for the OTel hooks because they are fire-and-forget, so nothing is actually awaited.
I COULD have a shared base version of this package and server/web versions that share most code, but like I said... I think the same thing preventing us from having a single Hook in both SDKs prevents us from creating a single version of this hook for both SDKs.
Open to suggestions here, or to be told I'm wrong.
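To illustrate the mismatch (simplified signatures, not the SDKs' exact type definitions):

// Rough sketch of why a single shared implementation is hard.
type EvaluationContext = Record<string, unknown>;

// Server hooks may return a Promise from "before"...
type ServerBefore = (hookContext: unknown) => Promise<EvaluationContext | void> | EvaluationContext | void;

// ...while web hooks must return synchronously.
type WebBefore = (hookContext: unknown) => EvaluationContext | void;

// Caching the server "before" result means awaiting it, which forces the debounce
// hook itself to become async - something the web hook interface cannot express.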
/gemini review
Code Review
Thank you for adding the debounce hook. This is a great utility for performance-sensitive applications. I've reviewed the code and found a few issues, including some critical bugs in the caching logic and a few configuration problems. I've also suggested a refactoring to simplify the implementation and improve maintainability. Please take a look at my comments.
@beeme1mr @lukas-reining please re-review.
Adds "debounce" hook.
This is a utility "meta" hook, which can be used to effectively debounce or rate limit other hooks based on various parameters.
This can be especially useful for certain UI frameworks and SDKs that frequently re-render and re-evaluate flags (React, Angular, etc.).
The hook maintains a simple expiring cache with a fixed max size and keeps a record of recent evaluations based on a user-defined key-generation function (keySupplier).
Simply wrap your hook with the debounce hook by passing it as a constructor arg, and then configure the remaining options.
In the example below, we wrap a logging hook so that it only logs a maximum of once a minute for each flag key, no matter how many times that flag is evaluated.
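A sketch of what that could look like is below; the import path and option names such as cacheKeySupplier, timeout, and maxSize are assumptions for illustration, so check the package docs for the exact API.

import { OpenFeature, type Hook } from '@openfeature/web-sdk';
// Assumed package name/path; the actual export location is defined by this PR.
import { DebounceHook } from '@openfeature/debounce-hook';

// A simple hook that logs every flag evaluation.
const loggingHook: Hook = {
  after: (hookContext, evaluationDetails) => {
    console.log(`evaluated ${hookContext.flagKey}:`, evaluationDetails.value);
  },
};

// Wrap it so it logs at most once per minute per flag key, no matter how often
// the flag is evaluated (option names assumed).
const debouncedLoggingHook = new DebounceHook(loggingHook, {
  cacheKeySupplier: (hookContext) => hookContext.flagKey, // the default, per the review discussion
  timeout: 60_000, // 1 minute
  maxSize: 100, // fixed cache size
});

OpenFeature.addHooks(debouncedLoggingHook);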