Auto batch #72
base: master
Conversation
With this implementation, .autoBatch() works almost exactly like .live(), setting a flag on the client/endpoint to true. Since the flag never reverts, it may be best to remove the .autoBatch() method from the client, so that a single call to autoBatch doesn't unexpectedly affect all future endpoints created from it? Not too sure about the pros/cons of having it or not. I used _cacheHash() together with the autoBatchDelay, so that if the delay is changed, each delay gets its own batch, and nothing behaves unexpectedly when a batch with a different delay is already pending.
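For illustration, a minimal sketch of the pattern described above. Only `autoBatch`, `autoBatchDelay`, and `_cacheHash()` are names taken from this PR; the class shape, the batch pool, and the `_request` stub are assumptions made for the example, not the library's actual implementation:

```js
// Minimal sketch, NOT the library's real code: only `autoBatch`,
// `autoBatchDelay` and `_cacheHash` are names taken from this PR.
class Endpoint {
  constructor (client) {
    // Endpoints inherit the flag from the client, mirroring `.live()`
    this._autoBatch = client._autoBatch || false
    this.autoBatchDelay = client.autoBatchDelay || 50
    this._autoBatchPool = new Map()
  }

  autoBatch () {
    this._autoBatch = true
    return this
  }

  get (id) {
    if (!this._autoBatch) {
      return this._request([id]).then((results) => results[0])
    }

    // Key the pending batch on a hash that includes the delay, so a
    // changed delay opens a separate batch instead of joining one
    // that flushes on a different schedule
    const key = this._cacheHash(this.autoBatchDelay)
    let batch = this._autoBatchPool.get(key)

    if (!batch) {
      batch = { ids: new Set() }
      batch.promise = new Promise((resolve) => {
        setTimeout(() => {
          this._autoBatchPool.delete(key)
          resolve(this._request([...batch.ids]))
        }, this.autoBatchDelay)
      })
      this._autoBatchPool.set(key, batch)
    }

    batch.ids.add(id)
    // Each caller picks its own record out of the shared bulk response
    return batch.promise.then((results) => results.find((r) => r.id === id))
  }

  _cacheHash (delay) {
    // Stand-in: the real hash presumably mixes in more endpoint state
    return `endpoint:${delay}`
  }

  _request (ids) {
    // Stub for the HTTP layer; pretend every id resolves to an object
    return Promise.resolve(ids.map((id) => ({ id })))
  }
}

// Two gets inside the delay window share one underlying request
const endpoint = new Endpoint({ _autoBatch: true, autoBatchDelay: 20 })
Promise.all([endpoint.get(1), endpoint.get(2)]).then(console.log)
```

The delay-keyed pool is the part the comment is arguing for: without the delay in the hash, a call made after changing autoBatchDelay would join a batch that flushes on the old schedule.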
Code-wise this is pretty much done! Great stuff. 👍 I just left some tiny nitpicks.
The only thing still missing is the documentation in README.md, similar to the Caching/Error handling/Retrying parts.
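A hedged sketch of what that README section could show, modelled on the existing Caching/Error handling/Retrying parts; the require call, the `items()` endpoint, and the delay argument are assumptions for illustration, not final docs:

```js
// Hedged README-style usage sketch. Only `.autoBatch()` itself comes from
// this PR; the package name, the endpoint, and the delay value are assumed.
const client = require('gw2api-client')()

// Enable batching for all endpoints created from this client …
client.autoBatch()

// … or scope it to a single endpoint, optionally overriding the delay
// in milliseconds (whether the method accepts a delay is an assumption)
const items = client.items().autoBatch(50)

// Both ids get collected and fetched in one underlying bulk request
Promise.all([items.get(123), items.get(456)])
  .then(([first, second]) => console.log(first, second))
```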
> With this implementation, .autoBatch() works almost exactly like .live(), setting a flag on the client/endpoint to true. Since the flag never reverts, it may be best to remove the .autoBatch() method from the client, so that a single call to autoBatch doesn't unexpectedly affect all future endpoints created from it? Not too sure about the pros/cons of having it or not.
I think it's nice to have as long as it's documented as such.
Could you also check if #74 is caused by this feature? Edit: Someone else also reported this in combination with the
Met with belst to examine the issue and found the root cause. I created a test case that reproduces the issue, and am merging the fix proposed by belst's PR.
This is much less over-engineered, following the suggested implementation from our previous conversation.