set performance regression during full sync or cluster migration #4809
Comments
We should probably introduce a similar test in cluster_test.py where we fill up a single shard and then push a config that evicts a slot range.
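A rough sketch of what that test could look like; `start_cluster_node` and `push_config` are hypothetical stand-ins for whatever helpers cluster_test.py actually provides, and the slot range and latency bound are arbitrary:

```python
import pytest
import redis.asyncio as redis


@pytest.mark.asyncio
async def test_set_latency_during_slot_eviction():
    # Hypothetical helpers standing in for the real cluster_test.py fixtures.
    node = await start_cluster_node(owned_slots=(0, 16383))
    client = redis.Redis(port=node.port)

    # Fill up the single shard so evicting a slot range has real work to do.
    await client.execute_command("DEBUG", "POPULATE", "1000000")
    await client.execute_command("CONFIG", "RESETSTAT")

    # Push a config that drops ownership of a slot range; the node must now
    # evict (flush) all keys in slots 8192..16383 while serving traffic.
    await push_config(node, owned_slots=(0, 8191))

    # Keep SET traffic flowing while the eviction runs.
    for i in range(10_000):
        await client.set(f"key:{i}", "v")

    # The regression shows up as a blow-up in usec_per_call for SET.
    stats = await client.info("commandstats")
    assert stats["cmdstat_set"]["usec_per_call"] < 5.0  # illustrative bound
```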
@kostasrim I believe your first SET statistics are heavily affected by the debug populate command you run in this test. You populate 1M keys, and the statistics of …
@romange I saw your PR; you can then "drop" them and measure explicitly. +1
+1, will address.
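For reference, one way to "drop" the populate-era counters before measuring, assuming Dragonfly honors CONFIG RESETSTAT the way Redis does:

```python
import redis

r = redis.Redis(port=6379)
r.execute_command("DEBUG", "POPULATE", "1000000")

# Zero out commandstats so the populate traffic doesn't skew usec_per_call.
r.execute_command("CONFIG", "RESETSTAT")

# ... run the measured SET workload here ...

print(r.info("commandstats").get("cmdstat_set"))
```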
I used the following branch: https://github.com/dragonflydb/dragonfly/tree/ReproFlush
Attaching a partial log here.
I believe that SleepFor should be an acceptable short-term solution. The longer-term solution is to introduce more sophisticated scheduling in helio.
When using this branch to benchmark the migration (https://github.com/dragonflydb/dragonfly/pull/4821/files), which uses SleepFor, we still got high CPU (90%) and worse throughput (90K) than just yielding on every bucket in flush slots (155K at 100% CPU).
So let's yield on every bucket.
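For intuition on the trade-off: the real code is C++ fiber code in Dragonfly/helio, so what follows is only an asyncio analogy in which `await asyncio.sleep(0)` plays the role of a fiber yield:

```python
import asyncio


def flush(bucket):
    pass  # stand-in for flushing one dashtable bucket


async def flush_slots_sleepfor(buckets, batch=100):
    # SleepFor-style throttling: park for a fixed duration every batch.
    # The flusher goes idle even when no other work is pending, yet it can
    # still monopolize the thread for a whole batch while SETs queue up.
    for i, bucket in enumerate(buckets):
        flush(bucket)
        if i % batch == 0:
            await asyncio.sleep(0.001)


async def flush_slots_yield(buckets):
    # Yield-per-bucket: give the scheduler a chance to run pending commands
    # after every bucket. No forced idle time, and the added latency for
    # concurrent commands is bounded by the cost of flushing one bucket.
    for bucket in buckets:
        flush(bucket)
        await asyncio.sleep(0)  # asyncio's cooperative yield
```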
To reproduce:
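A hypothetical reproduction flow pieced together from the thread (populate 1M keys, attach a replica to trigger a full sync, drive SET traffic, read commandstats), assuming redis-py and two local Dragonfly instances on ports 6379 and 6380:

```python
import redis

master = redis.Redis(port=6379)
replica = redis.Redis(port=6380)

# Preload so the full sync has a large snapshot to ship.
master.execute_command("DEBUG", "POPULATE", "1000000")
master.execute_command("CONFIG", "RESETSTAT")

# Attaching the replica kicks off the full sync on the master.
replica.execute_command("REPLICAOF", "localhost", "6379")

# Drive SET traffic while the sync is in flight (the real benchmark
# presumably used memtier_benchmark; a loop shows the idea).
for i in range(100_000):
    master.set(f"key:{i}", "v")

# Compare usec_per_call here against a run without the replica attached.
print(master.info("commandstats")["cmdstat_set"])
```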
Results on opt-build locally:
```
'cmdstat_set': {'calls': 1448000, 'usec': 322803, 'usec_per_call': 0.22293}
```
with replication during full sync:
```
'cmdstat_set': {'calls': 4130785, 'usec': 68150755, 'usec_per_call': 16.4983}
```
That's an 80x slowdown.