Save to Disk in Bio Thread - draft #1784
base: unstable
Conversation
Codecov Report
Additional details and impacted files:
```
@@             Coverage Diff              @@
##           unstable    #1784      +/-  ##
============================================
- Coverage     71.09%   71.02%   -0.08%
============================================
  Files           123      123
  Lines         65671    65816     +145
============================================
+ Hits          46692    46748      +56
- Misses        18979    19068      +89
```
Force-pushed: 2d9c776 → 466a0ca
Force-pushed: 6bcc6be → c67c618
Force-pushed: c67c618 → 0a14eff
Force-pushed: 2b6e9a5 → ad1b8fc
Force-pushed: ad1b8fc → f1418b1
Initial comments.
```c
@@ -2649,6 +2678,12 @@ void freePendingReplDataBuf(void) {
     server.pending_repl_data.len = 0;
 }

 void receiveRDBinBioThread(connection *conn) {
     serverLog(LL_NOTICE, "Replica main thread creating Bio thread to save RDB to disk");
     connSetReadHandler(conn, NULL);
```
What happens to the write handler? Is the main thread supposed to do any writes in the meantime?
Good question; it was never set to anything on the replica's side, see:
https://github.com/valkey-io/valkey/blob/unstable/src/replication.c#L3517
^ this is executed during the sync handshake. After the sync is done and we enter steady state, we initialize it: https://github.com/valkey-io/valkey/blob/unstable/src/replication.c#L4380
src/replication.c (outdated)
```c
@@ -3918,6 +3963,10 @@ void replicationAbortSyncTransfer(void) {
     cleanupTransferResources();
 }

 void waitForDiskSaveBioThreadComplete(void) {
     while (bioPendingJobsOfType(BIO_SAVE_TO_DISK));
```
This is a blocking operation and should be avoided in the main thread, although I think we already do this in the main thread via bioDrainWorker.
We have to do this to prevent a race condition where a new sync starts before the previous Bio thread finishes.
Busy-waiting is also how it's done for io-threads, see waitForClient()
(1) Can bioDrainWorker be used?
(2) Did you consider timing out the operation and shutting down?
Good point: since we switched to blocking mode on the bio thread, we can have a deadlock here. I'll remove this, and then the timeouts of read() will guarantee that we eventually reach shouldAbortSave(), which will free the main thread.
Edit: Actually, we set a timeout, so we cannot block forever:
```c
connRecvTimeout(conn, server.repl_syncio_timeout * 1000);
```
So we are guaranteed to eventually reach shouldAbortSave(), which will free the main thread from busy-waiting.
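To make the guarantee concrete, here is a minimal POSIX sketch of why a receive timeout bounds the wait (an illustration, not the PR's code: shouldAbortSave and connRecvTimeout are the names discussed above, while the socket-level details are simplifying assumptions):

```c
#include <errno.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

/* Illustrative stand-in for the abort flag the main thread can set. */
static atomic_bool abort_requested;
static bool shouldAbortSave(void) { return atomic_load(&abort_requested); }

/* A blocking read with a receive timeout (roughly what a recv-timeout
 * setting amounts to at the socket level): read() returns after the
 * timeout elapses, so the loop is guaranteed to reach the abort check
 * even if the primary goes silent, and the waiting main thread is
 * eventually released. */
static ssize_t read_with_abort(int fd, void *buf, size_t len, int timeout_sec) {
    struct timeval tv = {.tv_sec = timeout_sec, .tv_usec = 0};
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
    for (;;) {
        ssize_t n = read(fd, buf, len);
        if (n >= 0) return n; /* data or orderly EOF */
        if (errno == EAGAIN || errno == EWOULDBLOCK) {
            if (shouldAbortSave()) return -1; /* asked to stop */
            continue; /* timed out but not aborted: keep waiting */
        }
        return -1; /* real I/O error */
    }
}
```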
Blocking the main thread forever is bad practice, even if the child thread is logically guaranteed to finish.
Maybe timing it out and crashing is the right approach.
Your suggestion to switch to bioDrain seems to be the right solution. Updated.
About blocking forever, this seems to be a risk in other places in the code too, whether they busy-wait with bioDrain or with bioPendingJobsOfType. I suggest we open an issue to address this more comprehensively.
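For context, a self-contained toy contrasting the two waiting styles discussed here (this is not bio.c itself; valkey's actual bio_mutex/bio_newjob_cond machinery is more involved):

```c
#include <pthread.h>
#include <stdatomic.h>

/* Toy model of the two waiting styles discussed above. */
static atomic_int pending_jobs;
static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

/* Busy-wait: spins on the counter, burning main-thread CPU until empty. */
static void wait_busy(void) {
    while (atomic_load(&pending_jobs) > 0)
        ; /* spin */
}

/* Drain-style wait: sleeps on a condition variable; the worker signals
 * after finishing each job, so the waiter consumes no CPU meanwhile. */
static void wait_drain(void) {
    pthread_mutex_lock(&mutex);
    while (atomic_load(&pending_jobs) > 0)
        pthread_cond_wait(&cond, &mutex);
    pthread_mutex_unlock(&mutex);
}

/* Worker side: after completing a job, decrement and signal waiters. */
static void job_done(void) {
    pthread_mutex_lock(&mutex);
    atomic_fetch_sub(&pending_jobs, 1);
    pthread_cond_signal(&cond);
    pthread_mutex_unlock(&mutex);
}
```

The drain-style wait still blocks the caller, but it sleeps instead of spinning, which is the improvement adopted here.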
```c
error = 1;
goto done;
```
Why not create an error label instead of using a variable?
I think we cannot have more than one label inside the else if (job_type == BIO_RDB_SAVE) block. If we have

```c
done:
    ...
error:
    ...
```

then error would always get executed after done (we cannot return after done, since we need to reach the end of the iteration:

```c
zfree(job);
/* Lock again before reiterating the loop, if there are no longer
 * jobs to process we'll block again in pthread_cond_wait(). */
pthread_mutex_lock(&bio_mutex[worker]);
listDelNode(bio_jobs[worker], ln);
bio_jobs_counter[job_type]--;
pthread_cond_signal(&bio_newjob_cond[worker]);
```

). I guess we could do

```c
done:
    ...
    goto really_done;
error:
    ...
really_done:
```

but I'm not sure it makes things better.
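For illustration, a minimal standalone sketch of the error-flag pattern being defended here (hypothetical names, not the actual bio.c code):

```c
#include <stdio.h>

/* Hypothetical job handler showing why the error flag works where two
 * plain labels would not: C labels fall through, so code placed after an
 * "error:" label would also run on the success path. */
static int process_job(int fail) {
    int error = 0;

    if (fail) {
        error = 1;
        goto done; /* single exit point; cleanup below runs exactly once */
    }
    /* ... success-path work ... */

done:
    if (error) {
        /* error-only handling, skipped on success */
        fprintf(stderr, "job failed\n");
    }
    /* shared per-iteration cleanup that must always run goes here,
     * mirroring the zfree(job)/relock sequence quoted above */
    return error;
}

int main(void) {
    printf("ok=%d fail=%d\n", process_job(0), process_job(1));
    return 0;
}
```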
```c
    goto done;
} else if (ret == INSPECT_BULK_PAYLOAD_PRIMARY_PING) {
    atomic_store_explicit(&server.repl_transfer_lastio, atomic_load_explicit(&server.unixtime, memory_order_relaxed), memory_order_relaxed);
    memset(buf, 0, PROTO_IOBUF_LEN);
```
Why is this being done?
This is the purpose of a ping from the primary: to refresh the last_io field in order to avoid a timeout.
```c
 * We'll restore it when the RDB is received. */
connBlock(conn);
connRecvTimeout(conn, server.repl_syncio_timeout * 1000);
do {
```
Why do we need two loops in the code?
We are basically imitating what readSyncBulkPayload does for normal replication. We try to read the bulk payload length from the primary. Ideally one pass would be enough (no need for a loop), but the primary is sometimes not fast enough in sending the length, so it periodically sends pings (newlines) until it's ready. We have to keep looping while we receive these pings.
The second loop runs until the primary finishes sending all the data (we see an EOF, or the amount we have read equals the previously announced payload length).
So the first loop is conditioned on receiving pings, the second on sync completion.
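As a rough illustration of that two-loop structure (a simplified stdio-based sketch with illustrative framing, not the PR's implementation):

```c
#include <stdio.h>
#include <stdlib.h>

/* Simplified sketch of the two-loop structure described above, using stdio
 * in place of the real connection API; names and framing are illustrative. */
static int receive_bulk_payload(FILE *conn, FILE *rdb) {
    char line[128];
    long long payload_len = -1;

    /* Loop 1: wait for the "$<len>" bulk header. A bare newline is a ping
     * from the primary and just means "not ready yet, keep waiting". */
    while (payload_len < 0) {
        if (!fgets(line, sizeof(line), conn)) return -1; /* EOF or timeout */
        if (line[0] == '\n') continue;                   /* ping: keep looping */
        if (line[0] == '$') payload_len = strtoll(line + 1, NULL, 10);
    }

    /* Loop 2: stream payload bytes to disk until we have read them all. */
    char buf[16 * 1024];
    long long received = 0;
    while (received < payload_len) {
        size_t want = sizeof(buf);
        if ((long long)want > payload_len - received) want = (size_t)(payload_len - received);
        size_t n = fread(buf, 1, want, conn);
        if (n == 0) return -1; /* premature EOF */
        fwrite(buf, 1, n, rdb);
        received += (long long)n;
    }
    return 0;
}
```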
Force-pushed: 298e925 → ca16a76
Introduction
This PR introduces a new feature that enables replicas to perform disk-based synchronization on a dedicated background thread (Bio thread). Benchmarking results demonstrate significant improvements in synchronization duration. In extreme cases, this optimization allows syncs that would have previously failed to succeed.
This is an early draft pull request, as requested by the maintainers, to allow for review of the overall structure and approach before the full implementation is completed.
Problem Statement
Some administrators prefer the disk-based full synchronization mode for replicas. This mode allows replicas to continue serving clients with data while downloading the RDB file.
Valkey's predominantly single-threaded nature creates a challenge: serving client read requests and saving data from the socket to disk are not truly concurrent operations. In practice, the replica alternates between processing client requests and replication data, leading to inefficient behavior and prolonged sync durations, especially under high load.
Proposed Solution
To address this, the solution offloads the task of downloading the RDB file from the socket to a background thread. This allows the main thread to focus exclusively on handling client read requests while the background thread handles communication with the primary.
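Schematically, the handoff described above looks like the following self-contained toy (a sketch only: the PR uses valkey's bio thread machinery through the receiveRDBinBioThread entry point quoted in the review thread, and the filename and helper here are illustrative):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Toy model of the handoff: the main thread stops watching the primary's
 * socket and hands the fd to a background thread, which performs blocking
 * download-to-disk work while the main loop keeps serving client reads. */
static void *save_rdb_to_disk(void *arg) {
    int fd = *(int *)arg;
    char buf[16 * 1024];
    FILE *rdb = fopen("temp-sync.rdb", "wb"); /* illustrative filename */
    if (!rdb) return NULL;
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0) /* blocking reads are fine off the main thread */
        fwrite(buf, 1, (size_t)n, rdb);
    fclose(rdb);
    return NULL;
}

/* Main-thread side: in valkey, connSetReadHandler(conn, NULL) detaches the
 * socket from the event loop before the background job is queued. */
static void start_background_sync(int primary_fd) {
    static int fd_arg;
    fd_arg = primary_fd;
    pthread_t tid;
    pthread_create(&tid, NULL, save_rdb_to_disk, &fd_arg);
    pthread_detach(tid);
}
```

The key point is that the blocking reads move off the main thread, so event-loop latency no longer depends on how fast the primary streams the RDB.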
Benchmarking Results
Potential for Improvement
In theory, this optimization can lead to unbounded improvement in sync duration. By eliminating competition between client read events and socket communication (i.e., events related to handling the RDB download from the primary), sync times become independent of load: the main thread handles only client reads, while the background thread focuses on the primary's RDB download events, allowing the system to perform consistently even under high load.
The full valkey-benchmark commands can be found in the appendix below.
Sync Duration by Load (times in seconds)

| Load (threads / clients) | Feature disabled | Feature enabled | Improvement |
|--------------------------|------------------|-----------------|-------------|
| 16 / 64                  | 172 s            | 33 s            | 80.8%       |
| 32 / 128                 | 436 s            | 33 s            | 92.4%       |
| 48 / 192                 | 710 s            | 33 s            | 95.3%       |
Alternative Solutions Considered
IO Threads
IO threads do not have an advantage over Bio threads in this case: the save-to-disk job is rare (most likely no more than a few executions in a replica's lifetime), and there is never more than one execution at a time. Bio threads make more sense for a single, slow, long-running operation.
io_uring
For a single connection, io_uring doesn't provide as much of a performance boost because the primary advantage comes from batching many I/O operations together to reduce syscall overhead. With just one connection, we won't have enough operations to benefit significantly from these optimizations.
Prioritizing the primary's socket in the event loop
This approach would help, but less effectively than using a Bio thread. The main thread would still need to spend time handling client read requests, which could limit the benefit. It could be more useful on smaller instance types with limited CPU cores.
Appendix: Benchmarking Setup