@@ -17,7 +17,7 @@ here influenced my views on the proposed
(a.k.a. Senders and Receivers), which is likely to become the basis of
networking in upcoming C++ standards.

- Although the analysis presented here uses the Redis communication
+ Although the analysis presented in this article uses the Redis communication
protocol for illustration, I expect it to be useful in general since
[RESP3](https://github.com/antirez/RESP3/blob/master/spec.md) shares
many similarities with other widely used protocols such as HTTP.
@@ -98,7 +98,7 @@ This is illustrated in the diagram below
|<---- offset threshold ---->|
|                            |
"+PONG\r\n:100\r\n+OK\r\n_ \r\n+PONG\r\n"
- | # Initial message
+ | # Initial offset

"+PONG\r\n:100\r\n+OK\r\n_ \r\n+PONG\r\n"
|<------>| # After 1st message
@@ -110,7 +110,7 @@ This is illustrated in the diagram below
|<--------------------->| # After 3rd message

"+PONG\r\n:100\r\n+OK\r\n_ \r\n+PONG\r\n"
- |<-------------------------->| # 4th message crosses the threashold
+ |<-------------------------->| # Threshold crossed after the 4th message

"+PONG\r\n"
| # After rotation
@@ -255,7 +255,7 @@ avoided, this is what worked for Boost.Redis
`try_send_via_dispatch` for a more aggressive optimization).

3. Coalescing of individual requests into a single payload to reduce
- the number of necessary writes on the socket,this is only
+ the number of necessary writes on the socket; this is only
possible because Redis supports pipelining (good protocols
help!).
@@ -282,8 +282,8 @@ avoided, this is what worked for Boost.Redis
`resp3::async_read` is IO-less.

Sometimes it is not possible to avoid asynchronous operations that
- complete synchronously, in the following sections we will therefore
- see how favoring throughput over fairness works in Boost.Asio.
+ complete synchronously; in the following sections we will see how to
+ favor throughput over fairness in Boost.Asio.
### Calling the continuation inline
@@ -299,7 +299,7 @@ async_read_until(socket, buffer, "\r\n", continuation);

// Immediate completions are executed in exec2 (otherwise equal to the
// version above). The completion is called inline if exec2 is the
- same // executor that is running the operation.
+ // same executor that is running the operation.
async_read_until(socket, buffer, "\r\n", bind_immediate_executor(exec2, completion));
```

@@ -388,7 +388,7 @@ Although faster, this strategy has some downsides
- Requires additional layers of complexity such as
  `bind_immediate_executor` in addition to `bind_executor`.

- - Not compliat with more strict
+ - Non-compliant with more strict
  [guidelines](https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Developing_Safety-Critical_Code)
  that prohibit reentrant code.

@@ -432,7 +432,7 @@ instructing the asynchronous operation to call the completion inline
on immediate completion. It turns out however that coroutine support
for _tail-calls_ provides a way to completely sidestep this problem.
This feature is described by
- [ Backer ] ( https://lewissbaker.github.io/2020/05/11/understanding_symmetric_transfer )
+ [Lewis Baker](https://lewissbaker.github.io/2020/05/11/understanding_symmetric_transfer)
as follows

> A tail-call is one where the current stack-frame is popped before
@@ -581,7 +581,7 @@ reentracy, allowing
[sixteen](https://github.com/NVIDIA/stdexec/blob/83cdb92d316e8b3bca1357e2cf49fc39e9bed403/include/exec/trampoline_scheduler.hpp#L52)
levels of inline calls by default. While in Boost.Asio it is possible to use
reentrancy as an optimization for corner cases, here it is made the
- _ modus operandi_ , my opinion about this has already been stated in a
+ _modus operandi_, the downsides of this approach have already been stated in a
previous section so I won't repeat them here.

Also the fact that a special scheduler is needed by specific