
Update h2 windowing algo & Http Client benchmark #388


Merged
merged 59 commits into main from canary on Apr 29, 2025

Conversation

@TingDaoK (Contributor) commented Aug 26, 2022

  • Initial build of our HTTP client benchmark.
  • Tests run against a local host: the benchmark uses our HTTP client to connect to that host and counts how many requests complete within a fixed amount of time.
  • To run:

ISSUE FOUND & FIXED

  • We updated the connection window for every DATA frame received, which significantly slows us down when data arrives as many small DATA frames.

    • We fixed it by only updating the connection window once it drops below 50% of the max (see the sketch after this list).
    • The same issue may apply to stream windows,
    • or to the padding of the connection window.
  • "Providing tiny increments to flow control in WINDOW_UPDATE frames can cause a sender to generate a large number of DATA frames." (from here)

    • Is it the client's responsibility to make sure this doesn't happen, even if the user does manual window management and sends small window updates?
  • h2: respect the initial window size setting #514
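A minimal sketch of the batching idea described above. This is not the PR's actual code: only AWS_H2_WINDOW_UPDATE_MAX is from the source, and its value here is an assumption; all other names are illustrative.

#include <stdint.h>
#include <stdio.h>

#define AWS_H2_WINDOW_UPDATE_MAX 0x7FFFFFFFUL /* assumption: largest legal window, 2^31 - 1 */

struct h2_connection_sketch {
    uint64_t window_size_self;      /* how many more bytes the peer may send us */
    uint64_t window_size_threshold; /* AWS_H2_WINDOW_UPDATE_MAX / 2 by default */
};

/* Stand-in for queuing a WINDOW_UPDATE frame on the outgoing-frames queue. */
static void s_send_connection_window_update(struct h2_connection_sketch *connection, uint64_t increment) {
    (void)connection;
    printf("WINDOW_UPDATE increment=%llu\n", (unsigned long long)increment);
}

static void s_on_data_frame_received(struct h2_connection_sketch *connection, uint32_t data_len) {
    connection->window_size_self -= data_len;

    /* Old behavior: one WINDOW_UPDATE per DATA frame, which is slow when data
     * arrives as many small frames. New behavior: do nothing until the window
     * drops below 50% of max, then top it back up with a single frame. */
    if (connection->window_size_self < connection->window_size_threshold) {
        uint64_t increment = AWS_H2_WINDOW_UPDATE_MAX - connection->window_size_self;
        s_send_connection_window_update(connection, increment);
        connection->window_size_self += increment;
    }
}

int main(void) {
    struct h2_connection_sketch connection = {
        .window_size_self = AWS_H2_WINDOW_UPDATE_MAX,
        .window_size_threshold = AWS_H2_WINDOW_UPDATE_MAX / 2,
    };
    /* Receive several DATA frames; only one WINDOW_UPDATE is sent. */
    for (int i = 0; i < 5; ++i) {
        s_on_data_frame_received(&connection, 250000000u);
    }
    return 0;
}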

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@TingDaoK changed the title from Canary to Http Client benchmark on Aug 26, 2022
@lgtm-com (bot) commented Sep 12, 2022

This pull request introduces 1 alert when merging 4cd7338 into f81ee94 - view on LGTM.com

new alerts:

  • 1 for Variable defined multiple times

@lgtm-com (bot) commented Sep 12, 2022

This pull request introduces 1 alert when merging f445de0 into f81ee94 - view on LGTM.com

new alerts:

  • 1 for Variable defined multiple times

@lgtm-com (bot) commented Sep 12, 2022

This pull request introduces 1 alert when merging f86a9f7 into f81ee94 - view on LGTM.com

new alerts:

  • 1 for Variable defined multiple times

@lgtm-com (bot) commented Sep 12, 2022

This pull request introduces 1 alert when merging ca0c7c5 into f81ee94 - view on LGTM.com

new alerts:

  • 1 for Variable defined multiple times

@TingDaoK marked this pull request as ready for review on September 12, 2022 23:53
@@ -1762,6 +1767,8 @@ static void s_handler_installed(struct aws_channel_handler *handler, struct aws_
aws_linked_list_push_back(
&connection->thread_data.outgoing_frames_queue, &connection_window_update_frame->node);
connection->thread_data.window_size_self += initial_window_update_size;
/* For automatic window management, we only update the connection window when it drops below 50% of MAX. */
connection->thread_data.window_size_self_dropped_threshold = AWS_H2_WINDOW_UPDATE_MAX / 2;
Contributor:
nit: pull this magic number into a constant?

Contributor Author (@TingDaoK):
It's derived from a constant... and it's clearer about where it comes from.
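For reference, the reviewer's suggestion would amount to something this small (hypothetical constant name; the PR kept the inline expression):

#define AWS_H2_DEFAULT_CONN_WINDOW_THRESHOLD (AWS_H2_WINDOW_UPDATE_MAX / 2)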

@@ -408,12 +408,6 @@ static int s_localhost_integ_h2_upload_stress(struct aws_allocator *allocator, v
s_tester.alloc = allocator;

size_t length = 2500000000UL;
#ifdef AWS_OS_LINUX
@TingDaoK (Contributor, Author) commented Apr 2, 2025:

Remove this, since it seems to be the flow-control window issue:

  1. We sent out too much.
  2. The initial window from the local server is small (it defaults to the magic 65535). https://httpwg.org/specs/rfc7540.html#iana-settings
  3. The local server code also updates the window as data is received.

I don't know why it matters that much on Linux compared to the other platforms. (I think we did find from the canary before that this windowing issue affects Linux more, which matches this.)

@TingDaoK changed the title from Http Client benchmark to Update windowing algo & Http Client benchmark on Apr 7, 2025
@TingDaoK changed the title from Update windowing algo & Http Client benchmark to Update h2 windowing algo & Http Client benchmark on Apr 7, 2025
@TingDaoK marked this pull request as draft on April 8, 2025 22:58
@TingDaoK marked this pull request as ready for review on April 18, 2025 20:21
* drops below the threshold.
* Default to half of the initial connection flow-control window size, which is 32767.
*/
uint32_t conn_window_size_threshold_to_send_update;
Contributor:
extremely debatable: you could just leave these out of the public options, until someone actually asks for it

Contributor Author (@TingDaoK):

I think we can keep it. It makes it easier for the documentation to explain the behavior. 😉
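As a rough illustration of keeping the option public, configuring it could look like this. Only the field name conn_window_size_threshold_to_send_update comes from this PR; the struct around it is invented for the sketch and is not the library's actual API.

#include <stdint.h>

struct h2_connection_options_sketch {
    /* Send a connection WINDOW_UPDATE only once the connection window has
     * dropped below this many bytes. 0 keeps the default: half of the
     * initial connection flow-control window size, i.e. 32767. */
    uint32_t conn_window_size_threshold_to_send_update;
};

int main(void) {
    struct h2_connection_options_sketch options = {
        .conn_window_size_threshold_to_send_update = 32767,
    };
    (void)options;
    return 0;
}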

* The client will only send the WINDOW_UPDATE frame to the server when it's valid.
* If the pending_window_update_size is too large, we will leave the excess to be sent out later.
*/
uint64_t pending_window_update_size_thread;
Contributor:
utterly trivial. It's already in a struct named thread_data, no need to repeat

Suggested change
uint64_t pending_window_update_size_thread;
uint64_t pending_window_update_size;

or

Suggested change
uint64_t pending_window_update_size_thread;
uint64_t pending_window_update_size_self;
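The "leave the excess to send out later" behavior the comment describes could be implemented roughly like this. A sketch only, not the PR's actual code; AWS_H2_WINDOW_UPDATE_MAX is assumed to be the largest legal increment (2^31 - 1), and the function name is hypothetical.

#include <stdint.h>
#include <stdio.h>

#define AWS_H2_WINDOW_UPDATE_MAX 0x7FFFFFFFUL /* assumption: 2^31 - 1 */

/* Returns the increment for the next WINDOW_UPDATE frame; any excess stays
 * pending so a later frame can carry it. */
static uint32_t s_take_window_update_to_send(uint64_t *pending_window_update_size) {
    uint64_t pending = *pending_window_update_size;
    if (pending > AWS_H2_WINDOW_UPDATE_MAX) {
        *pending_window_update_size = pending - AWS_H2_WINDOW_UPDATE_MAX;
        return (uint32_t)AWS_H2_WINDOW_UPDATE_MAX;
    }
    *pending_window_update_size = 0;
    return (uint32_t)pending;
}

int main(void) {
    uint64_t pending = 5000000000ULL; /* more than one frame can carry */
    while (pending > 0) {
        printf("WINDOW_UPDATE increment=%u\n", s_take_window_update_to_send(&pending));
    }
    return 0;
}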

@@ -150,7 +165,7 @@ struct aws_h2_connection {
bool is_cross_thread_work_task_scheduled;

/* The window_update value for `thread_data.window_size_self` that haven't applied yet */
size_t window_update_size;
uint64_t pending_window_update_size_sync;
Contributor:
trivial: it's already in synced_data.

Suggested change
uint64_t pending_window_update_size_sync;
uint64_t pending_window_update_size;

or

Suggested change
uint64_t pending_window_update_size_sync;
uint64_t pending_window_update_size_self;

@@ -150,7 +165,7 @@ struct aws_h2_connection {
bool is_cross_thread_work_task_scheduled;

/* The window_update value for `thread_data.window_size_self` that haven't applied yet */
Contributor:
Suggested change
/* The window_update value for `thread_data.window_size_self` that haven't applied yet */
/* Value for `thread_data.pending_window_update_size` that we haven't applied yet */

* The client will only send the WINDOW_UPDATE frame to the server when it's valid.
* If the pending_window_update_size is too large, we will leave the excess to be sent out later.
*/
uint64_t pending_window_update_size_thread;
Contributor:
thread

@graebm (Contributor) left a comment:

All my feedback was style stuff. Looks good overall.

@TingDaoK merged commit ca7e0e2 into main on Apr 29, 2025
42 checks passed
@TingDaoK deleted the canary branch on April 29, 2025 21:52