
Add {Response,DataSink}::is_alive #1956


Closed · wants to merge 4 commits

Conversation

@ochafik commented Oct 4, 2024

This makes it possible to detect whether the client connection has been lost.

Context: #1952 / ggml-org/llama.cpp#9679

As suggested by @ngxson, since is_writable / select_write may be a bit heavyweight, I first tried using just select_read (given it runs in a context where every other thread is done reading), but that didn't do the trick on its own, so I settled for is_socket_alive (which uses select_read + read_socket); it seems to work.
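
For reference, a minimal sketch of the kind of check described above (select for readability, then a peek read), written as a standalone POSIX helper. The name socket_seems_alive and the exact flags are illustrative; this is not cpp-httplib's actual implementation.

    #include <sys/select.h>
    #include <sys/socket.h>

    // Returns true while the peer end of `sock` still appears to be open.
    static bool socket_seems_alive(int sock) {
      fd_set fds;
      FD_ZERO(&fds);
      FD_SET(sock, &fds);
      timeval tv{}; // zero timeout: poll without blocking

      int ret = select(sock + 1, &fds, nullptr, nullptr, &tv);
      if (ret < 0) return false; // select error: assume the connection is gone
      if (ret == 0) return true; // nothing to read: the peer has not closed the socket

      // The socket is readable: peek one byte without consuming it.
      // recv() returning 0 means the peer performed an orderly shutdown.
      char buf;
      return recv(sock, &buf, 1, MSG_PEEK) > 0;
    }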

Update: I eventually realized that having is_alive on DataSink isn't enough, so I moved it to Response itself (for lack of a better place?). In the llama.cpp use case we need to test for liveness before the status / headers are even written back. Added a tiny test to exercise this.

Tested (no new failures):

    cmake -B build -DHTTPLIB_TEST=1 -DHTTPLIB_REQUIRE_OPENSSL=0 -DHTTPLIB_COMPILE=1 -DHTTPLIB_REQUIRE_ZLIB=1 -DHTTPLIB_REQUIRE_BROTLI=1 && cmake --build build -j && ctest --test-dir build -j

@ochafik ochafik marked this pull request as ready for review October 4, 2024 18:30
@ochafik ochafik changed the title Add {Stream,DataSink}::is_alive Add {Response,DataSink}::is_alive Oct 5, 2024
@ochafik (Author) commented Oct 10, 2024

@yhirose What do you think about this approach?

@yhirose (Owner) commented Oct 10, 2024

@ochafik thanks for the pull request. I took a look at the change, but I am not comfortable exposing such a method in Response.

Also, your test code appears to work, but it is actually very dangerous code.

If I make the following change in Server::process_request in httplib.h to allocate the response object on the heap instead of the stack, the hidden problem is revealed.

  // Response res; // Original code on stack
  auto resp = detail::make_unique<Response>(); // on heap
  Response& res = *resp;

We get a crash when res.is_alive() is called the second time in the loop, since res is already gone by then.

        while (count > 0 && res.is_alive()) {

I added debug print statements as below, and tested.

        std::cout << "enter loop" << std::endl;
        while (count > 0 && res.is_alive()) {
          this_thread::sleep_for(chrono::milliseconds(10));
        }
        std::cout << "leave loop" << std::endl;

Here is the result.

~/Projects/cpp-httplib/test$ make test && ./test --gtest_filter="*ClientCloseDetectionOnResponse*"
clang++ -o test -I.. -g -std=c++11 -I. -Wall -Wextra -Wtype-limits -Wconversion -Wshadow  test.cc include_httplib.cc gtest/gtest-all.cc gtest/gtest_main.cc -DCPPHTTPLIB_OPENSSL_SUPPORT -I/opt/homebrew/opt/openssl@3/include -L/opt/homebrew/opt/openssl@3/lib -lssl -lcrypto -DCPPHTTPLIB_USE_CERTS_FROM_MACOSX_KEYCHAIN -framework CoreFoundation -framework Security -DCPPHTTPLIB_ZLIB_SUPPORT -lz -DCPPHTTPLIB_BROTLI_SUPPORT -I/opt/homebrew/opt/brotli/include -L/opt/homebrew/opt/brotli/lib -lbrotlicommon -lbrotlienc -lbrotlidec -pthread -lcurl
Running main() from gtest/gtest_main.cc
Note: Google Test filter = *ClientCloseDetectionOnResponse*
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from LongPollingTest
[ RUN      ] LongPollingTest.ClientCloseDetectionOnResponse
enter loop
libc++abi: terminating due to uncaught exception of type std::__1::bad_function_call: std::exception
Abort trap: 6
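
The terminating std::bad_function_call above is the symptom of a use-after-free: once the Response is heap-allocated and freed, the polling loop keeps calling is_alive() through a dangling reference, presumably hitting a std::function-backed member that no longer exists. Below is a minimal standalone sketch of that symptom, using a hypothetical FakeResponse rather than the library's internals (here the checker is simply cleared instead of destroyed, so the behavior stays well defined):

    #include <functional>
    #include <iostream>

    // Hypothetical stand-in for a Response carrying a std::function-based liveness hook.
    struct FakeResponse {
      std::function<bool()> is_alive_checker;
      bool is_alive() const { return is_alive_checker(); } // throws if the checker is empty
    };

    int main() {
      FakeResponse res;
      res.is_alive_checker = [] { return true; };
      std::cout << "first check: " << res.is_alive() << std::endl; // fine while the hook is set

      res.is_alive_checker = nullptr; // the server-side state is torn down
      try {
        res.is_alive(); // what the long-polling loop does next
      } catch (const std::bad_function_call &) {
        std::cout << "std::bad_function_call, matching the output above" << std::endl;
      }
    }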

So at this point, this pull request is not acceptable...

@yhirose (Owner) commented Oct 20, 2024

I'll close it for now.
