Add info on garbage collection of async clients #2388
Changes being requested
When using multiple AsyncOpenAI clients to send records to a local vLLM server, we ran into an issue with garbage collection after switching from the sync client to the async client. We started seeing errors indicating that the event loop was being closed before the client could be closed, and at the same time we saw degraded performance. When we switched to `with` blocks for resource management, the problem went away (PR: https://github.com/apache/beam/pull/35053/files). It seems like relying on normal garbage collection calling https://github.com/openai/openai-python/blob/f588695f77aad9279a355f5f483d8debf92b46ed/src/openai/_base_client.py#L1318C21-L1318C37 can cause a meaningful performance issue, because the event loop can be destroyed before the client cleanup finishes. It's not clear to me exactly what caused the slowness.
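For context, here is a minimal sketch of the pattern that resolved it for us, using `async with` (the async-client equivalent of a `with` block). The endpoint URL, API key, and model name are placeholders assumed for a local vLLM setup; the important part is that the client is closed while the event loop is still running:

```python
import asyncio

from openai import AsyncOpenAI


async def main() -> None:
    # Using the client as an async context manager ensures cleanup happens
    # while the event loop is still alive, instead of relying on garbage
    # collection to close the client later.
    async with AsyncOpenAI(
        base_url="http://localhost:8000/v1",  # assumed local vLLM endpoint
        api_key="EMPTY",                      # placeholder; vLLM typically ignores it
    ) as client:
        response = await client.chat.completions.create(
            model="my-model",  # placeholder model name
            messages=[{"role": "user", "content": "Hello"}],
        )
        print(response.choices[0].message.content)
    # The client's HTTP resources are released here, before asyncio.run()
    # tears down the event loop.


asyncio.run(main())
```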
Regardless, it would be nice to document this, since switching clients did not work without some minor changes.