
Conversation

@karolinepauls

Summary

Creating a client takes 25ms of CPU time. This is a lot. Most of that time is spent initialising the SSL context. Since the context is private anyway, we can cache it. In order to avoid memory problems and privacy issues, only the default context is cached.
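A minimal sketch of the caching idea, assuming a `functools.lru_cache` over `ssl.create_default_context` (the actual change in this PR may be structured differently inside httpx):

```python
import functools
import ssl


@functools.lru_cache(maxsize=1)
def default_ssl_context() -> ssl.SSLContext:
    """Build the expensive default context once and reuse it.

    Only the zero-argument default is cached; any custom CA bundle or
    client certificate would still get a fresh, private context.
    """
    return ssl.create_default_context()


# The second call is effectively free: it returns the same object.
assert default_ssl_context() is default_ssl_context()
```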

Prior discussion, when the problem was left for later (2022): #2298 (comment). Personally, I find that solving this problem once in httpx is a comparable amount of work to solving it separately in every application.

Before:

In [2]: timeit httpx.AsyncClient()
26.6 ms ± 364 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [3]: timeit httpx.Client()
26.8 ms ± 268 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

After:

In [2]: timeit httpx.AsyncClient()
46.2 µs ± 224 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

In [3]: timeit httpx.Client()
46.5 µs ± 1.14 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

CPU profile (before only): [profile image]

Checklist

  • I understand that this PR may be closed in case there was no previous discussion. (This doesn't apply to typos!)
  • I've added a test for each change that was introduced, and I tried as much as possible to make a single atomic change.
  • I've updated the documentation accordingly.

@karolinepauls karolinepauls force-pushed the transport-cache-default-ssl-context branch 2 times, most recently from 684e579 to 19afa7a Compare January 15, 2025 02:40
@karolinepauls karolinepauls force-pushed the transport-cache-default-ssl-context branch from 19afa7a to 953d715 Compare May 17, 2025 18:14
@deathaxe

deathaxe commented Jul 6, 2025

Maybe a caller should prefer to create a global Client instance for all its requests, rather than relying on arbitrary, implicit caching of internals.

@karolinepauls
Author

Maybe a caller should prefer to create a global Client instance for all its requests, rather than relying on arbitrary, implicit caching of internals.

The problem is that in practice this simply doesn't happen - instead, CPUs spin, more instances get created, and energy is burned.

@deathaxe

deathaxe commented Jul 7, 2025

Well, then probably reconsider your application's architecture.

Also note that you can simply create your own global SSL context and pass it to each client instance on creation. That would at least be far more explicit and leave the caller in control.
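That explicit pattern might look like the following sketch. httpx does accept an `ssl.SSLContext` through its `verify` parameter; the client-construction line is left as a comment so the example stays dependency-free:

```python
import ssl

# Pay the ~20 ms context-creation cost exactly once, at import time.
SHARED_SSL_CONTEXT = ssl.create_default_context()

# Each client then reuses it explicitly (httpx accepts an SSLContext
# via the `verify` argument):
#
#   import httpx
#   client = httpx.Client(verify=SHARED_SSL_CONTEXT)

# The default context verifies certificates and hostnames.
assert SHARED_SSL_CONTEXT.verify_mode == ssl.CERT_REQUIRED
assert SHARED_SSL_CONTEXT.check_hostname
```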

@karolinepauls
Author

Well, then probably reconsider your application's architecture.

In the last application I worked on, I solved this problem.

The argument I'm making is about the ethics of engineering - with default settings, this library causes 100% CPU usage for ~20 ms per client created. Given Python's popularity and the typical quality of engineering, this could amount to gigawatt-hours of contribution to global warming every day.

@deathaxe

deathaxe commented Jul 7, 2025

Well, those are arguments that could start a never-ending debate about software development in general. I agree that proper choice of programming languages, architectures, and smart development decisions could save the earth 80% of software-related CO2, but the first step would be to abandon all the inefficient technologies that drive the modern web, such as bloated text-based data transfers driven by inefficient, resource-hungry scripting languages.

However, those are still not arguments against explicitness.

A Client instance should always be treated as an isolated object that by default shares no resources with other instances, because it is completely unclear to a library such as httpx in which context those clients are used. They may require different SSL contexts for security or isolation reasons.

For those who just want to use httpx.get() and httpx.put(), it would probably be more straightforward for httpx to provide a shared client object which lives forever, instead of creating a new one for each function call.
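That suggestion could be sketched as a lazily created, module-level client. The function name here is hypothetical, not part of the httpx API, and a plain object stands in for `httpx.Client()` so the sketch stays dependency-free:

```python
import functools


@functools.lru_cache(maxsize=1)
def get_shared_client():
    """Create the process-wide client on first use, then reuse it.

    Hypothetical sketch: real code would return httpx.Client() here,
    so the SSL context is built exactly once per process.
    """
    return object()  # stand-in for httpx.Client()


# Every caller gets the same instance.
assert get_shared_client() is get_shared_client()
```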
