Leak of connections to Redis #279

Comments
I suspect it's this. Try updating to Python 3.8+ and please report back.
I tried with Python 3.8 and 3.9.
OK, if you could work to isolate what's going on there, that would help too.
Investigated this together with @jberci during the past few afternoons, because we were facing the same issue. tl;dr: This is an issue with a bad implementation of connection acquisition in aioredis.

The OP is correct that the issue was introduced in 3656d87. However, the leak does not occur only when closing the connection without accepting, but in general. It's a timing issue, and closing the connection quickly makes it more likely to appear, because the coroutine waiting on Redis gets cancelled sooner.

The cause of the issue, in short: after creating the connection, a cancellation can arrive before the connection is stored anywhere it could later be cleaned up from. There are two problems in the acquisition path:

1. When the pending connection attempt is cancelled, the newly created connection is not closed. However, even after fixing that and ensuring the connection is closed on cancellation, the leak still occurs.
2. After obtaining the connection, there is a second window in which a cancellation drops the connection without releasing it.

After these two fixes, the issue can no longer be reproduced. Sadly, the issue cannot be easily fixed from the channels-redis side; the fix has to land in aioredis itself.
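To make the failure mode concrete, here is a minimal asyncio sketch of the pattern described above (this is not aioredis' actual code; `open_conn` is a hypothetical stand-in for establishing a Redis connection). A cancellation that lands while the connection is being created abandons the socket; shielding the creation step and closing the connection on cancellation avoids the leak:

```python
import asyncio


async def open_conn() -> asyncio.StreamWriter:
    # Hypothetical stand-in for establishing a Redis connection.
    _reader, writer = await asyncio.open_connection("localhost", 6379)
    return writer


async def acquire_leaky() -> asyncio.StreamWriter:
    # Leak pattern: if the awaiting task is cancelled while (or just
    # after) the socket is opened, nobody ever closes it.
    return await open_conn()


async def acquire_safe() -> asyncio.StreamWriter:
    # Run the creation step as a separate task and shield it, so that
    # cancelling the caller cannot abandon a half-created connection;
    # instead, close the connection once creation finishes.
    creation = asyncio.ensure_future(open_conn())
    try:
        return await asyncio.shield(creation)
    except asyncio.CancelledError:
        def _close(task: asyncio.Task) -> None:
            if not task.cancelled() and task.exception() is None:
                task.result().close()

        creation.add_done_callback(_close)
        raise
```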
Hi @jureslak, thanks for the write-up there. I think updating aioredis is the way to go, no? Would you be up for making a PR there?
Hi! We have the same problem with our app. To prevent this, we have to restart daphne periodically. Any progress here?
Could you provide a working monkey-patching example for this? Thanks in advance.
There's WIP going on now to address the aioredis update.
Even with the fixes above, we're still leaking Redis connections under heavier load. This also occurs with 3.4.0 with @jureslak's patch to aioredis applied. "Heavier load" here means hundreds to thousands of new connections per second. With 10k total connections and an arrival rate of 2000/s on an 8-core Hetzner CX51 instance, I'm leaking up to 8k Redis connections. As the cause is most likely unrelated, should I file a new issue?

Update: checked that this only occurs with `channels_redis.core.RedisChannelLayer`.
Thanks @mikaraunio. At first pass it seems related, no? If you can help pin down the leak, that would be handy. Planning to cut back this way after Django 4.1a1 is out the door.
Yes @carltongibson, agree: could be related, and in any case I don't see the need for multiple Redis connection leak issues. I have created a demo project with a single Daphne worker at: https://github.com/mikaraunio/channels-redis-leak

When creating 500 client connections in one second, this leaks around 300 Redis connections on my test system. Increasing the arrival rate and client count makes the leak worse. I have been unable to duplicate this with the pub/sub layer.
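For anyone trying to reproduce this, here is a sketch of the kind of load test being described, using the third-party `websockets` client against a locally running Daphne (the `ws://localhost:8000/ws/` endpoint is an assumption, not necessarily the demo project's actual URL):

```python
import asyncio

import websockets  # pip install websockets


async def one_client(i: int) -> None:
    try:
        # Open a websocket connection, hold it briefly, then close it.
        async with websockets.connect("ws://localhost:8000/ws/"):
            await asyncio.sleep(1)
    except Exception as exc:
        print(f"client {i}: {exc!r}")


async def main(total: int = 500) -> None:
    # Spread `total` connection attempts over roughly one second.
    tasks = []
    for i in range(total):
        tasks.append(asyncio.create_task(one_client(i)))
        await asyncio.sleep(1 / total)
    await asyncio.gather(*tasks)


asyncio.run(main())
```

Comparing `redis-cli info clients` before and after a run shows how many server-side connections were left behind.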
I encountered a similar memory leak with RedisChannelLayer on our production server, running Python 3.9.6. Memory usage slowly goes up, and we have to restart the processes periodically to release memory. There is also a lot of warning output in the logs. It looks similar to #212 as well.
Changed the backend to the pub/sub layer (`channels_redis.pubsub.RedisPubSubChannelLayer`) instead.
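If the pub/sub layer is indeed the workaround being described, the settings change would look roughly like this (a sketch of a standard Channels configuration, not taken from this thread):

```python
# settings.py
CHANNEL_LAYERS = {
    "default": {
        # Default backend, affected by the leak reported here:
        # "BACKEND": "channels_redis.core.RedisChannelLayer",
        "BACKEND": "channels_redis.pubsub.RedisPubSubChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
        },
    },
}
```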
We have an issue with `daphne` opening connections to redis and (sometimes) never closing them. After a while, daphne cannot accept new connections and raises either `OSError(24, 'Too many open files')` or `MaxClientsError('ERR max number of clients reached')`.

This leak sometimes occurs when a `WebsocketConsumer` closes the connection before accepting it (i.e. it rejects the connection).

I created a micro project to help reproduce this issue: https://github.com/simonbru/channels-redis-issue

I can reproduce the leak on `channels-redis>=3.3.0` but not on `channels-redis==3.2.0`. To be more specific, I can reproduce the leak starting from this commit: 3656d87

Environment

Python dependencies

Configuration: `channels_redis.core.RedisChannelLayer` as the channel layer backend.
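For reference, a minimal consumer of the kind described above, which rejects every connection before accepting it (a sketch; the actual reproduction code is in the linked micro project):

```python
from channels.generic.websocket import WebsocketConsumer


class RejectingConsumer(WebsocketConsumer):
    def connect(self):
        # Closing before accept() rejects the websocket handshake.
        # On the affected channels-redis versions, this path could
        # leave a Redis connection open.
        self.close()
```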