Connection leakage #15
Comments
Did you manage to work around this issue?
See the linked issue (#16). I believe that connection management remains necessary. Those fixes haven't been incorporated upstream, so I have my own fork that contains them along with a number of other updates. You're welcome to use my repo or extract the improvements at github.com/vgough/grpc-proxy.
Thank you so much!
Hi @vgough, your fork is unusable because your repo is vendored, which leads to two problems. First: Second:
@tsony-tsonev Thanks. I've updated my repo to use Go modules rather than dep vendoring.
@vgough I have read your fork regarding the memory leak, but I have no idea when I should release those connections (https://github.com/vgough/grpc-proxy/blob/master/proxy/examples_test.go). Can you provide some examples of [Release(ctx context.Context, conn *grpc.ClientConn)]? Thank you very much.
@gakkiyomi There is more documentation in the readme: https://github.com/vgough/grpc-proxy/blob/master/proxy/README.md#type-streamdirector
I think that normally you would be implementing a director, not calling one. The handler calls Release on the director when it is done proxying a call: https://github.com/vgough/grpc-proxy/blob/master/proxy/handler.go#L71
When your code receives a Release call, it has the opportunity to release resources used by the director.
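For illustration, here is a minimal sketch of a director, assuming the fork's StreamDirector interface exposes Connect and Release methods with the signatures described in the README above. The type name dialingDirector and the single fixed backend address are hypothetical, not part of the library. Since this director dials a fresh backend connection per call, Release simply closes it:

```go
package proxy

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/metadata"
)

// dialingDirector is a hypothetical example: it dials a fresh backend
// connection for every proxied call and closes it in Release, which the
// handler invokes once it has finished proxying the call.
type dialingDirector struct {
	backendAddr string // assumption: a single fixed backend
}

func (d *dialingDirector) Connect(ctx context.Context, method string) (context.Context, *grpc.ClientConn, error) {
	// Forward the incoming request metadata to the backend.
	md, _ := metadata.FromIncomingContext(ctx)
	outCtx := metadata.NewOutgoingContext(ctx, md.Copy())
	conn, err := grpc.DialContext(ctx, d.backendAddr,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	return outCtx, conn, err
}

func (d *dialingDirector) Release(ctx context.Context, conn *grpc.ClientConn) {
	// Connect dials per call, so releasing just means closing. A pooling
	// director would instead return the connection to its pool here.
	if conn != nil {
		conn.Close()
	}
}
```

A director that caches connections across calls would keep a reference count in Connect and only close in Release once the count drops to zero.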
@vgough I truly appreciate your timely help.
Guys, just want to inform you of another bug if you are using this in production. The memory-leak fix works well; in my case I'm just closing the connections in the Release function of the Director. But a couple of months ago I had problems with bidirectional streaming and had to fork the repo and fix it. The proxy forwards messages on the bidi stream successfully from the client to the proxy, then to the backend service, and in the other direction, without problems. The bug occurs when the backend service is restarted: the client then stays connected to the proxy and thinks it still has a connection to the backend. This can be seen if we set a 1-second keepalive for the client connection.
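For reference, a sketch of the keepalive diagnostic mentioned above, assuming a plain grpc-go client; the function name and address are hypothetical. With an aggressive keepalive, a dead proxy-to-backend hop surfaces as a transport failure on the client instead of a silently hung stream:

```go
package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/keepalive"
)

// dialWithAggressiveKeepalive (hypothetical helper) dials the proxy with a
// short keepalive so a broken connection is detected quickly. Note that
// grpc-go enforces a minimum client keepalive interval and raises a Time
// below 10s to 10s, so a literal 1-second ping may be adjusted upward.
func dialWithAggressiveKeepalive(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithKeepaliveParams(keepalive.ClientParameters{
			Time:                time.Second, // ping after 1s of inactivity
			Timeout:             time.Second, // treat the link as dead if no ack within 1s
			PermitWithoutStream: true,        // keep pinging even with no active RPCs
		}))
}
```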
@tsony-tsonev How did you manage to fix this bug? By closing the client connection or by redialing the backend?
@SergeyNarozhny I don't remember exactly how, but you can check out our fork: https://github.com/taxime-hq/grpc-proxy
Using grpc-proxy with a simple test server, I see every connection from the proxy to the endpoint leaking a file descriptor.
I've narrowed it down to two cases in handler.handler, both of which involve not closing backendConn:
1. If the client goes away unexpectedly, grpc.NewClientStream can fail and the backendConn leaks; backendConn needs to be closed in that case.
2. Even on the success path, backendConn needs to be closed at some point. Experimentally, I found that closing backendConn after serverStream.SetTrailer eliminated the leaks I was seeing.
Without those two fixes, I see connections pile up within the proxy until it runs out of file descriptors and fails subsequent requests.
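A minimal sketch of how both fixes can be applied, assuming the handler has roughly the shape of handler.handler in this repo and that the director dials a fresh connection per call (a shared or pooled connection would need Release-style handling instead). The type and variable names here are illustrative, not the repo's actual code; a single deferred Close covers both leak cases:

```go
package proxy

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Assumed shapes, loosely following handler.go in this repo.
type StreamDirector func(ctx context.Context, fullMethodName string) (context.Context, *grpc.ClientConn, error)

var clientStreamDescForProxying = &grpc.StreamDesc{
	ServerStreams: true,
	ClientStreams: true,
}

type handler struct {
	director StreamDirector
}

func (s *handler) handler(srv interface{}, serverStream grpc.ServerStream) error {
	fullMethodName, ok := grpc.MethodFromServerStream(serverStream)
	if !ok {
		return status.Errorf(codes.Internal, "full method name not found in server stream")
	}
	outgoingCtx, backendConn, err := s.director(serverStream.Context(), fullMethodName)
	if err != nil {
		return err
	}
	// Fix for both leaks: close backendConn on every exit path. This covers
	// the case where grpc.NewClientStream fails (case 1) and, on success,
	// runs after serverStream.SetTrailer (case 2). Assumes the director
	// dials a fresh connection per call.
	defer backendConn.Close()

	clientStream, err := grpc.NewClientStream(outgoingCtx, clientStreamDescForProxying, backendConn, fullMethodName)
	if err != nil {
		return err
	}
	// ... bidirectional message forwarding elided ...
	serverStream.SetTrailer(clientStream.Trailer())
	return nil
}
```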