Kind does not work with mitmproxy when using port redirection #3867
Those rules take all traffic that is not sent by the mitmproxy process and is directed to port 443, and redirect it to port 8080, which is where the mitmproxy process listens. Kubernetes networking is more complex than that: since you are using Ingress and Services, you need to check in your config whether you are using port 443 and from where to where traffic flows. This is not easy to answer, as it requires understanding how Kubernetes networking works.
The nginx ingress controller listens on port 443. That's exactly the point of those two rules: to let mitmproxy handle incoming packets that go to port 443 before the nginx controller gets to process them, and to also let it handle packets coming out of the nginx ingress controller before they are returned to the client. And that's not happening.

What happens when no kind cluster is running and the nginx ingress controller isn't involved is that packets sent to port 443 by an application are routed to mitmproxy first, which, after processing them (logging their contents), sends their payload on to the actual application they were intended for, re-encrypted with a new session key. When packets are sent back from the server application, they first reach mitmproxy, which decrypts them, logs their contents, and then sends their payload, re-encrypted, back to the original client.

If everything were set up the same way when a kind cluster is running, I'd expect the same to happen: packets sent by a client to port 443 would be intercepted by mitmproxy, which would forward them to the nginx ingress controller after re-encrypting their payload with a different key negotiated with the controller, and, when the nginx ingress controller sends answer packets back to mitmproxy, it would re-encrypt those and return them to the original client. But this is not what happens. In order to understand why, I need to understand what exactly kind's networking does to the routing tables on the host. That's what I was asking for. I believe the developers of kind's networking are in a much better position to answer this question than I am.
What I'm trying to say is that the Kubernetes networking model is different, so you need to understand it to know where and which rules you need; it's not that there is a one-to-one conversion: https://kubernetes.io/docs/concepts/cluster-administration/networking/
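For instance, you can check where port 443 actually terminates with something like the following (a sketch; the namespace, Service name, and container name assume a stock ingress-nginx install on a cluster named `kind`):

```sh
# Where does the ingress controller's Service expose 443?
kubectl -n ingress-nginx get svc ingress-nginx-controller

# Which host ports does the kind node container actually publish?
docker port kind-control-plane
```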
kind does not set up routing tables on the host machine; that would be docker/podman/nerdctl (and it doesn't necessarily work that way, see https://github.com/moby/vpnkit)
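For example (assuming Docker as the provider), you can see what the container runtime itself installs on the host:

```sh
# Docker keeps its published-port DNAT rules in its own chain of the nat table
sudo iptables -t nat -L DOCKER -n -v

# Ports may also be served by the userland docker-proxy instead of DNAT
ps aux | grep '[d]ocker-proxy'
```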
Mitmproxy is a tool meant for debugging - just like kind. It can decrypt TLS traffic on the fly and forward it re-encrypted, and it shows details about the certificates associated with an encrypted connection and about the payload of HTTPS requests and responses.
Mitmproxy supports a mode of operation where you manipulate routing to forward certain traffic to mitmproxy, depending on the originating user.
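The rules in question look roughly like this (a sketch following the mitmproxy transparent-mode howto; `mitmproxyuser` stands in for the dedicated account the proxy runs as):

```sh
# Redirect outgoing TCP traffic to ports 80 and 443 to mitmproxy's
# listener on 8080, except traffic generated by mitmproxy's own user
sudo iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner mitmproxyuser \
    --dport 80 -j REDIRECT --to-port 8080
sudo iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner mitmproxyuser \
    --dport 443 -j REDIRECT --to-port 8080
```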
Then you start mitmproxy in transparent mode as that user and run your request. Mitmproxy will intercept and forward the traffic, decrypt both the request and the response, and also show details about the server certificate.
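Concretely, that step looks roughly like this (again per the mitmproxy howto):

```sh
# Run mitmproxy in transparent mode as the dedicated user, so its own
# outgoing connections are not caught by the redirect rules above
sudo -u mitmproxyuser -H bash -c 'mitmproxy --mode transparent --showhost'
```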
I tried this out with a locally running echo server, and it works great. I also used Wireshark to capture the traffic between the client and mitmproxy and between mitmproxy and the echo server, set up both the client and the server to save their TLS session keys, and was able to decrypt and read both dumps.
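For reference, one common way to save the session keys on the client side looks like this (a sketch; it assumes a client such as curl built with key-log support):

```sh
# Most TLS clients honor this variable and append per-session secrets to the
# file; pointing Wireshark at it (TLS -> (Pre)-Master-Secret log filename)
# decrypts the capture
export SSLKEYLOGFILE="$PWD/tls-keys.log"
curl -v https://localhost/
```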
However, when the server application is deployed behind an nginx ingress on a local kind cluster, this no longer works. No packet seems to go out of mitmproxy to the nginx ingress (or at least none is received by the nginx ingress controller pod). The packets that leave and return to the client do look like a regular TLS handshake, albeit one that is never answered by the server, while the packets that go in and out of mitmproxy look like gibberish.
If I do the same experiment with an echo server outside of the kind cluster, running on a port other than 443, I again get the expected behavior from mitmproxy - i.e. the request and response are correctly intercepted, decrypted, and forwarded.
My suspicion is that this is due to kind manipulating routing in a way that's incompatible with mitmproxy. But I'm not knowledgeable enough to investigate this on my own. Can someone explain to me, at least at a high level, why this is happening, and how I can go forward investigating this? Is there additional information that would be useful to be provided?
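For example, would comparing the host's NAT table before and after cluster creation be a sensible starting point? Something like this (assuming Docker as the provider):

```sh
# Snapshot the nat table, create the cluster, snapshot again, compare
sudo iptables-save -t nat > nat-before.txt
kind create cluster
sudo iptables-save -t nat > nat-after.txt
diff nat-before.txt nat-after.txt
```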