All requests that reach pods have the load balancer's IP #31
Comments
As stated in this comment:
I guess you're using k3s as your Kubernetes distribution and probably Traefik as your cluster router (the default router for k3s). If you're using Traefik, here's how you get the original client IP address (for other routers the settings may differ a bit, but the same logic still applies):
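A minimal sketch of that setting, assuming k3s' bundled Traefik Helm chart: a HelmChartConfig that overrides the service spec (the file path is k3s' auto-deploy manifests directory):

```yaml
# /var/lib/rancher/k3s/server/manifests/traefik-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    service:
      spec:
        # Keep the original client IP; traffic is no longer SNATed,
        # but only nodes running a Traefik pod will answer.
        externalTrafficPolicy: Local
```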
For more details, have a look at this article: K3S Thing: Make Traefik Forward Real Client IP
As per the guide's third step, I disabled Traefik and am using the NGINX Ingress Controller instead.
Ah sorry, I didn't read that guide. But you can still use my answer to fix your problem. Just make sure that externalTrafficPolicy is set to Local.
I am running one node. I set externalTrafficPolicy to Local.
@Taymindis Where and how did you set the externalTrafficPolicy to Local? If you do it at runtime you have to restart Traefik. And how do you check it? With the whoami container from containous?
Hi @mamiu, I am running bare-metal k3s without Traefik. Here are my steps:
```yaml
service:
  enabled: true
  # -- If enabled, adds an appProtocol option to the Kubernetes service. The appProtocol field replaces the annotations that were
  # used for setting a backend protocol. Here is an example for AWS: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  # It allows choosing the protocol for each backend specified in the Kubernetes service.
  # See the following GitHub issue for more details about the purpose: https://github.com/kubernetes/kubernetes/issues/40244
  # Will be ignored for Kubernetes versions older than 1.20
  ##
  appProtocol: true
  annotations: {}
  labels: {}
  # clusterIP: ""
  # -- List of IP addresses at which the controller services are available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []
  # loadBalancerIP: ""
  loadBalancerSourceRanges: []
  enableHttp: true
  enableHttps: true
  ## Set external traffic policy to "Local" to preserve the source IP on providers supporting it.
  ## Ref: https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer
  externalTrafficPolicy: "Local"
```

And I have an app pod which echoes the client IP back when we hit a specific URL. Please note that I am not using the whoami container.
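For anyone who wants to reproduce that check, a minimal sketch of such an echo app, using the traefik/whoami image (all names here are illustrative), could look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami            # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami   # echoes the remote address and request headers
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local    # preserve the client source IP
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 80
```

With k3s defaults, a RemoteAddr in the 10.42.0.0/16 pod range indicates the request was SNATed on the way in.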
Sorry, but that is simply not true: Klipper LB configures iptables to do exactly that (the source address is rewritten before the packet reaches the pod). The whole idea of getting the router to run on the same node as the end service does resolve this, but it defeats the purpose of load balancing quite a bit...
@jeroenrnl I 100% agree with that! But I haven't found a good alternative solution yet.
@Taymindis Your router (in your case nginx) will get the correct client IP address, but then has to rewrite the source address so that your app pod (which echoes the client IP address) sends the traffic back through the router. The router can't forward the request with the original client IP as the source, because then your app would try to respond to the client directly, skipping the extra step through the router, and that's not how networking works (for more details watch this YouTube video). To solve this issue, load balancers, routers, reverse proxies, etc. use a special HTTP header called X-Forwarded-For.
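As an illustration (all addresses are made up), the request that finally reaches the app pod carries the original client address only in these headers, while the TCP source address the pod sees is the router's pod IP:

```
GET / HTTP/1.1
Host: example.com
X-Forwarded-For: 203.0.113.7
X-Real-IP: 203.0.113.7
```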
The PROXY protocol as defined on haproxy.org provides a solution to this issue. It requires that both Klipper LB and the backend of the service be compatible with that protocol. Maybe this could be implemented in Klipper LB behind a togglable flag?
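For example, if the router behind Klipper LB were Traefik, accepting the PROXY protocol is a static-configuration option on the entry point (the CIDR below is an assumption; adjust it to where the load balancer actually runs):

```yaml
# traefik.yml (static configuration)
entryPoints:
  web:
    address: ":80"
    proxyProtocol:
      # Only trust PROXY protocol headers coming from the load balancer's addresses.
      trustedIPs:
        - "10.42.0.0/16"
```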
@mamiu Is there a way to achieve your solution in an HA (high availability) k3s setup? I'm talking about this: https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/. Would it mean that the masters have Traefik running on them with externalTrafficPolicy: Local? I'm unsure how to achieve this.
@sandys Yes, it's definitely possible. But it only works if the Traefik instance runs on the node you send the traffic to, as explained in my comment above.
@sandys @mamiu This is a major problem for a load balancer, especially with the default Helm configuration. Perhaps a solution would be to have a Traefik service on each node, with each Klipper instance pointing to its local Traefik pod (see the sketch below). However, this is beyond my capabilities at the moment.
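A sketch of that idea, assuming Traefik is run as a DaemonSet bound to host ports so that every node terminates external traffic locally (image tag and names are illustrative, and RBAC/ServiceAccount setup is omitted):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      containers:
        - name: traefik
          image: traefik:v2.10
          args:
            - --entrypoints.web.address=:80
            - --entrypoints.websecure.address=:443
            - --providers.kubernetesingress
          ports:
            - containerPort: 80
              hostPort: 80      # bind directly on the node, no SNAT hop through Klipper
            - containerPort: 443
              hostPort: 443
```

Because traffic enters through a host port on the node itself, Traefik sees the real client address without any LoadBalancer service in between.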
Anyone still dealing with this? I am, but I'm at my wits' end.
I am unable to replicate this behavior on my multi-node k3s cluster. I have set up Traefik's affinity to the correct node, can confirm that it is scheduled on the right node, and have externalTrafficPolicy set to Local. I also have Pi-hole DNS running on a LoadBalancer service. Is there a way to get this working? It's entirely possible I am missing something here.
@dakota-marshall I'd recommend not using Klipper (the default load balancer of k3s), but instead running a Traefik instance on each node that listens for external traffic. I know this will prevent you from using Traefik's Let's Encrypt integration, but if you want that you can just use Switchboard.
@mamiu Thanks for the info! In that case I'll look at switching over to that and using MetalLB for my other services that need a LoadBalancer.
I have the same issue. Here is my ingress-nginx controller configuration (Helm values expressed as a Python dict):

```python
"controller": {
    "kind": "DaemonSet",
    "allowSnippetAnnotations": True,
    "service": {
        "externalTrafficPolicy": "Local",
    },
    "config": {
        "enable-real-ip": True,
        "use-forwarded-headers": True,
        "compute-full-forwarded-for": True,
        "use-proxy-protocol": True,
        "proxy-add-original-uri-header": True,
        "forwarded-for-header": "proxy_protocol",
        "real-ip-header": "proxy_protocol",
    },
},
```
I am also running into this.
Would this help? https://kubernetes.io/blog/2023/12/18/kubernetes-1-29-feature-loadbalancer-ip-mode-alpha/
I have the same issue on my single-node k3s. On my machine, the svclb-traefik pod uses its own IP address before sending packets to Traefik, so X-Forwarded-For is always filled with the IP of the svclb-traefik pod. I found a possible reason in the k3s documentation.
So it seems like it's impossible to manually set the ipMode to 'Proxy'? |
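For reference, the ipMode from the blog post linked above is a status field that the load balancer controller (not the user) is supposed to set, which matches that observation. On a Service it looks roughly like this:

```yaml
status:
  loadBalancer:
    ingress:
      - ip: 192.0.2.10    # example VIP
        ipMode: Proxy     # kube-proxy will not short-circuit traffic to this VIP
```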
Hey, to avoid copy-pasting the same question, here's the StackOverflow link.
Basically I want my pods to get the original client IP address, or at least have an X-Forwarded-For header in a worst-case scenario. I used this guide to set up my cluster. As I said there, I'm happy to share more details to get this sorted out.