minikube: Add IPv6 and dual-stack support for Docker/Podman #21630
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: kartikjoshi21. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing /approve in a comment.

Hi @kartikjoshi21. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Can one of the admins verify this patch?
Please add a before/after comparison for this PR, with examples of running it.

@medyagh I'm yet to add more commits to it; I will add full logs once this is ready. Thanks!
Force-pushed from 9ce26dd to c4dbb60.
All tests run on WSL2 (Debian 12) with Docker, Calico CNI, Kubernetes v1.34.1.
A) IPv4-only cluster: DNS, ClusterIP, Pod-to-Pod, NodePort
B) Dual-stack cluster: dual DNS, dual Pod IPs, dual Service & Endpoints, HTTP (v4 & v6)
C) IPv6-only cluster: v6 Services, v6 DNS, v6 Pod-to-Pod
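For a dual-stack run like (B), spot checks along these lines would confirm the wiring (these are generic kubectl queries offered for illustration, not the PR's actual test logs; names in angle brackets are placeholders):

# Node should carry one pod CIDR per family:
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDRs}'
# Each pod should report two podIPs (one IPv4, one IPv6):
kubectl get pod <pod-name> -o jsonpath='{.status.podIPs}'
# A dual-stack Service should list both families and two cluster IPs:
kubectl get svc <svc-name> -o jsonpath='{.spec.ipFamilies} {.spec.clusterIPs}'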
I was trying to add support for the hyperv driver as well, but the minikube VM's Linux kernel is missing the IPv6 netfilter pieces that Kubernetes needs. Both stacks that kube-proxy/Calico can use for IPv6 Services are unavailable. Proof (commands + output):
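(The original command output is not reproduced above. As a rough sketch only, checks along these lines inside the VM would show the gap, assuming the usual minikube ssh entry point:)

minikube ssh -- 'sudo ip6tables -t nat -L >/dev/null 2>&1 || echo "ip6tables NAT: unavailable"'
minikube ssh -- 'sudo nft list tables ip6 2>/dev/null || echo "nftables ip6: unavailable"'
minikube ssh -- 'lsmod | grep -E "ip6_tables|ip6table_nat|nf_tables" || echo "no IPv6 netfilter modules loaded"'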
Signed-off-by: Kartik Joshi <[email protected]>
Force-pushed from 707c79d to 7ab2ee0.
Thanks for the PR, I left some comments!
This PR is really complete and it must have been a lot of effort. Thanks! It's great to see the "end game" of how everything can really be dual-stack, and to know there is a path forward that allows for that. However, I think we shouldn't aim to catch all the fish in one go.
IMHO (bear in mind I'm not a maintainer here, so this isn't authoritative in any way), this PR is changing a lot of things:
- The docker network to support ipv6
- The control plane components to listen on ipv6
- Setting a bunch of sysctls
- Certificates for the control plane to include the IPv6 addresses
- Rewriting some templates to rely on a function (like in renderBridgeConflist())
Almost every one of those changes is complex on its own and carries risk. Let's reduce the risk with smaller PRs that do fewer things.
Can we make this PR handle IPv6 support for user pods (not the control plane) and everything needed to make that happen? That is not small, as we will need the pod_cidr, maybe the service_cidr (or maybe not?), Calico, and the Docker network to be IPv6/dual-stack aware. But we can leave everything else out.
Let's keep this small and make sure it fails nicely on setups that don't support IPv6 (like the VM driver on Linux, which I think you mentioned doesn't have IPv6?).
Does it make sense? Or am I missing something?
// Friendly reminder about enabling daemon IPv6 (actual failure will occur during network create otherwise)
out.Styled(style.Tip,
    "If Docker daemon IPv6 is disabled, enable it in /etc/docker/daemon.json and restart:\n  {\"ipv6\": true, \"fixed-cidr-v6\": \"fd00:55:66::/64\"}")
If this is not the default, then we should check that it is configured like this and fail if it's not. Is it hard to verify this?
In my limited testing, it seems IPv6 is enabled by default; even if you set { "ipv6": false } in the daemon.json, you can later run docker network create --ipv6 ip6net, and if you inspect it with docker network inspect ip6net it has IPv6 enabled:
[
    {
        "Name": "ip6net",
        "Id": "42320600e9855745223e56eae7d35e6804c1e4d5675ba3ef80218c0c935b293e",
        "Created": "2025-10-30T13:26:30.43523507+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv4": true,
        "EnableIPv6": true,
...
Did you hit this? Maybe on a host not running with ipv6 support?
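One lightweight way to verify (a sketch of the approach, not code from this PR; the throwaway network name is made up) would be to create a probe network with --ipv6 on a recent Docker and fail fast if it comes up without IPv6:

docker network create --ipv6 mk-ipv6-probe >/dev/null
docker network inspect mk-ipv6-probe --format '{{.EnableIPv6}}'   # expect "true"
docker network rm mk-ipv6-probe >/dev/null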
// For IPv6/dual clusters, enable forwarding inside the node container
// (safe sysctl; avoid disable_ipv6 which may be blocked by Docker's safe list)

// Ensure service rules apply to bridged traffic inside the node container.
// Do both families; harmless if already set.
runArgs = append(runArgs,
    "--sysctl", "net.ipv4.ip_forward=1",
    "--sysctl", "net.bridge.bridge-nf-call-iptables=1",
    // Allow kube-proxy/IPVS or iptables to program and accept IPv4 Service VIPs.
    "--sysctl", "net.ipv4.ip_nonlocal_bind=1",
)
// IPv6/dual clusters need IPv6 forwarding and IPv6 bridge netfilter, too.
if p.IPFamily == "ipv6" || p.IPFamily == "dual" {
    runArgs = append(runArgs,
        "--sysctl", "net.ipv6.conf.all.forwarding=1",
        "--sysctl", "net.bridge.bridge-nf-call-ip6tables=1",
        // Allow kube-proxy/IPVS or iptables to program and accept Service VIPs.
        "--sysctl", "net.ipv4.ip_nonlocal_bind=1",
        // Same for IPv6 VIPs.
        "--sysctl", "net.ipv6.ip_nonlocal_bind=1",
    )
Why do we need this? This is setting it inside the container, there is no bridge inside the container that we create. Am I missing something?
}

// ipv6-only (no IPv4)
args = append(args, "--subnet", subnetv6)
This fails on my Ubuntu 22.04. Why do we need all this complex logic? If you don't specify a subnet, Docker will give you a free subnet (see man docker-network-create). In fact, commenting out this line makes it work on my Ubuntu.
It would be great if we could remove most of this logic and rely on what Docker already does.
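For example, on a recent Docker this already works without any subnet math on our side (the network name is just illustrative):

docker network create --ipv6 ip6net
docker network inspect ip6net --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
# prints an IPv4 subnet plus an auto-assigned IPv6 subnet, without any --subnet flags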
metricsBindAddress: 0.0.0.0:10249
metricsBindAddress: {{.KubeProxyMetricsBindAddress}}
Can we move all these changes that make k8s components listen on non-IPv4 addresses to a different commit? I don't think this is needed to get the cluster up and running, right?
If it's not needed, maybe we can do this in another PR.
if podCIDR != "" {
    klog.Infof("Using pod subnet(s): %s", podCIDR)
} else {
    klog.Infof("No pod subnet set via kubeadm (CNI will configure)")
We were logging which CIDR is used; let's continue doing that.
cpEndpoint := fmt.Sprintf("%s:%d", constants.ControlPlaneAlias, nodePort)
if family == "ipv6" && advertiseAddress != "" {
    cpEndpoint = fmt.Sprintf("[%s]:%d", advertiseAddress, nodePort)
}

if family == "ipv6" || family == "dual" {
    ensured := false
    for i := range componentOpts {
        // match "apiServer" regardless of accidental casing
        if strings.EqualFold(componentOpts[i].Component, "apiServer") {
            if componentOpts[i].ExtraArgs == nil {
                componentOpts[i].ExtraArgs = map[string]string{}
            }
            if _, ok := componentOpts[i].ExtraArgs["bind-address"]; !ok {
                componentOpts[i].ExtraArgs["bind-address"] = "::"
            }
            // normalize the component name so the template emits 'apiServer'
            componentOpts[i].Component = "apiServer"
            ensured = true
            break
        }
    }
    if !ensured {
        componentOpts = append(componentOpts, componentOptions{
            Component: "apiServer",
            ExtraArgs: map[string]string{
                "bind-address": "::",
            },
        })
    }
}

apiServerCertSANs := []string{constants.ControlPlaneAlias}
switch strings.ToLower(k8s.IPFamily) {
case "ipv6":
    apiServerCertSANs = append(apiServerCertSANs, "::1")
case "dual":
    apiServerCertSANs = append(apiServerCertSANs, "127.0.0.1", "::1")
default: // ipv4
    apiServerCertSANs = append(apiServerCertSANs, "127.0.0.1")
}
Do we need this? Can't we keep the control plane running on IPv4 and move this to another commit or another PR?
family := strings.ToLower(k8s.IPFamily)
switch family {
case "ipv6":
    if nc.IPv6 != "" {
        extraOpts["node-ip"] = nc.IPv6
    } else {
        // fallback if IPv6 wasn't wired yet
        extraOpts["node-ip"] = nc.IP
    }
case "dual":
    // Don't set node-ip at all; kubelet will advertise both families.
    // (If a user explicitly set node-ip, we honor it above.)
default: // "ipv4" or empty
    extraOpts["node-ip"] = nc.IP
}
This is repeated from kubeadm.go. I'm lost, why do we need it here too?
    return nil
}

// ensureControlPlaneAlias adds control-plane.minikube.internal -> IP mapping in /etc/hosts
If we move dual-stack support for the control plane to another PR, we can remove this :)
    return nil, errors.Wrap(err, "get service cluster ip")
}

// Collect both service VIPs if present
And we could remove all of this, if we don't aim to make the control plane run on IPv6/dual-stack in this PR.
// renderBridgeConflist builds a bridge CNI config that supports IPv4-only, IPv6-only, or dual-stack.
func renderBridgeConflist(k8s config.KubernetesConfig) ([]byte, error) {
This seems like a big rewrite. Can't we just add the IPv4/6 to the original template?
This PR adds first-class IPv6 and dual-stack support to minikube for the Docker/Podman (KIC) drivers and tightens IPv6 handling across bootstrap, the CNIs (bridge and Calico), and networking. It introduces new CLI flags, validation, safer defaults, and a handful of fixes to make IPv6-only and dual-stack clusters work end-to-end.
New CLI flags (start):
- --ip-family = ipv4 (default) | ipv6 | dual
- --service-cluster-ip-range-v6
- --pod-cidr (IPv4), --pod-cidr-v6 (IPv6)
- --subnet-v6 (Docker/Podman network IPv6 CIDR)
- --host-only-cidr-v6 (VirtualBox)
- --static-ipv6 (static node IPv6 for Docker/Podman)
Refer to this comment for testing logs: #21630 (comment)
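A usage sketch with the new flags (the CIDR values below are illustrative examples, not defaults mandated by this PR):

# Dual-stack, letting minikube/Docker pick the ranges:
minikube start --driver=docker --ip-family=dual
# Dual-stack with explicit ranges:
minikube start --driver=docker --ip-family=dual \
  --pod-cidr=10.244.0.0/16 --pod-cidr-v6=fd00:10:244::/56 \
  --service-cluster-ip-range-v6=fd00:10:96::/112 \
  --subnet-v6=fd00:55:66::/64
# IPv6-only:
minikube start --driver=docker --ip-family=ipv6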