
Conversation

@kartikjoshi21 (Contributor) commented Sep 24, 2025

This PR adds first-class IPv6 and dual-stack support to minikube for the Docker/Podman (KIC) drivers and tightens IPv6 handling across bootstrap, CNIs (bridge and Calico), and networking. It introduces new CLI flags, validation, safer defaults, and a handful of fixes to make IPv6-only and dual-stack clusters work end-to-end.

New CLI flags (start); an example invocation follows the list:

--ip-family = ipv4 (default) | ipv6 | dual
--service-cluster-ip-range-v6
--pod-cidr (IPv4), --pod-cidr-v6 (IPv6)
--subnet-v6 (Docker/Podman network IPv6 CIDR)
--host-only-cidr-v6 (VirtualBox)
--static-ipv6 (static node IPv6 for Docker/Podman)
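
A hypothetical invocation combining several of these flags for a dual-stack cluster might look like this (the IPv6 CIDR values are illustrative, not defaults asserted by this PR):

./out/minikube start --driver=docker --cni=calico \
  --ip-family=dual \
  --service-cluster-ip-range=10.96.0.0/12 \
  --service-cluster-ip-range-v6=fd00::/108 \
  --pod-cidr=10.244.0.0/16 \
  --pod-cidr-v6=fd01::/64 \
  --subnet-v6=fd00:55:66::/64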

Refer to this comment for testing logs: #21630 (comment)

@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Sep 24, 2025
@k8s-ci-robot (Contributor):
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: kartikjoshi21
Once this PR has been reviewed and has the lgtm label, please assign spowelljr for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Sep 24, 2025
@k8s-ci-robot k8s-ci-robot requested a review from nirs September 24, 2025 11:47
@k8s-ci-robot (Contributor):
Hi @kartikjoshi21. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Sep 24, 2025
@minikube-bot (Collaborator):
Can one of the admins verify this patch?

@k8s-ci-robot k8s-ci-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Sep 24, 2025
@medyagh (Member) left a comment:
please add before/after this PR with examples of running it

@kartikjoshi21 (Contributor, Author):
> please add before/after this PR with examples of running it

@medyagh I'm yet to add more commits to it; I will add the full logs once this is ready. Thanks!

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 9, 2025
@kartikjoshi21 kartikjoshi21 force-pushed the kartikjoshi/ipv6-support branch from 9ce26dd to c4dbb60 on October 25, 2025 21:07
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 25, 2025
@kartikjoshi21 kartikjoshi21 changed the title minikube: Add ipv6 and dual stack support for docker on bridge cni minikube: Add ipv6 and dual stack support for docker/Podman Oct 27, 2025
@kartikjoshi21 (Contributor, Author) commented Oct 27, 2025

All tests were run on WSL2 (Debian 12) with the Docker driver, Calico CNI, and Kubernetes v1.34.1.
Container runtimes used: Docker 28.5.0 (IPv4 test), containerd 1.7.28 (IPv6 and dual-stack tests).

A) IPv4-only cluster — DNS, ClusterIP, Pod-to-Pod, NodePort

Command:
./out/minikube start --driver=docker \
  --cni=calico \
  --ip-family=ipv4 \
  --service-cluster-ip-range=10.96.0.0/12 \
  --pod-cidr=10.244.0.0/16 \
  --alsologtostderr --v=3
kubectl get nodes -o wide


Output

NAME       STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                     CONTAINER-RUNTIME
minikube   Ready    control-plane   7m17s   v1.34.1   192.168.49.2   <none>        Debian GNU/Linux 12 (bookworm)   6.6.87.2-microsoft-standard-WSL2   docker://28.5.0
ns=ipv4test
kubectl create ns "$ns"
kubectl -n "$ns" create deploy echo \
  --image=registry.k8s.io/e2e-test-images/agnhost:2.45 \
  -- /agnhost netexec --http-port=8080
kubectl -n "$ns" expose deploy echo --port=80 --target-port=8080
kubectl -n "$ns" wait deploy/echo --for=condition=Available --timeout=90s


Output

namespace/ipv4test created
deployment.apps/echo created
service/echo exposed
deployment.apps/echo condition met
kubectl -n "$ns" run testbox --image=busybox:1.36 --restart=Never -- sleep 1d
kubectl -n "$ns" wait pod/testbox --for=condition=Ready --timeout=60s
kubectl -n "$ns" exec testbox -- sh -c 'nslookup kubernetes.default.svc.cluster.local >/dev/null && echo OK || echo FAIL'
kubectl -n "$ns" exec testbox -- sh -c 'nslookup echo.'"$ns"'.svc.cluster.local >/dev/null && echo OK || echo FAIL'


Output

pod/testbox created
pod/testbox condition met
OK
OK
kubectl -n "$ns" exec testbox -- wget -qO- "http://echo.$ns.svc.cluster.local/hostname"


Output
echo-7cc99fd968-sg9wt
cip=$(kubectl -n "$ns" get svc echo -o jsonpath='{.spec.clusterIP}')
kubectl -n "$ns" exec testbox -- wget -qO- "http://$cip/hostname"


Output

echo-7cc99fd968-sg9wt
peer_ip=$(kubectl -n "$ns" get pod -l app=echo -o jsonpath='{.items[0].status.podIP}')
kubectl -n "$ns" exec testbox -- wget -qO- "http://$peer_ip:8080/hostname"


Output

echo-7cc99fd968-sg9wt
kubectl -n "$ns" exec testbox -- sh -c 'wget -qO- http://example.com | head -n1'


Output

<!doctype html><html lang="en"><head><title>Example Domain</title>...
kubectl -n "$ns" patch svc echo -p '{"spec":{"type":"NodePort"}}'
nodeport=$(kubectl -n "$ns" get svc echo -o jsonpath='{.spec.ports[0].nodePort}')
minikube_ip=$(minikube ip)
curl -s "http://${minikube_ip}:${nodeport}/hostname"


Output

service/echo patched
echo-7cc99fd968-sg9wt

B) Dual-stack cluster — Dual DNS, Dual Pod IPs, Dual Service & Endpoints, HTTP (v4 & v6)

Command

./out/minikube start \
  --container-runtime=containerd \
  --driver=docker \
  --ip-family=dual \
  --cni=calico \
  --kubernetes-version=v1.34.1 \
  --v=3 --alsologtostderr
kubectl get nodes -o wide
kubectl get node minikube -o jsonpath='{.spec.podCIDRs}{"\n"}'
kubectl -n kube-system get cm kube-proxy -o go-template='{{index .data "config.conf"}}' | grep clusterCIDR


Output

NAME       STATUS   ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION                     CONTAINER-RUNTIME
minikube   Ready    control-plane   52m   v1.34.1   192.168.49.2   <none>        Debian GNU/Linux 12 (bookworm)   6.6.87.2-microsoft-standard-WSL2   containerd://1.7.28
["10.244.0.0/24","fd01::/64"]
clusterCIDR: 10.244.0.0/16,fd01::/64
kubectl -n kube-system get svc kube-dns \
  -o custom-columns=NAME:.metadata.name,IPs:.spec.clusterIPs,FAMS:.spec.ipFamilies --no-headers


Output

kube-dns   [10.96.0.10 fd00::6:137b]   [IPv4 IPv6]
kubectl run ds-a --image=nicolaka/netshoot --restart=Never -- sleep 1d
kubectl run ds-b --image=nicolaka/netshoot --restart=Never -- sleep 1d
kubectl get pod ds-a -o jsonpath='{.status.podIPs}{"\n"}'
kubectl get pod ds-b -o jsonpath='{.status.podIPs}{"\n"}'


Output

pod/ds-a created
pod/ds-b created
[{"ip":"10.244.120.66"},{"ip":"fd01::de3c:697a:87ea:7842"}]
[{"ip":"10.244.120.67"},{"ip":"fd01::de3c:697a:87ea:7843"}]
IPS_B=$(kubectl get pod ds-b -o jsonpath='{range .status.podIPs[*]}{.ip}{"\n"}{end}')
V4_B=$(echo "$IPS_B" | grep -v :)
V6_B=$(echo "$IPS_B" | grep :)
kubectl exec ds-a -- ping -c1 "$V4_B"
kubectl exec ds-a -- ping -6 -c1 "$V6_B"


Output

... IPv4 ping success ...
... IPv6 ping success ...
kubectl create deploy ds-echo --image=nginx:1.25-alpine --port=80
kubectl rollout status deploy/ds-echo
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata: { name: ds-svc }
spec:
  selector: { app: ds-echo }
  ports: [{ name: http, port: 8080, targetPort: 80 }]
  ipFamilyPolicy: PreferDualStack
EOF
kubectl get svc ds-svc -o custom-columns=NAME:.metadata.name,IPs:.spec.clusterIPs,FAMS:.spec.ipFamilies --no-headers


Output

deployment.apps/ds-echo created
deployment "ds-echo" successfully rolled out
service/ds-svc created
ds-svc   [10.100.218.19 fd00::b:ceac]   [IPv4 IPv6]
kubectl -n default get endpointslice -l kubernetes.io/service-name=ds-svc -o json \
| jq -r '.items[] | .addressType as $t | .endpoints[]? | .addresses[]? | "\($t)\t\(.)"'


Output

IPv4    10.244.120.68
IPv6    fd01::de3c:697a:87ea:7844
PAIR=$(kubectl -n default get svc ds-svc -o json \
  | jq -r '[.spec.ipFamilies,.spec.clusterIPs] | transpose[] | @tsv')
V4_SVC=$(echo "$PAIR" | awk '$1=="IPv4"{print $2}')
V6_SVC=$(echo "$PAIR" | awk '$1=="IPv6"{print $2}')
kubectl exec ds-a -- sh -lc "curl -sS --connect-timeout 3 http://$V4_SVC:8080/ | head -n1"
kubectl exec ds-a -- sh -lc "curl -sS --connect-timeout 3 http://[$V6_SVC]:8080/ | head -n1"


Output

<!DOCTYPE html>
<!DOCTYPE html>

C) IPv6-only cluster — v6 Services, v6 DNS, v6 Pod-to-Pod

Command

./out/minikube start -p ipv6-only \
  --driver=docker \
  --container-runtime=containerd \
  --cni=calico \
  --kubernetes-version=v1.34.1 \
  --ip-family=ipv6
kubectl get pods -A | grep -E 'calico|coredns|kube-|etcd|storage' && \
kubectl get nodes -o wide


Output

kube-system   calico-kube-controllers-...   1/1   Running
kube-system   calico-node-...               1/1   Running
kube-system   coredns-...                   1/1   Running
kube-system   etcd-ipv6-only                1/1   Running
kube-system   kube-apiserver-ipv6-only      1/1   Running
kube-system   kube-controller-manager-...   1/1   Running
kube-system   kube-proxy-...                1/1   Running
kube-system   kube-scheduler-ipv6-only      1/1   Running
kube-system   storage-provisioner           1/1   Running
NAME        STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE   KERNEL-VERSION                     CONTAINER-RUNTIME
ipv6-only   Ready    control-plane   2m    v1.34.1   fd00::2       <none>        Debian...  6.6.87.2-microsoft-standard-WSL2   containerd://1.7.28
echo "PodCIDRs:"; kubectl get node -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.podCIDRs}{"\n"}{end}'
kubectl -n kube-system get cm kube-proxy -o go-template='{{index .data "config.conf"}}' | grep -E '^clusterCIDR'
kubectl get ippools.crd.projectcalico.org -o wide || true


Output

PodCIDRs:
ipv6-only ["fd01::/64"]
clusterCIDR: fd01::/64
NAME                  AGE
default-ipv4-ippool   61s
default-ipv6-ippool   61s
kubectl -n kube-system get svc kube-dns \
  -o custom-columns=NAME:.metadata.name,IPs:.spec.clusterIPs,FAMS:.spec.ipFamilies --no-headers


Output

kube-dns   [fd00::a]   [IPv6]
kubectl create deploy v6-echo --image=nginx:1.25-alpine --port=80
kubectl rollout status deploy/v6-echo
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata: { name: v6-svc }
spec:
  selector: { app: v6-echo }
  ports: [{ name: http, port: 8080, targetPort: 80 }]
  ipFamilyPolicy: SingleStack
  ipFamilies: [ IPv6 ]
EOF
kubectl get svc v6-svc -o custom-columns=NAME:.metadata.name,IPs:.spec.clusterIPs,FAMS:.spec.ipFamilies --no-headers


Output

deployment.apps/v6-echo created
deployment "v6-echo" successfully rolled out
service/v6-svc created
v6-svc   [fd00::a:2739]   [IPv6]
kubectl run v6-box --image=busybox:1.36 --restart=Never -- sleep 1d
kubectl wait --for=condition=Ready pod/v6-box --timeout=120s
SVC_V6=$(kubectl get svc v6-svc -o json | jq -r '.spec.clusterIPs[]' | head -n1)
kubectl exec v6-box -- wget -qO- "http://[$SVC_V6]:8080" | head -n3


Output

pod/v6-box created
pod/v6-box condition met
<!DOCTYPE html>
<html>
<head>
kubectl run v6a --image=busybox:1.36 --restart=Never -- sleep 1d
kubectl run v6b --image=busybox:1.36 --restart=Never -- sleep 1d
kubectl wait --for=condition=Ready pod/v6a pod/v6b --timeout=120s
V6A=$(kubectl get pod v6a -o jsonpath='{.status.podIP}')
V6B=$(kubectl get pod v6b -o jsonpath='{.status.podIP}')
kubectl exec v6a -- ping -6 -c3 -W2 "$V6B"
kubectl exec v6b -- ping -6 -c3 -W2 "$V6A"


Output

pod/v6a created
pod/v6b created
pod/v6a condition met
pod/v6b condition met
... 0% packet loss ...
... 0% packet loss ...

@kartikjoshi21 kartikjoshi21 marked this pull request as ready for review October 27, 2025 11:41
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Oct 27, 2025
@kartikjoshi21 kartikjoshi21 requested a review from medyagh October 27, 2025 11:41
@kartikjoshi21 (Contributor, Author):
@medyagh

I was trying to add support for the Hyper-V driver as well, but the minikube VM's Linux kernel is missing the IPv6 netfilter pieces that Kubernetes needs. Both stacks that kube-proxy/Calico can use for IPv6 Services are unavailable:
  • ip6tables (legacy) → lacks the raw and nat tables
  • nftables → the nf_tables kernel modules don't exist
Because kube-proxy and Calico program IPv6 Service VIPs/NodePorts via these tables, IPv6 Services cannot function on this VM.

Proof (commands + output)

  1. IPv6 is enabled (so the problem isn’t sysctl)
$ minikube ssh -- 'sysctl -n net.ipv6.conf.all.disable_ipv6; sysctl -n net.ipv6.conf.default.disable_ipv6'
0
0
  2. ip6tables legacy: filter/mangle exist, but raw and nat are missing
Exit code 0 = OK; 3 = “can’t initialize table” (table not present / kernel support missing) 
$ minikube ssh -- 'for t in filter mangle raw nat; do printf "%s: " $t; sudo -n ip6tables -t $t -S >/dev/null 2>&1; echo $?; done'
filter: 0
mangle: 0
raw: 3
nat: 3
Tried to load the usual IPv6 netfilter modules (including ip6table_raw/ip6table_nat) and re-tested; the tables are still missing:
$ minikube ssh -- 'sudo modprobe ip6_tables ip6table_filter ip6table_mangle ip6table_raw ip6table_nat nf_conntrack nf_nat || true'
$ minikube ssh -- 'for t in filter mangle raw nat; do printf "%s: " $t; sudo -n ip6tables -t $t -S >/dev/null 2>&1; echo $?; done'
filter: 0
mangle: 0
raw: 3
nat: 3
  3. nftables stack: kernel modules don't exist
Attempting to load nftables modules fails; kernel directory shows version and missing modules:
$ minikube ssh -- 'for m in nf_tables nfnetlink nft_chain_nat nft_masq nft_ct nft_reject_ipv6 nf_conntrack nf_nat; do sudo modprobe $m || true; done'
modprobe: FATAL: Module nf_tables not found in directory /lib/modules/6.6.95
modprobe: FATAL: Module nft_chain_nat not found in directory /lib/modules/6.6.95
modprobe: FATAL: Module nft_masq not found in directory /lib/modules/6.6.95
modprobe: FATAL: Module nft_ct not found in directory /lib/modules/6.6.95
modprobe: FATAL: Module nft_reject_ipv6 not found in directory /lib/modules/6.6.95
Testing the nftables backend also fails:
$ minikube ssh -- 'for t in filter raw nat; do printf "%s: " $t; sudo -n ip6tables-nft -t $t -S >/dev/null 2>&1; echo $?; done'
filter: 1
raw: 1
nat: 1
  4. Calico Felix corroborates the missing tables (both backends)
When forced to NFT earlier, Felix failed to save v6 tables:
ip6tables-nft-save command failed ... table="raw" 
After switching to Legacy, Felix still fails because v6 raw/nat don’t exist, and eventually panics:
ip6tables-legacy-save command failed ... table="raw"
PANIC ... command failed after retries ... table="raw"
Kubelet probes show Felix never becomes ready:
Readiness probe failed: felix is not ready
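
A minimal preflight probe along these lines (a sketch based on the commands above, not something this PR adds) could detect the missing table early and fail with a clear message instead of letting Felix crash-loop:

# exit code 0 means the ip6table_nat table is usable; non-zero means IPv6 Service VIPs cannot be programmed on this VM
minikube ssh -- 'sudo ip6tables -t nat -S >/dev/null 2>&1' \
  && echo "IPv6 nat table available" \
  || echo "IPv6 nat table missing; IPv6 Services will not work on this VM"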

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 28, 2025
@kartikjoshi21 kartikjoshi21 force-pushed the kartikjoshi/ipv6-support branch from 707c79d to 7ab2ee0 on October 30, 2025 10:08
@k8s-ci-robot k8s-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 30, 2025
@rata (Member) left a comment:
Thanks for the PR, I left some comments!

This PR is really complete and it must have been a lot of effort. Thanks! It's great to see the "end game" of how everything can really be dual-stack, and to know that there is a path forward that allows for that. However, I think we shouldn't aim to catch all the fish in one go.

IMHO (bear in mind I'm not a maintainer here, so this isn't authoritative in any way), this PR is changing a lot of things:

  • The docker network to support ipv6
  • The control plane components to listen on ipv6
  • setting a bunch of sysctls
  • Certificates for the control plane to include the IPv6 addresses
  • Rewriting some templates to rely on a function (like in renderBridgeConflist())

Almost every one of those changes is complex on its own and carries risk. Let's reduce the risk with smaller PRs that do fewer things.

Can we make this PR handle IPv6 support for user pods (not the control plane) and everything needed to make that happen? That is not small, as we will need the pod_cidr, maybe the service_cidr (or maybe not?), Calico, and the Docker network to be IPv6/dual-stack aware. But we can leave everything else out.

Let's make this small and make sure it fails nicely on setups that don't support IPv6 (like the VM driver on Linux, which I think you mentioned doesn't have IPv6 support?).

Does it make sense? Or am I missing something?


// Friendly reminder about enabling daemon IPv6 (actual failure will occur during network create otherwise)
out.Styled(style.Tip,
"If Docker daemon IPv6 is disabled, enable it in /etc/docker/daemon.json and restart:\n {\"ipv6\": true, \"fixed-cidr-v6\": \"fd00:55:66::/64\"}")
Member:
If this is not the default, then we should check it is configured like this and fail if it's not. Is it hard to verify this?

In my limited testing, it seems it is enabled by default, and even if you set { "ipv6": false } in daemon.json, you can later run docker network create --ipv6 ip6net; if you then inspect it with docker network inspect, it has IPv6 enabled:

[
    {
        "Name": "ip6net",
        "Id": "42320600e9855745223e56eae7d35e6804c1e4d5675ba3ef80218c0c935b293e",
        "Created": "2025-10-30T13:26:30.43523507+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv4": true,
        "EnableIPv6": true,
...

Did you hit this? Maybe on a host not running with ipv6 support?
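
For what it's worth, one way such a check could be done up front (a rough sketch, not something this PR currently implements; the network name and subnet are placeholders) is to probe whether the daemon can actually create an IPv6-enabled network and fail early otherwise:

# hypothetical preflight probe for Docker daemon IPv6 support
if docker network create --ipv6 --subnet fd00:dead:beef::/64 minikube-ipv6-probe >/dev/null 2>&1; then
  docker network rm minikube-ipv6-probe >/dev/null
  echo "Docker daemon can create IPv6 networks"
else
  echo "Docker daemon cannot create IPv6 networks; check /etc/docker/daemon.json" >&2
fi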

Comment on lines +200 to +220
    // For IPv6/dual clusters, enable forwarding inside the node container
    // (safe sysctl; avoid disable_ipv6 which may be blocked by Docker's safe list)

    // Ensure service rules apply to bridged traffic inside the node container.
    // Do both families; harmless if already set.
    runArgs = append(runArgs,
        "--sysctl", "net.ipv4.ip_forward=1",
        "--sysctl", "net.bridge.bridge-nf-call-iptables=1",
        // Allow kube-proxy/IPVS or iptables to program and accept IPv4 Service VIPs.
        "--sysctl", "net.ipv4.ip_nonlocal_bind=1",
    )
    // IPv6/dual clusters need IPv6 forwarding and IPv6 bridge netfilter, too.
    if p.IPFamily == "ipv6" || p.IPFamily == "dual" {
        runArgs = append(runArgs,
            "--sysctl", "net.ipv6.conf.all.forwarding=1",
            "--sysctl", "net.bridge.bridge-nf-call-ip6tables=1",
            // Allow kube-proxy/IPVS or iptables to program and accept Service VIPs.
            "--sysctl", "net.ipv4.ip_nonlocal_bind=1",
            // Same for IPv6 VIPs.
            "--sysctl", "net.ipv6.ip_nonlocal_bind=1",
        )
Member:
Why do we need this? This is setting it inside the container, and there is no bridge inside the container that we create. Am I missing something?

}

// ipv6-only (no IPv4)
args = append(args, "--subnet", subnetv6)
Member:
This fails on my Ubuntu 22.04. Why do we need all this complex logic? If you don't specify a subnet, Docker will give you a free subnet (see man docker-network-create). In fact, commenting out this line makes it work on my Ubuntu.

It would be great if we could remove most of this logic and rely on what Docker already does.
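
For illustration, relying on Docker's own allocation (an assumed simplification, not code from this PR; requires a reasonably recent Docker release) would look roughly like:

docker network create --ipv6 ip6net-auto        # no --subnet: Docker picks a free IPv6 subnet
docker network inspect ip6net-auto --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
docker network rm ip6net-auto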

Comment on lines -96 to +98
- metricsBindAddress: 0.0.0.0:10249
+ metricsBindAddress: {{.KubeProxyMetricsBindAddress}}
Member:
Can we move all these changes that make the k8s components listen on a non-IPv4 address to a different commit? I don't think this is needed to get the cluster up and running, right?

If it's not needed, I'm unsure whether we could do this in another PR.

if podCIDR != "" {
klog.Infof("Using pod subnet(s): %s", podCIDR)
} else {
klog.Infof("No pod subnet set via kubeadm (CNI will configure)")
Member:
We were logging which CIDR is used; let's continue doing that.

Comment on lines +150 to +190
    cpEndpoint := fmt.Sprintf("%s:%d", constants.ControlPlaneAlias, nodePort)
    if family == "ipv6" && advertiseAddress != "" {
        cpEndpoint = fmt.Sprintf("[%s]:%d", advertiseAddress, nodePort)
    }

    if family == "ipv6" || family == "dual" {
        ensured := false
        for i := range componentOpts {
            // match "apiServer" regardless of accidental casing
            if strings.EqualFold(componentOpts[i].Component, "apiServer") {
                if componentOpts[i].ExtraArgs == nil {
                    componentOpts[i].ExtraArgs = map[string]string{}
                }
                if _, ok := componentOpts[i].ExtraArgs["bind-address"]; !ok {
                    componentOpts[i].ExtraArgs["bind-address"] = "::"
                }
                // normalize the component name so the template emits 'apiServer'
                componentOpts[i].Component = "apiServer"
                ensured = true
                break
            }
        }
        if !ensured {
            componentOpts = append(componentOpts, componentOptions{
                Component: "apiServer",
                ExtraArgs: map[string]string{
                    "bind-address": "::",
                },
            })
        }
    }

    apiServerCertSANs := []string{constants.ControlPlaneAlias}
    switch strings.ToLower(k8s.IPFamily) {
    case "ipv6":
        apiServerCertSANs = append(apiServerCertSANs, "::1")
    case "dual":
        apiServerCertSANs = append(apiServerCertSANs, "127.0.0.1", "::1")
    default: // ipv4
        apiServerCertSANs = append(apiServerCertSANs, "127.0.0.1")
    }
Member:
Do we need this? Can't we keep the control plane running on IPv4 and move this to another commit or another PR?

Comment on lines +88 to +102
    family := strings.ToLower(k8s.IPFamily)
    switch family {
    case "ipv6":
        if nc.IPv6 != "" {
            extraOpts["node-ip"] = nc.IPv6
        } else {
            // fallback if IPv6 wasn't wired yet
            extraOpts["node-ip"] = nc.IP
        }
    case "dual":
        // Don't set node-ip at all; kubelet will advertise both families.
        // (If a user explicitly set node-ip, we honor it above.)
    default: // "ipv4" or empty
        extraOpts["node-ip"] = nc.IP
    }
Member:
This is repeated from kubeadm.go. I'm lost: why do we need it here too?

return nil
}

// ensureControlPlaneAlias adds control-plane.minikube.internal -> IP mapping in /etc/hosts
Member:
If we move dual-stack support for the control plane to another PR, we can remove this :)

return nil, errors.Wrap(err, "get service cluster ip")
}

// Collect both service VIPs if present
Member:
And we could remove all of this if we don't aim to make the control plane run on IPv6/dual-stack in this PR.

Comment on lines +35 to +36
// renderBridgeConflist builds a bridge CNI config that supports IPv4-only, IPv6-only, or dual-stack.
func renderBridgeConflist(k8s config.KubernetesConfig) ([]byte, error) {
Member:
This seems like a big rewrite. Can't we just add the IPv4/IPv6 handling to the original template?
