
Commit 5db8f1e

1 parent 3dae386 commit 5db8f1e

1 file changed (+31, -0 lines)

Diff for: site/content/en/docs/faq/_index.md

@@ -203,3 +203,34 @@ sudo mkdir -p "$CNI_PLUGIN_INSTALL_DIR"
sudo tar -xf "$CNI_PLUGIN_TAR" -C "$CNI_PLUGIN_INSTALL_DIR"
rm "$CNI_PLUGIN_TAR"
```

## How to increase the open files limit for minikube

When using a container-based driver (docker, podman) and creating multiple nodes, for example when creating a [Highly Available Control Plane](https://minikube.sigs.k8s.io/docs/tutorials/multi_control_plane_ha_clusters/), you may see pods in Error status in the output of `kubectl get po -A`.
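
For example, you can list the pods with the kubectl bundled in minikube (a quick check; pod names and statuses will vary):

```shell
# list pods in all namespaces; affected pods show Error in the STATUS column
minikube kubectl -- get po -A
```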

Inspecting the logs of one of these Pods shows a "too many open files" Linux error:

```shell
minikube kubectl -- logs -n kube-system kube-proxy-84gm6
E1210 11:50:42.117036 1 run.go:72] "command failed" err="failed complete: too many open files"
```
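
Before raising the limits, you can check the current values on the host (a quick check; assumes a Linux host with `sysctl` available):

```shell
# print the current inotify limits
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances
```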

This can be fixed by increasing the number of inotify watchers on the host where you run minikube:

```shell
# cat > /etc/sysctl.d/minikube.conf <<EOF
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
EOF
```

Then reload the settings with `sysctl --system`:

```shell
# sysctl --system
...
* Applying /etc/sysctl.d/minikube.conf ...
* Applying /etc/sysctl.conf ...
...
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
```

After increasing the number of watchers, restart the minikube cluster and the error should disappear.
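
For example (a minimal sketch; assumes the default `minikube` profile):

```shell
# restart the cluster so all nodes pick up the new inotify limits
minikube stop
minikube start
```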
