README.md: 1 addition, 0 deletions
```diff
@@ -503,6 +503,7 @@ kubectl get secrets mjs-metrics-client-certs --template="{{.data.prometheus.key
 
 MATLAB Job Scheduler in Kubernetes uses a Kubernetes load balancer service to expose MATLAB Job Scheduler to MATLAB clients running outside of the Kubernetes cluster.
 By default, the Helm chart creates the load balancer for you.
+You can customize the annotations on the Kubernetes load balancer service by setting the `loadBalancerAnnotations` parameter in your `values.yaml` file.
 You can also create and customize your own load balancer service before you install the Helm chart.
 
 Create a Kubernetes load balancer service `mjs-ingress-proxy` to expose MATLAB Job Scheduler to MATLAB clients running outside of the Kubernetes cluster.
```
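If you create the `mjs-ingress-proxy` service yourself, a minimal sketch might look like the following. This is an illustration only: the selector label, port, and annotation are placeholders rather than values taken from this chart, so consult the README's custom load balancer section for the labels and ports your MATLAB Job Scheduler release actually requires.

```yaml
# Hypothetical sketch of a user-created load balancer service.
# The selector label, port, and annotation below are placeholders,
# not values defined by this chart; substitute the documented ones.
apiVersion: v1
kind: Service
metadata:
  name: mjs-ingress-proxy
  annotations:
    example.com/some-annotation: "value"  # placeholder annotation
spec:
  type: LoadBalancer
  selector:
    app: mjs-ingress-proxy  # placeholder label
  ports:
    - name: tcp-base        # placeholder port name
      port: 27350           # placeholder port number
      targetPort: 27350
      protocol: TCP
```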
chart/mjs/values.yaml: 15 additions, 1 deletion
```diff
@@ -1,5 +1,5 @@
 # Default values for MATLAB Job Scheduler (MJS) in Kubernetes.
-# Copyright 2024 The MathWorks, Inc.
+# Copyright 2024-2025 The MathWorks, Inc.
 
 # Release number of the MATLAB version to use
 matlabRelease: "r2024a"
```
```diff
@@ -60,10 +60,20 @@ poolProxyCPULimit: "" # CPU limit for each parallel pool proxy process
 poolProxyCPURequest: "0.5" # CPU request for each parallel pool proxy process
 poolProxyMemoryLimit: "" # Memory limit for each parallel pool proxy process
 poolProxyMemoryRequest: "500Mi" # Memory request for each parallel pool proxy process
+controllerCPULimit: "" # CPU limit for the MJS controller
+controllerCPURequest: "100m" # CPU request for the MJS controller
+controllerMemoryLimit: "" # Memory limit for the MJS controller
+controllerMemoryRequest: "128Mi" # Memory request for the MJS controller
+haproxyCPULimit: "" # CPU limit for the HAProxy pod
+haproxyCPURequest: "100m" # CPU request for the HAProxy pod
+haproxyMemoryLimit: "" # Memory limit for the HAProxy pod
+haproxyMemoryRequest: "256Mi" # Memory request for the HAProxy pod
 
 # Node settings
 jobManagerNodeSelector: {} # Node selector for the job manager, specified as key-value pairs
 workerNodeSelector: {} # Node selector for the workers, specified as key-value pairs
+jobManagerTolerations: [] # Tolerations for the job manager pod
+workerTolerations: [] # Tolerations for the worker pods
 
 # Auto-scaling settings
 idleStop: 300 # Time after which idle worker pods will be removed
```
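As a sketch of how these new parameters might be combined, a custom values.yaml could cap the controller and HAProxy pods and steer MJS pods onto tainted nodes. The taint key and value below are hypothetical examples, not taints this chart defines:

```yaml
# Hypothetical overrides for a custom values.yaml.
controllerCPULimit: "500m"    # cap the MJS controller at half a CPU
haproxyMemoryLimit: "512Mi"   # cap the HAProxy pod's memory
jobManagerTolerations:
  - key: "dedicated"          # example taint key, not defined by this chart
    operator: "Equal"
    value: "mjs"              # example taint value
    effect: "NoSchedule"
workerTolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "mjs"
    effect: "NoSchedule"
```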
```diff
@@ -72,6 +82,7 @@ stopWorkerGracePeriod: 60 # Grace period in seconds for running stopworker
 
 # Network settings
 autoCreateLoadBalancer: true # Flag to automatically create a Kubernetes load balancer to expose MATLAB Job Scheduler to MATLAB clients outside the cluster
+loadBalancerAnnotations: {} # Annotations to use for the load balancer
 
 # Parallel pool proxy settings
 poolProxyBasePort: 30000 # Base port for parallel pool proxies
```
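For instance, a custom values.yaml could pass cloud-specific annotations through to the load balancer service. The keys below are well-known AWS annotations shown purely as an example; other providers use different keys:

```yaml
# Example for a custom values.yaml; the annotation keys are
# AWS-specific and shown only for illustration.
loadBalancerAnnotations:
  service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
  service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```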
```diff
@@ -94,6 +105,9 @@ matlabPVC: "" # Name of a PVC that contains a MATLAB Parallel Server installatio
 jobManagerUsesPVC: false # If true, the job manager container mounts the MATLAB Parallel Server installation from the PVC rather than using the jobManagerImage parameter
 additionalMatlabPVCs: [] # Names of PersistentVolumeClaims containing installations of older MATLAB Parallel Server releases, specified as an array
 
+# Worker pod settings
+additionalWorkerPVCs: {} # Additional PersistentVolumeClaims to mount on worker pods, specified as a map of claim names to mount paths
+
 # Specify the maximum number of workers that the cluster can automatically resize to in your custom values.yaml file.
```
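Since `additionalWorkerPVCs` is described as a map of claim names to mount paths, a custom values.yaml entry might look like this; the claim name and mount path are hypothetical:

```yaml
# Hypothetical example: mount an existing PVC named "shared-datasets"
# into every worker pod at /mnt/datasets.
additionalWorkerPVCs:
  shared-datasets: /mnt/datasets
```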