Releases: hpc-gridware/clusterscheduler
GCS v9.0.2
Enhanced NVIDIA GPU Support with qgpu
- With the release of patch 9.0.2, the qgpu command has been added to simplify workload management for GPU resources. The qgpu command allows administrators to manage GPU resources more efficiently. It is available for Linux amd64 and Linux arm64. qgpu is a multi-purpose command which can act as a load sensor reporting the characteristics and metrics of NVIDIA GPU devices. For that it depends on NVIDIA DCGM being installed on the GPU nodes. It also works as a prolog and epilog for jobs to set up the NVIDIA runtime and environment variables. Furthermore, it sets up per-job GPU accounting so that GPU usage and power consumption are automatically reported in the accounting and visible in the standard qacct -j output. It supports all NVIDIA GPUs that are supported by NVIDIA's DCGM, including NVIDIA's latest Grace Hopper superchips. For more information about qgpu, please refer to the Admin Guide.
(Available in Gridware Cluster Scheduler only)
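As a sketch of how qgpu might be wired into a cluster as a load sensor, prolog and epilog, the execution host configuration could reference the binary via the standard load_sensor, prolog and epilog parameters. The installation path and host name below are assumptions, not taken from the release notes; consult the Admin Guide for the actual setup.

```shell
# Hypothetical sketch: register qgpu on a GPU node via the host
# configuration. Path and host name are made-up examples.
qconf -mconf gpu-node01
# In the editor, point the relevant parameters at the qgpu binary:
#   load_sensor  /opt/gridware/bin/lx-amd64/qgpu
#   prolog       /opt/gridware/bin/lx-amd64/qgpu
#   epilog       /opt/gridware/bin/lx-amd64/qgpu
```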
Automatic Session Management
-
Patch 9.0.2 introduces the new concept of automatic sessions. Sessions allow the Gridware Cluster Scheduler system to synchronize its internal data stores, so that client commands can be enforced to get the most recent data. Session management is enabled by default, but can be disabled by setting the DISABLE_AUTOMATIC_SESSIONS parameter to true in the qmaster_params of the cluster configuration.

The default for the qmaster_param DISABLE_SECONDARY_DS_READER is now also false. This means that the reader thread pool is enabled by default and does not need to be enabled manually as in patch 9.0.1.

The reader thread pool in combination with sessions ensures that commands that trigger changes within the cluster (write-requests), such as submitting a job, modifying a queue or changing a complex value, are executed and that the outcome of those commands is guaranteed to be visible to the user who initiated the change. Commands that only read data (read-requests), such as qstat, qhost or qconf -s..., that are triggered by the same user always return the most recent data, although all read-requests in the system are executed completely in parallel to the other Gridware Cluster Scheduler core components. This additional synchronization ensures that the data is consistent for the user with each read-request, but on the other hand it might slow down individual read-requests.

Assume the following script:

```
#!/bin/sh
job_id=`qsub -terse ...`
qstat -j $job_id
```

Without sessions activated, it is not guaranteed that the qstat -j command will see the job that was submitted before. With sessions enabled, the qstat -j command will always see the job, but it will be slightly slower compared to the same scenario without sessions.

Sessions eliminate the need to poll for information about an action until it is visible in the system. Unlike other workload management systems, session management in Gridware Cluster Scheduler is automatic. There is no need to manually create or destroy sessions after they have been enabled globally.
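If the extra synchronization overhead is not wanted, the behavior can be reverted with the qmaster_params parameter named above; a minimal sketch:

```shell
# Disable automatic sessions (they are on by default in 9.0.2).
# Open the cluster configuration in an editor and extend qmaster_params:
qconf -mconf
#   qmaster_params   ...,DISABLE_AUTOMATIC_SESSIONS=true
```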
-
The sge_qmaster monitoring has been improved. Beginning with this patch, the output sections for worker and reader threads will show the following numbers:

```
... OTHER (ql:0,rql:0,wrql:0) ...
```

All three values show internal request queue lengths. Usually they are all 0, but in high-load situations or when sessions are enabled they can increase:
- ql shows the queue length of the worker threads. This request queue contains requests that require a write lock on the main data store.
- rql shows the queue length of the reader threads. The queue contains requests that require a read lock on the secondary reader data store.
- wrql shows the queue length of the waiting reader threads. All requests that cannot be handled by reader threads immediately are stored in this list until the secondary reader data store is ready to handle them. If sessions are disabled, this number will always be 0.
Increasing values are not critical as long as the numbers also decrease again. If the numbers increase continuously, the system is under high load and performance might be impacted.
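To watch these values over time, the three queue lengths can be extracted from a captured monitoring line with standard shell tools. A minimal sketch; the sample line and its values are made up for illustration:

```shell
# Illustrative monitoring line in the format shown above (values invented).
line='... OTHER (ql:3,rql:1,wrql:0) ...'

# Pull out the three queue lengths; continuously growing values would
# indicate that sge_qmaster is falling behind on queued requests.
ql=$(echo "$line"   | sed -n 's/.*(ql:\([0-9]*\),rql:\([0-9]*\),wrql:\([0-9]*\)).*/\1/p')
rql=$(echo "$line"  | sed -n 's/.*(ql:\([0-9]*\),rql:\([0-9]*\),wrql:\([0-9]*\)).*/\2/p')
wrql=$(echo "$line" | sed -n 's/.*(ql:\([0-9]*\),rql:\([0-9]*\),wrql:\([0-9]*\)).*/\3/p')
echo "ql=$ql rql=$rql wrql=$wrql"
```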
(Available in Open Cluster Scheduler and Gridware Cluster Scheduler)
Departments, Users and Jobs - Department View
With the release of patch 9.0.2, we have removed the restriction that users can only be assigned to one department. Users can now be assigned to multiple departments. This is particularly useful in environments where users are members of multiple departments in a company and access to resources is based on department affiliation.
Jobs must still be assigned to a single department. This means that a user who is a member of multiple departments can submit jobs to any of the departments of which he/she is a member by specifying the department in the job submission command using the -dept switch. If a user does not specify a particular department, sge_qmaster assigns the job to the first department found.
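A submission with an explicit department might look like the following; the department name and job script are made-up examples, only the -dept switch itself comes from the text above:

```shell
# Submit into a specific department; 'research' is a hypothetical
# department the submitting user is assumed to be a member of.
qsub -dept research myjob.sh

# Without -dept, sge_qmaster assigns the first matching department.
qsub myjob.sh
```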
Using qstat and qhost, the output can be filtered based on access lists and departments using the -sdv switch. When this switch is used, the following applies:
- Only the hosts/queues to which the user has access are displayed.
- Jobs are only displayed if they belong to the executing user or to a user who is in one of the departments that the executing user is also part of.
- Child objects are only displayed if the user also has access to the corresponding parent object. This means that jobs are not displayed if the queue or host where they are running does not offer access (anymore), and queues are not displayed if their host is not accessible (anymore).
Please note that this may result in situations where users are no longer able to see their own jobs if access permissions are changed for a user who has jobs running in the system.
Users having the manager role always see all hosts, queues and jobs, independent of the use of the -sdv switch.
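In practice the switch is simply appended to the usual status commands; the output naturally depends on the cluster:

```shell
# Restrict the view to the hosts, queues and jobs visible to the
# calling user's departments and access lists:
qstat -sdv
qhost -sdv
```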
Please note that this specific functionality is still in the beta phase. It is only available in Gridware Cluster Scheduler, and the implementation will change with upcoming patch releases.
GCS v9.0.1
The first patch release of Gridware Cluster Scheduler v9.0.1 is available. Packages can be found here: https://www.hpc-gridware.com/download-main/
Starting with patch 9.0.1, the new internal architecture of sge_qmaster is enabled, allowing the component to use additional data stores that can be utilized by pools of threads.
-
Listener threads: The listener thread pool was already available in earlier versions of Grid Engine. Starting with version 9.0.0 of Cluster Scheduler, this pool received a dedicated data store to forward incoming requests faster to the component that ultimately has to process the request. New in version 9.0.1 is that this data store includes more information so that the listener threads themselves can directly answer certain requests without having to forward them. This reduces internal friction and makes the cluster more responsive even in high-load situations.
-
Reader thread pool: The reader thread pool is activated and can now utilize a corresponding data store. This will boost the performance of clusters in large environments where users tend to request the status of the system very often by using client commands like qstat, qhost or other commands that send read-only requests to sge_qmaster. The additional data store needs to be enabled manually by setting the following parameter in the qmaster_params of the cluster configuration:

```
> qconf -mconf
...
qmaster_params    ...,DISABLE_SECONDARY_DS_READER=false
...
```
Please note that requests answered by the reader thread pool might deliver slightly outdated data compared to requests answered with data from the main data store, because both data stores can be slightly out of sync. The maximum deviation can be configured by setting MAX_DS_DEVIATION in milliseconds in the qmaster_params:

```
> qconf -mconf
...
qmaster_params    ...,MAX_DS_DEVIATION=1000
...
```

The default value is 1000 milliseconds. The value should be chosen carefully to balance the performance gain against the accuracy of the data.
With one of the upcoming patches we will introduce an additional concept of automatic sessions that will allow the data stores to be synchronized more efficiently, so that client commands can be enforced to get the most recent data.
-
Enhanced monitoring: The monitoring of sge_qmaster has been enhanced to provide more detailed information about the utilization of the different thread pools. As in the past, monitoring is enabled by setting the monitoring time:

```
> qconf -mconf
...
qmaster_params    ...,MONITOR_TIME=10
...
```

qping will then show statistics about the handled requests per thread:

```
qping -i 1 -f <master_host> $SGE_QMASTER_PORT qmaster 1
...
10/11/2024 12:54:53 | reader: runs: 261.04r/s ( GDI (a:0.00,g:2871.45,m:0.00,d:0.00,c:0.00,t:0.00,p:0.00)/s OTHER (ql:0)) out: 261.04m/s APT: 0.0007s/m idle: 80.88% wait: 0.01% time: 9.99s
10/11/2024 12:54:53 | reader: runs: 279.50r/s ( GDI (a:0.00,g:3074.50,m:0.00,d:0.00,c:0.00,t:0.00,p:0.00)/s OTHER (ql:0)) out: 279.50m/s APT: 0.0007s/m idle: 79.08% wait: 0.01% time: 10.00s
10/11/2024 12:54:53 | listener: runs: 268.65r/s ( in (g:268.34 a:0.00 e:0.00 r:0.30)/s GDI (g:0.00,t:0.00,p:0.00)/s) out: 0.00m/s APT: 0.0001s/m idle: 98.42% wait: 0.00% time: 9.99s
10/11/2024 12:54:53 | listener: runs: 255.37r/s ( in (g:255.37 a:0.00 e:0.00 r:0.00)/s GDI (g:0.00,t:0.00,p:0.00)/s) out: 0.00m/s APT: 0.0001s/m idle: 98.54% wait: 0.00% time: 10.00s
```
Here is the download link to the full Release Notes of Gridware Cluster Scheduler v9.0.1
OCS v9.0.0
Open Cluster Scheduler v9.0.0 is available. Pre-built packages can be found here: https://www.hpc-gridware.com/download-main/