Implemented edits from doc review
According to style checker, fixed some wording and style, shortened
sentences.
chabowski committed Jan 22, 2025
1 parent 56d7ee9 commit 75699d4
Showing 1 changed file with 16 additions and 9 deletions.
25 changes: 16 additions & 9 deletions adoc/SLES4SAP-HANAonKVM-15SP5.adoc
@@ -519,7 +519,7 @@ cpupower -c all info
Modern processors also attempt to save power when they are idle, by switching to a lower power state.
Unfortunately, this incurs latency when switching in and out of these states.
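
Which C-states a given processor exposes can be inspected with the `cpupower` tool mentioned above. The invocation below is only an illustrative sketch; the reported states differ per CPU model:

----
# List the idle states (C-states) the kernel knows about for this machine
cpupower idle-info
----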

To avoid that, and to achieve better and more consistent performance, the CPUs should not be allowed to switch into those power-saving modes (known as *C-states*). This means they should stay in normal operation mode all the time.
Therefore, it is recommended to only use the state *C0*.

This can be enforced by adding the following parameter to the kernel boot command line: `intel_idle.max_cstate=0`.
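
One possible way to make this persistent (a sketch assuming the default GRUB 2 file locations on the host) is to append the parameter to the boot loader defaults and regenerate the GRUB configuration:

----
# /etc/default/grub (excerpt): keep the existing parameters and append the new one
GRUB_CMDLINE_LINUX_DEFAULT="... intel_idle.max_cstate=0"

# Regenerate the boot loader configuration, then reboot
grub2-mkconfig -o /boot/grub2/grub.cfg
----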
@@ -712,7 +712,8 @@ This means that, in total, there needs to be the following number of huge pages:

This number must be passed to the host kernel command line as a boot parameter (that is, `hugepages=3758`; see <<_sec_technical_explanation_of_the_above_described_configuration_settings>>).
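
After the next reboot, whether the setting took effect can be verified on the host, for example as follows (a sketch; the reported number depends on the configured value):

----
# Show the active kernel command line
cat /proc/cmdline

# Show how many 1 GiB huge pages have been allocated
cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
----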

The guest VM configuration file must specify both the total memory the VM will use and that this memory must come from 1 GiB huge pages.
This means that the total available memory is the total of all configured 1 GiB huge pages on the host (in KiB).
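For instance, assuming the 3758 huge pages from the example above, this corresponds to 3758 GiB, that is 3758 x 1024 x 1024 = 3,940,548,608 KiB of guest memory.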

You must also ensure that the `memory` and the `currentMemory` elements have the same value. This disables memory ballooning, which, if enabled, would cause unacceptable latency:

@@ -734,7 +735,7 @@ You must also ensure that the `memory` and the `currentMemory` element have the
.Memory Unit
[NOTE]
====
The memory unit can be set to GiB to simplify memory calculations.
====
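
To summarize the points above in one place, a minimal sketch of such a memory definition could look like the snippet below. The value of 3758 GiB is taken from the huge page example earlier and must be adapted to the actual setup:

----
<memory unit='GiB'>3758</memory>
<currentMemory unit='GiB'>3758</currentMemory>
<memoryBacking>
  <hugepages>
    <page size='1' unit='GiB'/>
  </hugepages>
</memoryBacking>
----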

[[_sec_vcpu_and_vnuma_topology]]
@@ -755,7 +756,7 @@ Also refer to <<_sec_memory_backing>> and <<_sec_memory_sizing>> of the document
** each NUMA cell of the guest VM has 56 vCPUs.
** the distances between the cells are identical to those of the physical hardware (as per the output of the command `numactl --hardware`, illustrated below).
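
For reference, the cell distances of the physical host appear in the `node distances` table printed by `numactl --hardware`. The output below is purely illustrative for a hypothetical 4-socket host:

----
node distances:
node   0   1   2   3
  0:  10  21  21  21
  1:  21  10  21  21
  2:  21  21  10  21
  3:  21  21  21  10
----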

The examples below show configuration snippets for full-size single-VM layouts on a 4-node system containing {cascadelake} CPUs (first example) and on a 2-node system containing {sapphirerapids} CPUs (second example).

----
<domain type='kvm'>
@@ -839,7 +840,8 @@ For example, assuming that the first hyperthread sibling pair is CPU 0 and CPU 1

It is recommended to pin each sibling pair of vCPUs to the corresponding sibling pair of host CPUs.
For example, vCPU 0 should be pinned to pCPU 0 and 112, and the same applies to vCPU 1.
As long as both vCPUs always run on the same physical core, the host scheduler can execute them on either thread, for instance if one thread is free while the other is busy with host or hypervisor activities.
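
In libvirt terms, such sibling-pair pinning is expressed through `vcpupin` entries in the `cputune` element. The following is only a minimal sketch using the hypothetical CPU numbering from above (host threads 0 and 112 forming the first sibling pair):

----
<cputune>
  <vcpupin vcpu='0' cpuset='0,112'/>
  <vcpupin vcpu='1' cpuset='0,112'/>
  <!-- continue analogously for the remaining vCPU sibling pairs -->
</cputune>
----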

Using the above information, the CPU and memory pinning section of the guest VM XML can be created.
A practical example based on the hypothetical scenario above is shown below.
@@ -852,7 +854,7 @@ Make sure to take note of the following configuration components:
** The `mode` attribute should be set to `strict`.
** The appropriate number of nodes should be entered in the `nodeset` and `memnode` attributes. In the first example, there are 4 sockets; therefore, the values are `nodeset=0-3` and `cellid` 0 to 3 (see the `numatune` sketch below).
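
Taken in isolation, and only as a sketch for the 4-socket case described above, the resulting `numatune` element could look like this (the complete example snippets follow below):

----
<numatune>
  <memory mode='strict' nodeset='0-3'/>
  <memnode cellid='0' mode='strict' nodeset='0'/>
  <memnode cellid='1' mode='strict' nodeset='1'/>
  <memnode cellid='2' mode='strict' nodeset='2'/>
  <memnode cellid='3' mode='strict' nodeset='3'/>
</numatune>
----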

The examples below show configuration snippets for full-size single-VM layouts on a 4-node system containing {cascadelake} CPUs (first example) and on a 2-node system containing {sapphirerapids} CPUs (second example).

----
<domain type='kvm'>
@@ -1056,7 +1058,11 @@ More details about how to directly assign PCI devices to a guest VM are described

===== Local storage

To achieve the best possible performance, it is recommended to directly attach the block device(s) and/or RAID controllers which will be used as storage for the SAP HANA data files.
If a dedicated RAID controller is available in the system that only manages devices and RAID volumes used in one single VM, the recommendation is to connect it via PCI passthrough, as described in the section above.
If single devices need to be used (for example NVMe devices), you can connect them to the VM as shown in the example below:

// TODO: Trockencode (untested code)! Check this before publishing!!!

----
@@ -1295,7 +1301,8 @@ This overhead leads to an additional transactional throughput loss. However, it
** The measured performance deviation for OLAP workload is below 5%.
** During performance analysis with standard workload, most of the test cases stayed within the defined KPI of 10% performance degradation compared to bare metal.
However, there are low-level performance tests in the test suite exercising various HANA kernel components that exhibit a performance degradation of more than 10%.
This also indicates that certain scenarios may not be suited for SAP HANA on SUSE KVM with `kvm.nx_huge_pages = AUTO`.
This is especially true for workloads that generate high resource utilization, which must be considered when sizing the SAP HANA instance in a SUSE KVM virtual machine.
Thorough tests of configurations for all workload conditions are highly recommended.
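
For completeness, the setting discussed above is usually expressed either on the kernel command line or checked through the module parameter in sysfs. This is only a sketch and assumes the lowercase spelling (`auto`) expected by the upstream kernel:

----
# Kernel command line variant (add to the boot parameters, then reboot)
kvm.nx_huge_pages=auto

# Verify the value currently in effect
cat /sys/module/kvm/parameters/nx_huge_pages
----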


@@ -1385,7 +1392,7 @@ The XML file below is only an *example* showing the key configurations to assist
The actual XML configuration must be based on your respective hardware configuration and VM requirements.
====

Points of interest in this example (refer to the detailed sections of this *SUSE Best Practices for SAP HANA on KVM* [{sles4sap} {slesProdVersion}] guide for a full explanation):

* Memory
** The hypervisor has 4 TiB RAM (or 4096 GiB), of which 3698 GiB have been allocated as 1 GiB huge pages; therefore, 3698 GiB is the maximum VM size in this case
