Implemented edits from doc review
Added article ID needed for revhistory tag.
Added subtitle to differentiate this document from existing version
without angi.
Fixed typos, punctuation, wording, format.
Fixed wrong section names of SLE HA admin guide.
chabowski committed May 23, 2024
1 parent 68243ce commit 98067cb
Showing 1 changed file with 61 additions and 60 deletions.
121 changes: 61 additions & 60 deletions adoc/SLES4SAP-hana-angi-perfopt-15.adoc
@@ -1,13 +1,16 @@
:docinfo:

// defining article ID
[#art-sles4sap-hana-angi-perfopt-15]

// Load document variables
include::Var_SLES4SAP-hana-angi-perfopt-15.txt[]
include::Var_SLES4SAP-hana-angi-perfopt-15-param.txt[]
//
// Start of the document
//

= {SAPHANA} System Replication Scale-Up - Performance Optimized Scenario
= {SAPHANA} System Replication Scale-Up - Performance Optimized Scenario: with SAPHanaSR-angi

[[pre.hana-sr]]
== About this guide
@@ -46,15 +49,15 @@ the high availability solution scenario "{SAPHANA} Scale-Up System Replication P

From the application perspective, the following variants are covered:

- plain system replication
- Plain system replication

- system replication with secondary site read-enabled
- System replication with secondary site read-enabled

- multi-tier (chained) system replication
- Multi-tier (chained) system replication

- multi-target system replication
- Multi-target system replication

- multi-tenant database containers for all above
- Multi-tenant database containers for all above

From the infrastructure perspective, the following variants are covered:

@@ -68,7 +71,7 @@ From the infrastructure perspective, the following variants are covered:

Deployment automation simplifies roll-out. There are several options available,
particularly on public cloud platforms (for example https://www.suse.com/c/automating-the-sap-hana-high-availability-cluster-deployment-for-microsoft-azure/).
Ask your public cloud provider or your SUSE contact for details.
Ask your public cloud provider or your SUSE contact for more information.

See <<cha.hana-sr.scenario>> for details.

@@ -189,7 +192,7 @@ Thus you can use the above documents for both kinds of scenarios.

In case of failure of the primary {HANA} on node 1 (node or database
instance) the cluster first tries to start the takeover process. This
allows to use the already loaded data at the secondary site. Typically
allows to use the already loaded data at the secondary site. Typically,
the takeover is much faster than the local restart.

To achieve an automation of this resource handling process, you must
@@ -214,7 +217,8 @@ the cluster continues to poll the system replication status on a regular basis.

You can adjust the level of automation by setting the parameter `AUTOMATED_REGISTER`.
If automated registration is activated, the cluster will automatically register
a former failed primary to become the new secondary. Refer to the manual pages SAPHanaSR(7) and ocf_suse_SAPHana(7) for details on all supported parameters and features.
a former failed primary to become the new secondary. Refer to the manual pages
SAPHanaSR(7) and ocf_suse_SAPHana(7) for details on all supported parameters and features.
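As an illustration of where `AUTOMATED_REGISTER` lives, the following sketch shows a crm resource definition fragment. The SID `HA1`, instance number `10` and the timeout values are assumptions for illustration only; see the manual page ocf_suse_SAPHana(7) for the authoritative parameter list.

```shell
# Sketch only: a crm resource fragment showing AUTOMATED_REGISTER.
# SID HA1, instance number 10 and the timeouts are illustrative
# assumptions; real values come from your own landscape.
write_saphana_fragment() {
  cat <<'EOF'
primitive rsc_SAPHanaCon_HA1_HDB10 ocf:suse:SAPHanaController \
  params SID=HA1 InstanceNumber=10 \
    PREFER_SITE_TAKEOVER=true DUPLICATE_PRIMARY_TIMEOUT=7200 \
    AUTOMATED_REGISTER=false \
  op monitor interval=60 timeout=700
EOF
}
write_saphana_fragment > /tmp/rsc_saphana.txt
# AUTOMATED_REGISTER=false keeps a failed former primary down until
# an administrator registers it manually as the new secondary.
grep 'AUTOMATED_REGISTER' /tmp/rsc_saphana.txt
```

On a real cluster node, such a fragment would be loaded with `crm configure load update <file>`.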

IMPORTANT: The solution is not designed to manually 'migrate' the primary or
secondary instance using HAWK or any other cluster client commands. In the
@@ -239,20 +243,18 @@ resources that are either available on the system or on the Internet.

For the latest documentation updates, see https://documentation.suse.com/.

You can also find numerous white-papers, best-practices, setup guides, and
other resources at the {sles4sap} best practices Web page:
{reslibrary}.
There is particularly an overview on all {suse} high availability solutions for
{saphana} and {s4hana} workloads.
You can find numerous whitepapers, best practices, setup guides, and
other resources on the {sles4sap} best practices Web page:
{reslibrary}. In particular, there is an overview of all {suse} high availability solutions for
{saphana} and {s4hana} workloads. Find this overview here:

https://documentation.suse.com/sles-sap/sap-ha-support/html/sap-ha-support/article-sap-ha-support.html

SUSE also publishes blog articles about {sap} and high availability.
Join us by using the hashtag #TowardsZeroDowntime. Use the following link:
https://www.suse.com/c/tag/TowardsZeroDowntime/.

Supported high availability solutions by {sles4sap} overview:
https://documentation.suse.com/sles-sap/sap-ha-support/html/sap-ha-support/article-sap-ha-support.html

Lastly, there are manual pages shipped with the product.
Finally, there are manual pages shipped with the product.

==== Errata

@@ -269,8 +271,7 @@ see also the blog article https://www.suse.com/c/lets-flip-the-flags-is-my-sap-h

// TODO PRIO2: replace below with correct TID
// In addition to this guide, check the SUSE SAP Best Practice Guide Errata for
// other solutions
{tidNotes}7023713.
// other solutions {tidNotes}7023713.

// Standard SUSE includes
==== Feedback
@@ -281,7 +282,7 @@ include::common_intro_feedback.adoc[]
[[cha.hana-sr.scenario]]
== Supported scenarios and prerequisites

For the `{saphanasr}` package configure as decribed in the document at hand,
For the `{saphanasr}` package configuration as described in this document,
we limit the support to scale-up (single-box to single-box) system replication
with the following configurations and parameters:

@@ -388,10 +389,10 @@ memory can be used, as long as they are transparent to Linux HA.
For the HA/DR provider hook scripts susHanaSR.py and susTkOver.py, the following
requirements apply:

* {HANA} 2.0 SPS05 revision 059.04 and later provides Python3 as well as the HA/DR
provider hook method srConnectionChanegd() with multi-target aware parameters.
Python 3 and multi-target aware parameters are needed for the {saphanasr} package.
* {HANA} 2.0 SPS05 and later provides the HA/DR provider hook method preTakeover().
* {HANA} 2.0 SPS05 revision 059.04 and later provides Python3 and the HA/DR
provider hook method *srConnectionChanged()* with multi-target-aware parameters.
Python 3 and multi-target-aware parameters are needed for the `{saphanasr}` package.
* {HANA} 2.0 SPS05 and later provides the HA/DR provider hook method *preTakeover()*.
// TODO PRIO1: check above version
* The user _{refsidadm}_ needs execution permission as user root for the command
crm_attribute.
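A common way to grant that permission is a sudoers rule. The following is a sketch under assumptions: SID `HA1`, hence user `ha1adm`; the snippet is written to /tmp only so the example is runnable stand-alone, whereas a real system would use a drop-in under /etc/sudoers.d/ checked with `visudo -c`. The exact rule for your installation is documented in the {saphanasr} manual pages.

```shell
# Sketch: passwordless root execution of crm_attribute for the
# <sid>adm user (here ha1adm, an assumption) via a sudoers snippet.
cat > /tmp/SAPHanaSR_sudoers_example <<'EOF'
# SAPHanaSR-angi hook scripts update cluster attributes as root
ha1adm ALL=(ALL) NOPASSWD: /usr/sbin/crm_attribute -n hana_ha1_*
EOF
grep -c 'NOPASSWD' /tmp/SAPHanaSR_sudoers_example
```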
@@ -436,7 +437,7 @@ because careful testing is needed.
This document describes how to set up the cluster to control {HANA} in
System Replication scenarios. The document focuses on the steps to integrate
an already installed and working {HANA} with System Replication.
In this document {sles4sap} {prodNr} {prodSP} is used. This concept can also be
To create this document, {sles4sap} {prodNr} {prodSP} was used. However, the concept can also be
used with {sles4sap} {prodNr} SP4 or newer.

The described example setup builds an {HANA} HA cluster in two data centers in
@@ -608,6 +609,7 @@ system replication also Instance Number+1 is blocked.
|NTP Server |pool pool.ntp.org|Address or name of your time server
|=======================================================================

// DO NOT CHANGE SECTION ID: refer to Trento check
[[cha.s4s.os-install]]
== {stepOS}

@@ -650,8 +652,8 @@ that fit all requirements for {HANA} are available from the SAP notes:
// Refer to Trento checks
==== Installing additional software
With {sles4sap}, {SUSE} delivers special resource agents for {HANA}. With the
pattern _sap-hana_, the old-style resource agent package SAPHanaSR is installed.
This package needs to be replaced by the new {saphanasr} package.
pattern _sap-hana_, the old-style resource agent package `SAPHanaSR` is installed.
This package needs to be replaced by the new `{saphanasr}` package.
Follow the instructions
below on each node if you have installed the systems based on SAP note {sapnote15}.
The pattern _High Availability_ summarizes all tools recommended to be installed on
@@ -667,7 +669,7 @@ The pattern _High Availability_ summarizes all tools recommended to be installed
{sapnode1}:~ # zypper in --type pattern ha_sles
----
. De-install the old-style package and install the new {saphanasr} resource agents.
. Uninstall the old-style package and install the new `{saphanasr}` resource agents.
Do this on all nodes.
+
[subs="attributes,quotes"]
@@ -678,16 +680,16 @@ Do this on all nodes.
====

NOTE: Do not replace the package SAPHanaSR by SAPHanaSR-angi in an already running cluster.
Upgrading from SAPHanaSR to {saphanasr} requires a certain procedure. See manual page
NOTE: Do not replace the package `SAPHanaSR` by `SAPHanaSR-angi` in an already running cluster.
Upgrading from `SAPHanaSR` to `{saphanasr}` requires a specific procedure. See manual page
SAPHanaSR_upgrade_to_angi(7) for details.

Installing the packages supportutils-plugin-ha-sap and ClusterTools2 is highly
Installing the packages `supportutils-plugin-ha-sap` and `ClusterTools2` is highly
recommended. The first helps collecting data for support requests, the second
simplifies common administrative tasks.

For more information, see section _Installation and Basic Setup_ of the {uarr}
{sleha} guide.
For more information, see section _Installation and Setup_ of the
{sleha} Administration Guide.


// DO NOT CHANGE SECTION ID: refer to Trento check
@@ -776,7 +778,7 @@ The SAP hostagent `saphostagent.service` and the instance´s `sapstartsrv` `SAP{
are running in the `SAP.slice`.
See also manual pages systemctl(8) and systemd-cgls(8) for details.


// DO NOT CHANGE SECTION ID: refer to Trento check
[[cha.s4s.hana-sys-replication]]
== {stepHSR}

Expand All @@ -788,7 +790,7 @@ For more information read the section _Setting Up System Replication_ of the
**Procedure**

. Back up the primary database.
. Enable primary database.
. Enable the primary database.
. Register and start the secondary database.
. Verify the system replication.
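The four steps above can be sketched as a dry-run command listing. SID `HA1`, instance `10`, site names `WDF`/`ROT` and host `suse01` are assumptions; the real commands are run as the <sid>adm user as described in the {HANA} administration documentation.

```shell
# Dry run: print one typical command per procedure step.
# SID, instance number, site names and host are assumptions.
sr_setup_steps() {
  printf '%s\n' \
    "1 backup  : hdbsql -i 10 -u SYSTEM -d SYSTEMDB \"BACKUP DATA USING FILE ('initial')\"" \
    "2 enable  : hdbnsutil -sr_enable --name=WDF" \
    "3 register: hdbnsutil -sr_register --remoteHost=suse01 --remoteInstance=10 --replicationMode=sync --operationMode=logreplay --name=ROT && HDB start" \
    "4 verify  : HDBSettings.sh systemReplicationStatus.py"
}
sr_setup_steps
```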

@@ -985,29 +987,29 @@ Before you integrate your {HANA} system replication into the HA cluster, it is
mandatory to do a manual takeover. Testing without the cluster helps to make
sure that basic operation (takeover and registration) is working as expected.

* Stop {HANA} on node 1
* Stop {HANA} on node 1.

* Takeover {HANA} to node 2
* Takeover {HANA} to node 2.

* Register node 1 as secondary
* Register node 1 as secondary.

* Start {HANA} on node 1
* Start {HANA} on node 1.

* Wait until sync state is active
* Wait until sync state is active.
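The test sequence above can be sketched as a dry-run checklist. Host names `suse01`/`suse02`, instance `10` and site name `WDF` are assumptions; `hdbnsutil` and `HDB` are the standard {HANA} command line tools.

```shell
# Dry run: print the manual takeover test as a checklist.
# Host names, instance number and site name are assumptions.
takeover_checklist() {
  printf '%s\n' \
    "node1: HDB stop" \
    "node2: hdbnsutil -sr_takeover" \
    "node1: hdbnsutil -sr_register --remoteHost=suse02 --remoteInstance=10 --replicationMode=sync --operationMode=logreplay --name=WDF" \
    "node1: HDB start" \
    "check: HDBSettings.sh systemReplicationStatus.py reports ACTIVE"
}
takeover_checklist
```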

=== Optional: Manually re-establishing {HANA} SR to original state

Bring the systems back to the original state:

* Stop {HANA} on node 2
* Stop {HANA} on node 2.

* Take over {HANA} to node 1
* Take over {HANA} to node 1.

* Register node 2 as secondary
* Register node 2 as secondary.

* Start {HANA} on node2
* Start {HANA} on node2.

* Wait until sync state is active
* Wait until sync state is active.

// DO NOT CHANGE SECTION ID: refer to Trento check
[[cha.s4s.hana-hook]]
@@ -1046,14 +1048,14 @@ script is `susChkSrv.py`.
// TODO PRIO2: Steps "Start" and "Test" are incomplete

This will implement three {HANA} HA/DR provider hook scripts.
The hook script susHanaSR.py is needs no config parameters.
The configuration for susTkOver.py normally does not need to be adapted.
The default for parameter sustkover_timeout is set to 30 seconds and is good
The hook script `susHanaSR.py` does not need any configuration parameters.
The configuration for `susTkOver.py` normally does not need to be adapted.
The default for parameter `sustkover_timeout` is set to 30 seconds, which is good
for most environments.
The configuration shown for susChkSrv.py is a good starting point. Any tuning
The configuration shown for `susChkSrv.py` is a good starting point. Any tuning
should be aligned with the SAP experts.

NOTE: All hook scripts should be used directly from the SAPHanaSR package.
NOTE: All hook scripts should be used directly from the `SAPHanaSR` package.
If the scripts are moved or copied, regular {SUSE} package updates will not work.

{HANA} must be stopped to change the global.ini and allow {HANA} to integrate
@@ -1064,7 +1066,7 @@ for details.

=== Implementing susHanaSR hook for srConnectionChanged

Use the hook from the {saphanasr} package /usr/share/SAPHanaSR-angi/susHanaSR.py.
Use the hook from the {saphanasr} package `/usr/share/SAPHanaSR-angi/susHanaSR.py`.
The hook must be configured on all {HANA} cluster nodes.
In global.ini, the section `[ha_dr_provider_sushanasr]` needs to be created.
The section `[trace]` might be adapted.
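As a sketch, the resulting global.ini fragment might look like the following. The provider path follows the {saphanasr} package default; the snippet is written to /tmp only so it can be tried stand-alone, whereas on a real system it is maintained in the {HANA} global.ini.

```shell
# Illustrative global.ini fragment for the susHanaSR hook; on a real
# system this is maintained in the HANA global.ini, not in /tmp.
cat > /tmp/global_ini_sushanasr <<'EOF'
[ha_dr_provider_sushanasr]
provider = susHanaSR
path = /usr/share/SAPHanaSR-angi/
execution_order = 1

[trace]
ha_dr_sushanasr = info
EOF
grep -c '^\[' /tmp/global_ini_sushanasr
```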
@@ -1100,7 +1102,7 @@ ha_dr_sushanasr = info

=== Implementing susTkOver hook for preTakeover

Use the hook from the {saphanasr} package /usr/share/SAPHanaSR-angi/susTkOver.py.
Use the hook from the {saphanasr} package `/usr/share/SAPHanaSR-angi/susTkOver.py`.
The hook must be configured on all {HANA} cluster nodes.
In global.ini, the section `[ha_dr_provider_sustkover]` needs to be created.
The section `[trace]` might be adapted.
@@ -1136,7 +1138,7 @@ ha_dr_sustkover = info

=== Implementing susChkSrv hook for srServiceStateChanged

Use the hook from the {saphanasr} package /usr/share/SAPHanaSR-angi/susChkSrv.py.
Use the hook from the {saphanasr} package `/usr/share/SAPHanaSR-angi/susChkSrv.py`.
The hook must be configured on all {HANA} cluster nodes.
In global.ini, the section `[ha_dr_provider_suschksrv]` needs to be created.
The section `[trace]` might be adapted.
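As with the other hooks, a sketch of the fragment follows. The `action_on_lost` value shown is an assumed example and should be aligned with your SAP experts; the snippet is written to /tmp only for stand-alone illustration.

```shell
# Illustrative global.ini fragment for the susChkSrv hook; the
# action_on_lost setting is an assumed example value.
cat > /tmp/global_ini_suschksrv <<'EOF'
[ha_dr_provider_suschksrv]
provider = susChkSrv
path = /usr/share/SAPHanaSR-angi/
execution_order = 3
action_on_lost = stop

[trace]
ha_dr_suschksrv = info
EOF
grep -c 'action_on_lost' /tmp/global_ini_suschksrv
```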
@@ -1664,9 +1666,8 @@ the SBD device.
0 {sapnode1} clear
----

For more information on SBD configuration parameters, read the
section _Storage-based Fencing_, {uarr}SUSE Linux Enterprise High
Availability Extension and TIDs 7016880 and 7008216.
For more information on SBD configuration parameters, consult the respective sections of the SUSE Linux Enterprise High
Availability Administration Guide and the TIDs 7016880 and 7008216.

Now it is time to restart the cluster at the first node again (`{clusterstart}`).

@@ -1743,8 +1744,8 @@ Full List of Resources:
=== Configuring cluster properties and resources
This section describes how to configure constraints, resources, bootstrap, and STONITH,
using the `crm configure` shell command as described in section _Configuring and Managing Cluster Resources (Command Line)_
of the {uarr}SUSE Linux Enterprise High Availability Extension documentation.
using the `crm configure` shell command as described in part II _Configuration and Administration_
of the SUSE Linux Enterprise High Availability Administration Guide.
Use the command `crm` to add the objects to the cluster information base (CIB). Copy the following
examples to a local file, edit the file and then load the configuration to the CIB:
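As a generic sketch of that workflow (the file name and property values here are illustrative, not the guide's actual examples):

```shell
# Generic sketch of the crm workflow: write the configuration to a
# file, then load it into the CIB. Property values are illustrative.
cat > /tmp/crm-bs.txt <<'EOF'
property cib-bootstrap-options: \
  stonith-enabled=true \
  stonith-action=reboot
EOF
# On a cluster node, the file would then be loaded with:
#   crm configure load update /tmp/crm-bs.txt
grep -c 'stonith' /tmp/crm-bs.txt
```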
@@ -2973,7 +2974,7 @@ down by intention, this could trigger a takeover.
[[sec-maintenance]]
To receive updates for the operating system or the {sleha},
To receive updates for the operating system or {sleha},
it is recommended to register your systems to either a local {suma}, to {rmtool} ({rmt}),
or remotely with {scc}.
For more information, visit the respective Web pages:
