SUSE / ha-sap-terraform-deployments

Automated SAP/HA Deployments in Public/Private Clouds
GNU General Public License v3.0

Hana cost-optimized scenario fails on Azure #703

Closed lpalovsky closed 3 years ago

lpalovsky commented 3 years ago

Attached logs: salt-deployment.log, salt-result.log, salt-predeployment.log, salt-os-setup.log

Used cloud platform: Azure

Used SLES4SAP version: 15 SP3

Used client machine OS: SUSE Linux

Expected behaviour vs observed behaviour: after deployment of the HANA cluster in the cost-optimized scenario, the secondary database does not start, reporting "HDB deamon not running". After investigating a bit I found that the file /hana/shared/PRD/exe/linuxx86_64/hdb/python_support/hdbcli/dbapi.py contains a relative import that causes the problem:

cdtrace
prdadm@vmhana02:/usr/sap/PRD/HDB00/vmhana02/trace> grep  srCostOptMemConfig nameserver_*.trc

nameserver_vmhana02.30001.000.trc:[671]{-1}[-1/-1] 2021-05-18 10:54:02.469968 e ha_dr_provider PythonProxyImpl.cpp(00091) : import of srCostOptMemConfig failed: srCostOptMemConfig.py(39): Attempted relative import in non-package

vmhana01:/home/cloudadmin # head -n 10 /hana/shared/PRD/exe/linuxx86_64/hdb/python_support/hdbcli/dbapi.py | grep .result
from .resultrow import ResultRow

Changing '.resultrow' to 'resultrow' fixed the issue. However, the python script belongs to the SAP software and is specific to the HANA version: in my test I used 2.33, and after checking against 2.52, the script and its imports look different.
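
The failure mode can be reproduced outside HANA. A relative import such as `from .resultrow import ResultRow` only resolves when the module is loaded as part of a package; loaded as a standalone file, the way the HA/DR provider loads the hook scripts, Python rejects it. A minimal illustration (not HANA code):

```python
# A relative import is only valid inside a package. In a top-level
# (non-package) namespace -- which is how HANA's HA/DR provider loads
# the hook scripts -- Python rejects it. Python 2 reports "Attempted
# relative import in non-package", Python 3 raises a similar ImportError.
try:
    exec("from .resultrow import ResultRow", {"__name__": "__main__"})
except ImportError as err:
    print("relative import rejected:", err)

# A plain absolute import of a module that is on sys.path works fine:
namespace = {"__name__": "__main__"}
exec("import os", namespace)
print("absolute import ok:", "os" in namespace)
```

This is why dropping the leading dot makes the script load again: the import becomes an absolute lookup on sys.path instead of a package-relative one.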

How to reproduce:

  1. Create tfvars file with options for active/active cost optimized deployment
  2. terraform plan
  3. terraform apply
  4. salt deployment fails on secondary node while starting secondary DB
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):           ID: start_hana_prd_00
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):     Function: module.run
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):       Result: False
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):      Comment: An exception occurred in this state: Traceback (most recent call last):
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                 File "/var/cache/salt/minion/extmods/modules/hanamod.py", line 359, in start
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                   hana_inst.start()
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                 File "/usr/lib/python3.6/site-packages/shaptools/hana.py", line 361, in start
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                   self._run_hana_command(cmd)
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                 File "/usr/lib/python3.6/site-packages/shaptools/hana.py", line 190, in _run_hana_command
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                   raise HanaError('Error running hana command: {}'.format(result.cmd))
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):               shaptools.hana.HanaError: Error running hana command: su -lc "sapcontrol -nr00 -function WaitforStarted 2700 2" prdadm

module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):               During handling of the above exception, another exception occurred:

module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):               Traceback (most recent call last):
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                 File "/usr/lib/python3.6/site-packages/salt/state.py", line 2176, in call
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                   *cdata["args"], **cdata["kwargs"]
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                 File "/usr/lib/python3.6/site-packages/salt/loader.py", line 2113, in wrapper
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                   return f(*args, **kwargs)
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                 File "/usr/lib/python3.6/site-packages/salt/utils/decorators/__init__.py",line 738, in _decorate
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                   return self._call_function(kwargs)
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                 File "/usr/lib/python3.6/site-packages/salt/utils/decorators/__init__.py",line 352, in _call_function
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                   return self._function(*args, **kwargs)
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                 File "/usr/lib/python3.6/site-packages/salt/states/module.py", line 422, in run
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                   _func, returner=kwargs.get("returner"), func_args=kwargs.get(func)
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                 File "/usr/lib/python3.6/site-packages/salt/states/module.py", line 467, in _call_function
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                   mret = salt.utils.functools.call_function(__salt__[name], *func_args, **func_kwargs)
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                 File "/usr/lib/python3.6/site-packages/salt/utils/functools.py", line 159,in call_function
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                   return salt_function(*function_args, **function_kwargs)
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                 File "/var/cache/salt/minion/extmods/modules/hanamod.py", line 361, in start
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):                   raise exceptions.CommandExecutionError(err)
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):               salt.exceptions.CommandExecutionError: Error running hana command: su -lc "sapcontrol -nr 00 -function WaitforStarted 2700 2" prdadm
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):      Started: 09:33:39.547356
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):     Duration: 16351.124 ms
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec):      Changes: 

Used terraform.tfvars

#################################
# ha-sap-terraform-deployments project configuration file
# Find all the available variables and definitions in the variables.tf file
#################################

# Region where to deploy the configuration
az_region = "westeurope"

# Use an already existing resource group
#resource_group_name = "my-resource-group"

# Use an already existing virtual network
#vnet_name = "my-vnet"

# Use an already existing subnet in this virtual network
#subnet_name = "my-subnet"

# vnet address range in CIDR notation
# Only used if the vnet is created by terraform or the user doesn't have read permissions in this
# resource. To use the current vnet address range set the value to an empty string
# To define custom ranges
#vnet_address_range = "10.74.0.0/16"
#subnet_address_range = "10.74.1.0/24"
# Or to use already existing address ranges
#vnet_address_range = ""
#subnet_address_range = ""

#################################
# General configuration variables
#################################

# Deployment name. This variable is used to complement the name of multiple infrastructure resources adding the string as suffix
# If it is not used, the terraform workspace string is used
# The name must be unique among different deployments
deployment_name = "lpalovskycostopt"

# Admin user for the created machines
admin_user = "cloudadmin"

# If BYOS images are used in the deployment, SCC registration code is required. Set `reg_code` and `reg_email` variables below
# By default, all the images are PAYG, so these next parameters are not needed
#reg_code = "<<REG_CODE>>"
#reg_email = "<<your email>>"

# To add additional modules from SCC. None of them is needed by default
#reg_additional_modules = {
#    "sle-module-adv-systems-management/12/x86_64" = ""
#    "sle-module-containers/12/x86_64" = ""
#    "sle-ha-geo/12.4/x86_64" = "<<REG_CODE>>"
#}

# Default os_image. This value is not used if the specific values are set (e.g.: hana_os_image)
# Run the next command to get the possible options and use the 4th column value (version can be changed by `latest`)
# az vm image list --output table --publisher SUSE --all
# BYOS example with sles4sap 15 sp2 (this value is a pattern, it will select the latest version that matches this name)
#os_image = "SUSE:sles-sap-15-sp2-byos:gen2:latest"
#os_image = "SLES15-SP3-SAP-BYOS.x86_64-0.9.10-Azure-Build2.42.vhd"

# The project requires a pair of SSH keys (public and private) to provision the machines
# The private key is only used to create the SSH connection, it is not uploaded to the machines
# Besides the provisioning, the SSH connection for these keys will be authorized in the created machines
# These keys are provided using the next two variables in 2 different ways
# Path to already existing keys
public_key  = "~/.ssh/id_rsa.pub"
private_key = "~/.ssh/id_rsa"

# Or provide the content of SSH keys
#public_key  = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt06V...."
#private_key = <<EOF
#-----BEGIN OPENSSH PRIVATE KEY-----
#b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABFwAAAAdzc2gtcn
#...
#P9eYliTYFxhv/0E7AAAAEnhhcmJ1bHVAbGludXgtYWZqOQ==
#-----END OPENSSH PRIVATE KEY-----
#EOF

# Authorize additional keys optionally (in this case, the private key is not required)
# Path to local files or keys content
#authorized_keys = ["/home/myuser/.ssh/id_rsa_second_key.pub", "/home/myuser/.ssh/id_rsa_third_key.pub", "ssh-rsa AAAAB3NzaC1yc2EAAAA...."]

# An additional pair of SSH keys is needed to give the HA cluster the capability to SSH among the machines
# These keys are uploaded to the machines!
# If `pre_deployment = true` is used, these keys are autogenerated
cluster_ssh_pub = "salt://sshkeys/cluster.id_rsa.pub"
cluster_ssh_key = "salt://sshkeys/cluster.id_rsa"

##########################
# Other deployment options
##########################

# Repository url used to install HA/SAP deployment packages
# The latest RPM packages can be found at:
# https://download.opensuse.org/repositories/network:/ha-clustering:/Factory/{YOUR OS VERSION}
# Contains the salt formulas rpm packages.
# To auto detect the SLE version
ha_sap_deployment_repo = "https://download.opensuse.org/repositories/network:ha-clustering:sap-deployments:devel/"
# Otherwise use a specific SLE version:
#ha_sap_deployment_repo = "https://download.opensuse.org/repositories/network:ha-clustering:sap-deployments:devel/SLE_15/"
#ha_sap_deployment_repo = ""

# Provisioning log level (error by default)
#provisioning_log_level = "info"

# Print colored output of the provisioning execution (true by default)
#provisioning_output_colored = false

# Enable pre deployment steps (disabled by default)
pre_deployment = true

# To disable the provisioning process
#provisioner = ""

# Run provisioner execution in background
#background = true

# QA variables

# Define if the deployment is used for testing purpose
# Disable all extra packages that do not come from the image
# Except salt-minion (for the moment) and salt formulas
# true or false (default)
qa_mode = true

# Execute HANA Hardware Configuration Check Tool to bench filesystems
# qa_mode must be set to true for executing hwcct
# true or false (default)
#hwcct = false

##########################
# Bastion (jumpbox) machine variables
##########################

# Enable bastion usage. If this option is enabled, it will create a unique public ip address that is attached to the bastion machine.
# The rest of the machines won't have a public ip address and the SSH connection must be done through the bastion
bastion_enabled = true

# Bastion SSH keys. If they are not set the public_key and private_key are used
#bastion_public_key  = "/home/myuser/.ssh/id_rsa_bastion.pub"
#bastion_private_key = "/home/myuser/.ssh/id_rsa_bastion"

# Bastion machine os image. If it is not provided, the os_image variable data is used
# BYOS example
bastion_os_image = "SUSE:sles-sap-15-sp2-byos:gen2:latest"

#########################
# HANA machines variables
# This example shows the demo option values. Find more options in the README file
#########################

# HANA configuration
# VM size to use for the cluster nodes
hana_vm_size = "Standard_E4s_v3"

# Number of nodes in the cluster
#hana_count = "2"

# Instance number for the HANA database. 00 by default.
#hana_instance_number = "00"

# Network options
hana_enable_accelerated_networking = true

# Disk configuration
#hana_data_disks_configuration = {
#  disks_type       = "Premium_LRS,Premium_LRS,Premium_LRS,Premium_LRS,Premium_LRS,Premium_LRS"
#  disks_size       = "512,512,512,512,64,1024"
#  caching          = "ReadOnly,ReadOnly,ReadOnly,ReadOnly,ReadOnly,None"
#  writeaccelerator = "false,false,false,false,false,false"
#  luns             = "0,1,2#3#4#5"
#  names            = "datalog#shared#usrsap#backup"
#  lv_sizes         = "70,100#100#100#100"
#  paths            = "/hana/data,/hana/log#/hana/shared#/usr/sap#/hana/backup"
#}

# SLES4SAP image information
# If custom uris are enabled public information will be omitted
# Custom sles4sap image
#sles4sap_uri = "/path/to/your/image"
sles4sap_uri = "https://openqa.blob.core.windows.net/sle-images/SLES15-SP3-SAP-BYOS.x86_64-0.9.10-Azure-Build2.42.vhd"

# Public OS images
# BYOS example
# hana_os_image = "SUSE:sles-sap-15-sp2-byos:gen2:latest"

# The next variables define how the HANA installation software is obtained.
# The installation software must be located in an Azure storage account

# Azure storage account name
storage_account_name = "sapinstmasters"
# Azure storage account secret key (key1 or key2)
storage_account_key = "**censored**"

# 'hana_inst_master' is an Azure Storage account share where HANA installation files (extracted or not) are stored
# `hana_inst_master` must be used always! It is used as the reference path to the other variables

# Local folder where HANA installation master will be mounted
hana_inst_folder = "/root/sap_inst/"

# To configure the usage there are multiple options:
# 1. Use an already extracted HANA Platform folder structure.
# The last numbered folder is the HANA Platform folder with the extracted files with
# something like `HDB:HANA:2.0:LINUX_X86_64:SAP HANA PLATFORM EDITION 2.0::XXXXXX` in the LABEL.ASC file
hana_inst_master = "//sapinstmasters.file.core.windows.net/sapinst/51053381"

# 2. Combine the `hana_inst_master` with `hana_platform_folder` variable.
#hana_inst_master = "//YOUR_STORAGE_ACCOUNT_NAME.file.core.windows.net/sapdata/sap_inst_media"
# Specify the path to already extracted HANA platform installation media, relative to hana_inst_master mounting point.
# This will have preference over hana archive installation media
#hana_platform_folder = "51053381"

# 3. Specify the path to the HANA installation archive file in either of SAR, RAR, ZIP, EXE formats, relative to the 'hana_inst_master' mounting point
# For multipart RAR archives, provide the first part EXE file name.
#hana_archive_file = "51053381_part1.exe"

# 4. If using HANA SAR archive, provide the compatible version of sapcar executable to extract the SAR archive
# HANA installation archives will be extracted to the path specified at hana_extract_dir (optional, by default /sapmedia/HANA)
#hana_archive_file = "IMDB_SERVER.SAR"
#hana_sapcar_exe = "SAPCAR"

# For option 3 and 4, HANA installation archives are extracted to the path specified
# at hana_extract_dir (optional, by default /sapmedia_extract/HANA). This folder cannot be the same as `hana_inst_folder`!
#hana_extract_dir = "/sapmedia_extract/HANA"

# The following SAP HANA Client variables are needed only when you are using a HANA database SAR archive for HANA installation.
# HANA Client is used by monitoring & cost-optimized scenario and it is already included in HANA platform media unless a HANA database SAR archive is used
# You can provide HANA Client in one of the two options below:
# 1. Path to already extracted hana client folder, relative to hana_inst_master mounting point
#hana_client_folder = "SAP_HANA_CLIENT"
# 2. Or specify the path to the hana client SAR archive file, relative to the 'hana_inst_master'. To extract the SAR archive, you need to also provide compatible version of sapcar executable in variable hana_sapcar_exe
# It will be extracted to hana_client_extract_dir path (optional, by default /sapmedia_extract/HANA_CLIENT)
#hana_client_archive_file = "IMDB_CLIENT20_003_144-80002090.SAR"
#hana_client_extract_dir = "/sapmedia_extract/HANA_CLIENT"

# Enable system replication and HA cluster
hana_ha_enabled = true

# Each host IP address (sequential order). If it's not set the addresses will be auto generated from the provided vnet address range
#hana_ips = ["10.74.1.11", "10.74.1.12"]

# IP address used to configure the hana cluster floating IP. It must belong to the same subnet as the hana machines
#hana_cluster_vip = "10.74.1.13"

# Enable Active/Active HANA setup (read-only access in the secondary instance)
hana_active_active = true

# HANA cluster secondary vip. This IP address is attached to the read-only secondary instance. Only needed if hana_active_active is set to true
#hana_cluster_vip_secondary = "10.74.1.14"

# HANA instance configuration
# Find some references about the variables in:
# https://help.sap.com
# HANA instance system identifier. It is a 3-character string
hana_sid = "PRD"
# HANA instance number. It is a 2-digit string
#hana_instance_number = "00"
# HANA instance master password. It must follow the SAP Password policies
hana_master_password = "**censored**"
# HANA primary site name. Only used if HANA's system replication feature is enabled (hana_ha_enabled to true)
#hana_primary_site = "Site1"
# HANA secondary site name. Only used if HANA's system replication feature is enabled (hana_ha_enabled to true)
#hana_secondary_site = "Site2"

# Cost optimized scenario
scenario_type = "cost-optimized"

#######################
# SBD related variables
#######################

# In order to enable SBD, an iSCSI server is needed, as it is currently the only option
# All the clusters will use the same mechanism

# Custom iscsi server image
#iscsi_srv_uri = "/path/to/your/iscsi/image"
iscsi_srv_uri = "https://openqa.blob.core.windows.net/sle-images/SLES15-SP3-SAP-BYOS.x86_64-0.9.10-Azure-Build2.42.vhd"

# Public image usage for iSCSI. BYOS example
#iscsi_os_image = "SUSE:sles-sap-15-sp2-byos:gen2:latest"

# IP address of the iSCSI server. If it's not set the address will be auto generated from the provided vnet address range
#iscsi_srv_ip = "10.74.1.14"
# Number of LUNs (logical units) to serve with the iscsi server. Each LUN can be used as a unique sbd disk
#iscsi_lun_count = 3
# Disk size in GB used to create the LUNs and partitions to be served by the ISCSI service
#iscsi_disk_size = 10

##############################
# Monitoring related variables
##############################

# Custom monitoring server image
#monitoring_uri = "/path/to/your/monitoring/image"

# Public image usage for the monitoring server. BYOS example
#monitoring_os_image = "SUSE:sles-sap-15-sp2-byos:gen2:latest"

# Enable the host to be monitored by exporters
monitoring_enabled = false

# IP address of the machine where Prometheus and Grafana are running. If it's not set the address will be auto generated from the provided vnet address range
#monitoring_srv_ip = "10.74.1.13"

########################
# DRBD related variables
########################

# Custom drbd nodes image
#drbd_image_uri = "/path/to/your/monitoring/image"

# Public image usage for the DRBD machines. BYOS example
drbd_os_image = "SUSE:sles-sap-15-sp2-byos:gen2:latest"

# Enable drbd cluster
drbd_enabled = false

# Each drbd cluster host IP address (sequential order). If it's not set the addresses will be auto generated from the provided vnet address range
#drbd_ips = ["10.74.1.21", "10.74.1.22"]
#drbd_cluster_vip = "10.74.1.23"

# NFS share mounting point and export. Warning: Since cloud images are using cloud-init, /mnt folder cannot be used as standard mounting point folder
# It will create the NFS export in /mnt_permanent/sapdata/{netweaver_sid} to be connected as {drbd_cluster_vip}:/{netweaver_sid} (e.g.: 192.168.1.20:/HA1)
#drbd_nfs_mounting_point = "/mnt_permanent/sapdata"

#############################
# Netweaver related variables
#############################

netweaver_enabled = false

# Netweaver APP server count (PAS and AAS)
# Set to 0 to install the PAS instance in the same instance as the ASCS. This means only 1 machine is installed in the deployment (2 if HA capabilities are enabled)
# Set to 1 to only enable 1 PAS instance in an additional machine
# Set to 2 or higher to deploy additional AAS instances in new machines
netweaver_app_server_count = 1

# Custom drbd nodes image
netweaver_image_uri = "https://openqa.blob.core.windows.net/sle-images/SLES15-SP3-SAP-BYOS.x86_64-0.9.10-Azure-Build2.42.vhd"

# Public image usage for the Netweaver machines. BYOS example
#netweaver_os_image = "SUSE:sles-sap-15-sp2-byos:gen2:latest"

# If the addresses are not set they will be auto generated from the provided vnet address range
#netweaver_ips = ["10.74.1.30", "10.74.1.31", "10.74.1.32", "10.74.1.33"]
#netweaver_virtual_ips = ["10.74.1.35", "10.74.1.36", "10.74.1.37", "10.74.1.38"]

# Netweaver installation configuration
# Netweaver system identifier. It is a 3-character string
netweaver_sid = "PRD"
# Netweaver ASCS instance number. It is a 2-digit string
netweaver_ascs_instance_number = "00"
# Netweaver ERS instance number. It is a 2-digit string
netweaver_ers_instance_number = "10"
# Netweaver PAS instance number. If additional AAS machines are deployed, they get the next numbers starting from the PAS instance number. It is a 2-digit string
netweaver_pas_instance_number = "01"
# Netweaver master password. It must follow the SAP Password policies such as having 8 characters at least combining upper and lower case characters and numbers. It cannot start with special characters.
netweaver_master_password = "L1nux_test1nk"

# Enabling this option will create a ASCS/ERS HA available cluster
netweaver_ha_enabled = true

# VM sizes
#netweaver_xscs_vm_size = Standard_D2s_v3
#netweaver_app_vm_size = Standard_D2s_v3

# Set the Netweaver product id. The 'HA' suffix means that the installation uses an ASCS/ERS cluster
# Below are the supported SAP Netweaver product ids if using SWPM version 1.0:
# - NW750.HDB.ABAP
# - NW750.HDB.ABAPHA
# - S4HANA1709.CORE.HDB.ABAP
# - S4HANA1709.CORE.HDB.ABAPHA
# Below are the supported SAP Netweaver product ids if using SWPM version 2.0:
# - S4HANA1809.CORE.HDB.ABAP
# - S4HANA1809.CORE.HDB.ABAPHA
# - S4HANA1909.CORE.HDB.ABAP
# - S4HANA1909.CORE.HDB.ABAPHA

# Example:
netweaver_product_id = "NW750.HDB.ABAPHA"

# NFS share to store the Netweaver shared files. Only used if drbd_enabled is not set. For single machine deployments (ASCS and PAS in the same machine) set an empty string
netweaver_nfs_share = ""

# Path where netweaver sapmnt data is stored.
#netweaver_sapmnt_path = "/sapmnt"

# Preparing the Netweaver download basket. Check `doc/sap_software.md` for more information

# Azure storage account where all the Netweaver software is available. The next paths are relative to this folder.
netweaver_storage_account_name = "shapnetweaver"
netweaver_storage_account_key = "**censored**"
netweaver_storage_account = "//shapnetweaver.file.core.windows.net/netweaver"

# Netweaver installation required folders
# SAP SWPM installation folder, relative to the netweaver_storage_account mounting point
#netweaver_swpm_folder     =  "your_swpm"
# Or specify the path to the sapcar executable & SWPM installer sar archive, relative to the netweaver_storage_account mounting point
# The sar archive will be extracted to path specified at netweaver_extract_dir under SWPM directory (optional, by default /sapmedia_extract/NW/SWPM)
#netweaver_sapcar_exe = "your_sapcar_exe_file_path"
#netweaver_swpm_sar = "your_swpm_sar_file_path"
# Folder where needed SAR executables (sapexe, sapdbexe) are stored, relative to the netweaver_storage_account mounting point
#netweaver_sapexe_folder   =  "download_basket"
# Additional media archives or folders (added in start_dir.cd), relative to the netweaver_storage_account mounting point
#netweaver_additional_dvds = ["dvd1", "dvd2"]

Logs: uploaded at the top of this issue.

Additional logs might be required to deepen the analysis of the HANA or NetWeaver installation; they will be requested specifically if needed.

arbulu89 commented 3 years ago

Hey @lpalovsky, thank you for the report. I have been able to reproduce this even on SLE 15 SP2. The truth is that we had this issue (or at least something similar) in the past, and the only thing we do is replicate what the best practices guide said at the time. Now I have realized that the latest cost-optimized guide has some different steps, but all of them lead to the same error. https://documentation.suse.com/sbp/all/single-html/SLES4SAP-hana-sr-guide-CostOpt-12/

I don't know if I will be able to fix this. @fmherschel and @pirat013, could you help on this?

Edit: after a quick test, I think that our srTakeover hook python script shouldn't import the python dbapi library the way it currently does. It should do "from hdbcli import dbapi" instead. The hdbcli package is already installed in the SAP python environment, and importing it this way avoids the "Attempted relative import in non-package" error. In fact, this way we shouldn't even need the links to the python files in the folder either. I didn't test all of this deeply, anyway.
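
A hedged sketch of what the suggested hook import could look like; the plain-import fallback branch is an assumption for illustration, not the formula's actual code:

```python
def load_dbapi():
    """Return the HANA dbapi module, preferring the installed hdbcli package.

    "from hdbcli import dbapi" is the import SAP documents. The plain
    "import dbapi" fallback mirrors the old linked-file approach and is
    only a sketch of a possible transition path.
    """
    try:
        from hdbcli import dbapi  # installed with the SAP python environment
        return dbapi
    except ImportError:
        try:
            import dbapi  # legacy: dbapi.py linked next to the hook script
            return dbapi
        except ImportError:
            return None  # neither import path available on this host


dbapi = load_dbapi()
```

Because both branches are absolute imports, neither can trigger the "Attempted relative import in non-package" error when the hook is loaded as a standalone file.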

yeoldegrove commented 3 years ago

I can confirm the behavior and the error messages about the import:

prdadm@vmhana02:/usr/sap/PRD/HDB00/vmhana02/trace> tail -f nameserver_alert_vmhana02.trc
[26968]{-1}[-1/-1] 2021-06-08 12:33:03.350017 e ha_dr_provider   PythonProxyImpl.cpp(00091) : import of srTakeover failed: srTakeover.py(43): Attempted relative import in non-package
[26948]{-1}[-1/-1] 2021-06-08 12:33:03.350209 e ha_dr_provider   HADRProviderManager.cpp(00081) : could not load HA/DR Provider 'srTakeover' from /hana/shared/srHook
[26948]{-1}[-1/-1] 2021-06-08 12:33:03.350666 f NameServer       TREXNameServer.cpp(03884) : exception  1: no.7010003  (TREXNameServer/TREXNameServer.cpp:1569)
    abort
exception throw location:
 1: 0x00007f478d067c4c in NameServer::AbortException::AbortException(char const*, int, ltt_adp::basic_string<char, ltt::char_traits<char>, ltt::integral_constant<bool, true> > const&)+0x28 at TREXNameServer.cpp:323 (libhdbns.so)
 2: 0x00007f478d0c8385 in NameServer::TREXNameServer::startup()+0x1881 at TREXNameServer.cpp:1569 (libhdbns.so)
 3: 0x0000556f6488a1f5 in TRexAPI::TREXIndexServer::startup()+0x5b1 at TREXIndexServer.cpp:3326 (hdbnameserver)
 4: 0x0000556f648548bd in nlsui_main+0x17a9 at TrexService.cpp:578 (hdbnameserver)
 5: 0x00007f476ac51786 in System::mainWrapper(int, char**, char**)+0x72 at IsInMain.cpp:332 (libhdbbasis.so)
 6: 0x00007f476908434a in __libc_start_main+0xe6 (libc.so.6)
[26948]{-1}[-1/-1] 2021-06-08 12:33:03.356431 f NameServer       TREXNameServer.cpp(03905) : Could not load HA/DR Provider -> stopping instance ...
[26948]{-1}[-1/-1] 2021-06-08 12:33:03.356700 f NameServer       TREXNameServer.cpp(03918) :  stopping topology thread
[26948]{-1}[-1/-1] 2021-06-08 12:33:03.356730 f NameServer       TREXNameServer.cpp(03920) :  got shutdown scope
[26948]{-1}[-1/-1] 2021-06-08 12:33:03.356734 f NameServer       TREXNameServer.cpp(03924) :  stopped topology thread
[26948]{-1}[-1/-1] 2021-06-08 12:33:03.356740 e Basis            TREXNameServer.cpp(03934) : Process exited due to an error via explicit exit call with exit code 1 , no crash dump will be written

According to the SAP documentation, one should import dbapi as @arbulu89 mentioned:

https://help.sap.com/viewer/0eec0d68141541d1b07893a39944924e/2.0.03/en-US/d12c86af7cb442d1b9f8520e2aba7758.html

from hdbcli import dbapi

This PR in saphanabootstrap-formula includes this fix and several others: https://github.com/SUSE/saphanabootstrap-formula/pull/129

There is also a test rpm available here:

https://build.opensuse.org/package/show/home:waldt:branches:network:ha-clustering:sap-deployments:devel/saphanabootstrap-formula

Which can be used like this, e.g.:

ha_sap_deployment_repo = "https://download.opensuse.org/repositories/home:/waldt:/branches:/network:/ha-clustering:/sap-deployments:/devel/SLE_15_SP2/"

or:

ha_sap_deployment_repo = "https://download.opensuse.org/repositories/home:/waldt:/branches:/network:/ha-clustering:/sap-deployments:/devel/SLE_15_SP3/"

arbulu89 commented 3 years ago

@yeoldegrove Nice! This should come together with a documentation change (as the old implementation was just following the best practices guide).

@pirat013 or @fmherschel, could you have a look at this?

fmherschel commented 3 years ago

@yeoldegrove @arbulu89 Give me some time to review that on my installations. It would be an excellent improvement if we no longer need to install the dbapi manually, as we had to in the past.

fmherschel commented 3 years ago

@yeoldegrove @arbulu89 Yes, we can use the new method to load the dbapi, thanks for this suggestion. We should think about placing this hook in the SAPHanaSR package now, so customers not using the automation could also consume it. However, the credentials (username and password) currently have to exist in clear text (what a bad situation). I would prefer something like a secure store user key instead, but I did not find a python binding to use the SAP secure user key storage.

arbulu89 commented 3 years ago

> Yes, we can use the new method to load the dbapi, thanks for this suggestion. We should think about placing this hook in the SAPHanaSR package now, so customers not using the automation could also consume it. However, the credentials (username and password) currently have to exist in clear text (what a bad situation). I would prefer something like a secure store user key instead, but I did not find a python binding to use the SAP secure user key storage.

@fmherschel The dbapi has the option to use a userkey (hdbuserstore). The key must be provided, obviously.

We can open 2 additional tickets for the future:

  1. Add the hook in the SAPHanaSR package
  2. Explore how to use a key from the user store
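
For ticket 2, hdbcli's dbapi.connect supports connecting through an hdbuserstore key, which would keep credentials out of the hook script. A sketch under that assumption; the key name SRTAKEOVER and the port in the comment are illustrative only:

```python
def takeover_connect(key_name="SRTAKEOVER"):
    """Open a HANA connection via an hdbuserstore key instead of a
    clear-text user/password pair stored in the hook itself.

    The key would be created once per node, e.g.:
      hdbuserstore SET SRTAKEOVER localhost:30015 <user> <password>
    """
    from hdbcli import dbapi  # available inside the SAP python environment
    return dbapi.connect(key=key_name)
```

The hook would then call takeover_connect() without ever seeing the password; the secure store entry lives in the <sid>adm user's home directory.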
fmherschel commented 3 years ago

@arbulu89 Ok, I did some research in the past and asked SAP, and they said that user keys were not implemented, so I missed that update. I would like the exploration of using the key to happen in Markus Guertler's HA team. What do you think?

arbulu89 commented 3 years ago

> Ok, I did some research in the past and asked SAP, and they said that user keys were not implemented, so I missed that update. I would like the exploration of using the key to happen in Markus Guertler's HA team. What do you think?

Of course. Let us know if you need any assistance.

arbulu89 commented 3 years ago

Fixed in: https://github.com/SUSE/saphanabootstrap-formula/pull/129