Expected behaviour vs observed behaviour
The deployment was expected to provision both HANA nodes with Salt. Instead, the remote-exec provisioner fails on both nodes right after zypper reports that no repositories are defined:
module.hana_node.module.hana_provision.null_resource.provision[1]: Provisioning with 'remote-exec'...
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): Connecting to remote host via SSH...
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): Host: 54.244.22.62
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): User: ec2-user
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): Password: false
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): Private key: true
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): Certificate: false
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): SSH Agent: false
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): Checking Host Key: false
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): Target Platform: unix
module.hana_node.module.hana_provision.null_resource.provision[0] (remote-exec): Connected!
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): Connected!
module.hana_node.module.hana_provision.null_resource.provision[0] (remote-exec): inactive
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): inactive
module.hana_node.module.hana_provision.null_resource.provision[0] (remote-exec): Warning: No repositories defined.
module.hana_node.module.hana_provision.null_resource.provision[0] (remote-exec): Use the 'zypper addrepo' command to add one or more repositories.
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): Warning: No repositories defined.
module.hana_node.module.hana_provision.null_resource.provision[1] (remote-exec): Use the 'zypper addrepo' command to add one or more repositories.
│ Error: remote-exec provisioner error
│
│ with module.hana_node.module.hana_provision.null_resource.provision[1],
│ on ../generic_modules/salt_provisioner/main.tf line 65, in resource "null_resource" "provision":
│ 65: provisioner "remote-exec" {
│
│ error executing "/tmp/terraform_596872772.sh": Process exited with status 1
╵
╷
│ Error: remote-exec provisioner error
│
│ with module.hana_node.module.hana_provision.null_resource.provision[0],
│ on ../generic_modules/salt_provisioner/main.tf line 65, in resource "null_resource" "provision":
│ 65: provisioner "remote-exec" {
│
│ error executing "/tmp/terraform_1552163070.sh": Process exited with status 1
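The failing script is the Salt provisioning step, and the only visible symptom is the zypper "No repositories defined" warning. Below is a minimal diagnostic sketch against one of the failing nodes, with the host IP and key taken from the provisioner output above; any log file name other than salt-result.log is an assumption:

```bash
# Connect to the failing HANA node with the provisioning key
ssh -i ~/.ssh/id_rsa ec2-user@54.244.22.62

# List the configured zypper repositories with their URIs; the
# ha_sap_deployment_repo should normally show up here after registration
sudo zypper lr -u

# Inspect whatever Salt provisioning logs exist (only salt-result.log
# was present in this deployment, see the Logs section below)
sudo ls -l /var/log/salt-*.log
sudo cat /var/log/salt-result.log
```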
How to reproduce
1. Move to any of the cloud provider folders
2. Create the terraform.tfvars file based on terraform.tfvars.example
3. Run the following Terraform commands:
terraform init
terraform plan
terraform apply -auto-approve
Setting the provisioning_log_level = "info" option in the terraform.tfvars file provides more information during the Terraform run, so it is suggested to deploy with this option and review the output before opening any ticket.
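For reference, a minimal sketch of the reproduction run; the clone path and the aws folder name are assumptions:

```bash
# Assuming the project is cloned and the AWS provider folder is used
cd ha-sap-terraform-deployments/aws
cp terraform.tfvars.example terraform.tfvars  # then edit as shown below

# provisioning_log_level = "info" in terraform.tfvars adds detail here
terraform init
terraform plan
terraform apply -auto-approve
```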
Used terraform.tfvars
The terraform.tfvars file content used for the deployment (secrets replaced with dummy values):
#################################
# ha-sap-terraform-deployments project configuration file
# Find all the available variables and definitions in the variables.tf file
#################################
# Region where to deploy the configuration
aws_region = "us-west-2"
# Use an already existing vpc. Make sure the vpc has the internet gateway already attached
#vpc_id = "vpc-xxxxxxxxxxxxxxxxx"
# Use an already existing security group
#security_group_id = "sg-xxxxxxxxxxxxxxxxx"
# vpc address range in CIDR notation
# Only used if the vpc is created by terraform or the user doesn't have read permissions in this
# resource. To use the current vpc address range set the value to an empty string
# To define custom ranges
#vpc_address_range = "10.0.0.0/16"
# Or to use already existing vpc address ranges
#vpc_address_range = ""
#################################
# General configuration variables
#################################
# Deployment name. This variable is used to complement the name of multiple infrastructure resources by adding the string as a suffix
# If it is not used, the terraform workspace string is used
# The name must be unique among different deployments
# deployment_name = "mydeployment"
# Add the "deployment_name" as a prefix to the hostname.
#deployment_name_in_hostname = true
# aws-cli credentials data
# access key parameters have preference over the credentials file (they are mutually exclusive)
#aws_access_key_id = my-access-key-id
#aws_secret_access_key = my-secret-access-key
# aws-cli credentials file. Located on ~/.aws/credentials on Linux, MacOS or Unix or at C:\Users\USERNAME\.aws\credentials on Windows
aws_credentials = "~/.aws/credentials"
# If BYOS images are used in the deployment, SCC registration code is required. Set `reg_code` and `reg_email` variables below
# By default, all the images are PAYG, so these next parameters are not needed
#reg_code = "<<REG_CODE>>"
#reg_email = "<<your email>>"
# To add additional modules from SCC. None of them is needed by default
#reg_additional_modules = {
# "sle-module-adv-systems-management/12/x86_64" = ""
# "sle-module-containers/12/x86_64" = ""
# "sle-ha-geo/12.4/x86_64" = "<<REG_CODE>>"
#}
# Default os_image and os_owner. These values are not used if the specific values are set (e.g.: hana_os_image)
# BYOS example with sles4sap 15 sp4 (this value is a pattern, it will select the latest version that matches this name)
#os_image = "suse-sles-sap-15-sp4-byos"
#os_owner = "amazon"
# The project requires a pair of SSH keys (public and private) to provision the machines
# The private key is only used to create the SSH connection; it is not uploaded to the machines
# Besides the provisioning, the SSH connection for these keys will be authorized in the created machines
# These keys are provided using the next two variables in two different ways
# Path to already existing keys
public_key = "~/.ssh/id_rsa.pub"
private_key = "~/.ssh/id_rsa"
# Or provide the content of SSH keys
#public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCt06V...."
#private_key = <<EOF
#-----BEGIN OPENSSH PRIVATE KEY-----
#b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABFwAAAAdzc2gtcn
#...
#P9eYliTYFxhv/0E7AAAAEnhhcmJ1bHVAbGludXgtYWZqOQ==
#-----END OPENSSH PRIVATE KEY-----
#EOF
# Authorize additional keys optionally (in this case, the private key is not required)
# Path to local files or keys content
#authorized_keys = ["/home/myuser/.ssh/id_rsa_second_key.pub", "/home/myuser/.ssh/id_rsa_third_key.pub", "ssh-rsa AAAAB3NzaC1yc2EAAAA...."]
# An additional pair of SSH keys is needed to provide the HA cluster the capability to SSH among the machines
# These keys are uploaded to the machines!
# If `pre_deployment = true` is used, these keys are autogenerated
cluster_ssh_pub = "salt://sshkeys/cluster.id_rsa.pub"
cluster_ssh_key = "salt://sshkeys/cluster.id_rsa"
##########################
# Other deployment options
##########################
# Repository url used to install HA/SAP deployment packages
# It contains the salt formulas rpm packages and other dependencies.
#
## Specific Release - for latest release look at https://github.com/SUSE/ha-sap-terraform-deployments/releases
# To auto detect the SLE version
ha_sap_deployment_repo = "https://download.opensuse.org/repositories/network:ha-clustering:sap-deployments:v9/"
# Otherwise use a specific SLE version:
#ha_sap_deployment_repo = "https://download.opensuse.org/repositories/network:ha-clustering:sap-deployments:v9/SLE_15_SP4/"
#
## Development Release (use if on `develop` branch)
# To auto detect the SLE version
#ha_sap_deployment_repo = "https://download.opensuse.org/repositories/network:ha-clustering:sap-deployments:devel/"
# Otherwise use a specific SLE version:
#ha_sap_deployment_repo = "https://download.opensuse.org/repositories/network:ha-clustering:sap-deployments:devel/SLE_15_SP4/"
# Provisioning log level (error by default)
provisioning_log_level = "info"
# Print colored output of the provisioning execution (true by default)
#provisioning_output_colored = false
# Enable pre deployment steps (disabled by default)
pre_deployment = true
# Enable post deployment steps (disabled by default)
# This e.g. deletes /etc/salt/grains after a successful deployment
#cleanup_secrets = true
# To disable the provisioning process
#provisioner = ""
# Run provisioner execution in background
#background = true
# Testing and QA purpose
# Define if the deployment is used for testing purpose
# Disable all extra packages that do not come from the image
# Except salt-minion (for the moment) and salt formulas
# true or false (default)
#offline_mode = false
# Execute HANA Hardware Configuration Check Tool to bench filesystems
# true or false (default)
#hwcct = false
##########################
# Bastion (jumpbox) machine variables
##########################
# Enable bastion usage. If this option is enabled, it will create a unique public ip address that is attached to the bastion machine.
# The rest of the machines won't have a public ip address and the SSH connection must be done through the bastion
bastion_enabled = false
# Bastion SSH keys. If they are not set the public_key and private_key are used
#bastion_public_key = "/home/myuser/.ssh/id_rsa_bastion.pub"
#bastion_private_key = "/home/myuser/.ssh/id_rsa_bastion"
# bastion server image. By default, PAYG image is used. The usage is the same as the HANA images
#bastion_os_image = "suse-sles-sap-15-sp1-byos"
#bastion_os_owner = "amazon"
#########################
# HANA machines variables
#########################
# Hostname, without the domain part
#hana_name = "vmhana"
# Instance type to use for the hana cluster nodes
# SAP certified instances types can be found at https://aws.amazon.com/sap/instance-types/
# and example sizing at https://aws.amazon.com/sap/solutions/s4hana/ .
#hana_instancetype = "r6i.xlarge"
# Number of nodes in the cluster
# 2 nodes will always be scale-up
# 4+ nodes are needed for scale-out (also set hana_scale_out_enabled=true)
#hana_count = "2"
# enable to use HANA scale-out
#hana_scale_out_enabled = true
# HANA scale-out role assignments (optional, this can be defined automatically based on "hana_scale_out_standby_count")
# see https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.03/en-US/0d9fe701e2214e98ad4f8721f6558c34.html for reference
#hana_scale_out_addhosts = {
# site1 = "vmhana03:role=standby:group=default:workergroup=default,vmhana05:role=worker:group=default:workergroup=default"
# site2 = "vmhana04:role=standby:group=default:workergroup=default,vmhana06:role=worker:group=default:workergroup=default"
#}
# HANA scale-out roles
# These role assignments are made per HANA site
# Number of standby nodes per site (not recommended on AWS)
#hana_scale_out_standby_count = 0 # default:0 - Deploy X standby nodes per site. The rest of the nodes will be worker nodes.
#hana_majority_maker_instancetype = "t3.micro"
#hana_majority_maker_ip = "10.0.3.9"
#########################
# shared storage variables
# Needed if HANA is deployed in scale-out scenario
# see https://documentation.suse.com/sbp/all/html/SLES-SAP-hana-scaleOut-PerfOpt-12-AWS/index.html#id-create-efs-file-systems
# for reference and minimum requirements
#########################
#hana_scale_out_shared_storage_type = "efs" # only efs supported at the moment (default: "")
# local disk configuration - scale-up example
#hana_data_disks_configuration = {
# disks_type = "gp2,gp2,gp2,gp2,gp2,gp2,gp2"
# disks_size = "128,128,128,128,64,64,128"
# # The next variables are used during the provisioning
# luns = "0,1#2,3#4#5#6"
# names = "data#log#shared#usrsap#backup"
# lv_sizes = "50#50#100#100#100"
# paths = "/hana/data#/hana/log#/hana/shared#/usr/sap#/hana/backup"
#}
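# (The groups in `luns` above are separated by '#' and pair positionally with
# `names`, `lv_sizes` and `paths`: disks 0,1 back /hana/data, disks 2,3 back
# /hana/log, disk 4 backs /hana/shared, and so on)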
# local disk configuration - scale-out without standby nodes example
# on scale-out without standby nodes, we need shared storage for shared only and everything else on local disks
#hana_data_disks_configuration = {
# disks_type = "gp2,gp2,gp2,gp2,gp2,gp2"
# disks_size = "256,256,256,256,64,512"
# # The next variables are used during the provisioning
# luns = "0,1#2,3#4#5"
# names = "data#log#usrsap#backup"
# lv_sizes = "100#100#100#100"
# paths = "/hana/data#/hana/log#/usr/sap#/hana/backup"
#}
# HANA machines image. By default, PAYG images are used
# BYOS example with sles4sap 15 sp1 (this value is a pattern, it will select the latest version that matches this name)
#hana_os_image = "suse-sles-sap-15-sp1-byos"
# Or use a specific ami image
#hana_os_image = "ami-xxxxxxxxxxxx"
# Custom owner for private AMI
#hana_os_owner = "amazon"
# Enable system replication and HA cluster
#hana_ha_enabled = true
# Disable minimal memory checks for HANA. Useful to deploy development clusters.
# Low memory usage can cause a failed deployment. Be aware that this option does
# not work with any memory size and will most likely fail with less than 16 GiB
#hana_ignore_min_mem_check = false
# The next variables define how the HANA installation software is obtained.
# The installation software must be located in an AWS S3 bucket
# 'hana_inst_master' is an S3 bucket where HANA installation files (extracted or not) are stored
# `hana_inst_master` must always be set! It is used as the reference path for the other variables
# Local folder where HANA installation master will be mounted
#hana_inst_folder = "/sapmedia/HANA"
# To configure the usage there are multiple options:
# 1. Use an already extracted HANA Platform folder structure.
# The last numbered folder is the HANA Platform folder with the extracted files with
# something like `HDB:HANA:2.0:LINUX_X86_64:SAP HANA PLATFORM EDITION 2.0::XXXXXX` in the LABEL.ASC file
hana_inst_master = "s3://xxxxxxxxx/hana/51056441"
# 2. Combine the `hana_inst_master` with `hana_platform_folder` variable.
#hana_inst_master = "s3://sapdata/sap_inst_media/"
# Specify the path to already extracted HANA platform installation media, relative to hana_inst_master mounting point.
# This will have preference over hana archive installation media
#hana_platform_folder = "51053381"
# 3. Specify the path to the HANA installation archive file in either of SAR, RAR, ZIP, EXE formats, relative to the 'hana_inst_master' mounting point
# For multipart RAR archives, provide the first part EXE file name.
hana_archive_file = "51056441.ZIP"
# 4. If using a HANA SAR archive, provide the compatible version of the sapcar executable to extract the SAR archive
# HANA installation archives are extracted to the path specified at hana_extract_dir (optional, by default /sapmedia_extract/HANA)
#hana_archive_file = "IMDB_SERVER.SAR"
#hana_sapcar_exe = "SAPCAR"
# For option 3 and 4, HANA installation archives are extracted to the path specified
# at hana_extract_dir (optional, by default /sapmedia_extract/HANA). This folder cannot be the same as `hana_inst_folder`!
#hana_extract_dir = "/sapmedia_extract/HANA"
# The following SAP HANA Client variables are needed only when you are using a HANA database SAR archive for HANA installation or if you are installing >= HANA 2.0 SPS 06 (platform media format changed).
# HANA Client is used by monitoring & cost-optimized scenario and it is already included in HANA platform media unless a HANA database SAR archive is used.
# You can provide HANA Client in one of the two options below:
# 1. Path to already extracted hana client folder, relative to hana_inst_master mounting point
#hana_client_folder = "DATA_UNITS/HDB_CLIENT_LINUX_X86_64/SAP_HANA_CLIENT" # e.g. inside the HANA platform media
# 2. Or specify the path to the hana client SAR archive file, relative to the 'hana_inst_master'. To extract the SAR archive, you need to also provide compatible version of sapcar executable in variable hana_sapcar_exe
# It will be extracted to hana_client_extract_dir path (optional, by default /sapmedia_extract/HANA_CLIENT)
#hana_client_archive_file = "IMDB_CLIENT20_003_144-80002090.SAR"
#hana_client_extract_dir = "/sapmedia_extract/HANA_CLIENT"
# IP address used to configure the hana cluster floating IP. It must belong to the same subnet as the machines!
#hana_cluster_vip = "192.168.1.10"
# Select HANA cluster fencing mechanism. 'native' by default
# Find more information in `doc/fencing.md` documentation page
#hana_cluster_fencing_mechanism = "sbd"
# Enable Active/Active HANA setup (read-only access in the secondary instance)
#hana_active_active = true
# HANA cluster secondary vip. This IP address is attached to the read-only secondary instance. Only needed if hana_active_active is set to true
#hana_cluster_vip_secondary = "192.168.1.11"
# Each host IP address (sequential order). The first ip must be in 10.0.0.0/24 subnet and the second in 10.0.1.0/24 subnet
#hana_ips = ["10.0.0.5", "10.0.1.6"]
# HANA instance configuration
# Find some references about the variables in:
# https://help.sap.com
# HANA instance system identifier. The system identifier must be a 3-character string of uppercase letters/digits, always starting with a letter (there are some restricted options).
#hana_sid = "PRD"
# HANA instance number. It is a 2-digit string
#hana_instance_number = "00"
# HANA instance master password (length 10-14, 1 digit, 1 lowercase, 1 uppercase). For detailed password rules see: doc/sap_passwords.md
hana_master_password = "<<DUMMY_PASSWORD>>"
# HANA primary site name. Only used if HANA's system replication feature is enabled (hana_ha_enabled to true)
#hana_primary_site = "Site1"
# HANA secondary site name. Only used if HANA's system replication feature is enabled (hana_ha_enabled to true)
#hana_secondary_site = "Site2"
# Cost optimized scenario
#scenario_type = "cost-optimized"
#######################
# SBD related variables
#######################
# In order to enable SBD, an iSCSI server is needed, as it is currently the only option
# All the clusters will use the same mechanism
# In order to enable the iSCSI machine creation, the corresponding *_cluster_fencing_mechanism variable must be set to 'sbd' for any of the clusters
# Hostname, without the domain part
#iscsi_name = "vmiscsi"
# iSCSI server image. By default, PAYG image is used. The usage is the same as the HANA images
#iscsi_os_image = "suse-sles-sap-15-sp4-byos"
#iscsi_os_owner = "amazon"
# iSCSI server address. It should be in the same IP range as hana_ips
#iscsi_srv_ip = "10.0.0.254"
# Number of LUNs (logical units) to serve with the iscsi server. Each LUN can be used as a unique sbd disk
#iscsi_lun_count = 3
# Disk size in GB used to create the LUNs and partitions to be served by the ISCSI service
#iscsi_disk_size = 10
##############################
# Monitoring related variables
##############################
# Enable the host to be monitored by exporters
#monitoring_enabled = true
#
# Hostname, without the domain part
#monitoring_name = "vmmonitoring"
# Monitoring server image. By default, PAYG image is used. The usage is the same as the HANA images
#monitoring_os_image = "suse-sles-sap-15-sp4-byos"
#monitoring_os_owner = "amazon"
# IP address of the machine where Prometheus and Grafana are running. Must be in 10.0.0.0/24 subnet
#monitoring_srv_ip = "10.0.0.253"
########################
# DRBD related variables
########################
# netweaver will use AWS EFS for the NFS share by default, unless drbd is enabled
# Enable drbd cluster
#drbd_enabled = false
# Hostname, without the domain part
#drbd_name = "vmdrbd"
#drbd_instancetype = "t3.medium"
# DRBD machines image. By default, PAYG image is used. The usage is the same as the HANA images
#drbd_os_image = "suse-sles-sap-15-sp4-byos"
#drbd_os_owner = "amazon"
#drbd_data_disk_size = 15
#drbd_data_disk_type = "gp2"
# Each drbd cluster host IP address (sequential order).
#drbd_ips = ["10.0.5.20", "10.0.6.21"]
#drbd_cluster_vip = "192.168.1.20"
# Select DRBD cluster fencing mechanism. 'native' by default
#drbd_cluster_fencing_mechanism = "sbd"
# NFS share mounting point and export. Warning: Since cloud images use cloud-init, the /mnt folder cannot be used as the standard mounting point folder
# If DRBD is used, it will create the NFS export in /mnt_permanent/sapdata/{netweaver_sid} to be connected as {drbd_cluster_vip}:/{netweaver_sid} (e.g.: 192.168.1.20:/HA1)
#drbd_nfs_mounting_point = "/mnt_permanent/sapdata"
#############################
# Netweaver related variables
#############################
#netweaver_enabled = true
# Hostname, without the domain part
#netweaver_name = "vmnetweaver"
# Netweaver APP server count (PAS and AAS)
# Set to 0 to install the PAS instance in the same instance as the ASCS. This means only 1 machine is installed in the deployment (2 if HA capabilities are enabled)
# Set to 1 to only enable 1 PAS instance in an additional machine
# Set to 2 or higher to deploy additional AAS instances in new machines
#netweaver_app_server_count = 2
# Instance type to use for the Netweaver nodes
# SAP certified instances types can be found at https://aws.amazon.com/sap/instance-types/
# and example sizing at https://aws.amazon.com/sap/solutions/s4hana/ .
#netweaver_instancetype = "r5.large"
# Netweaver machines image. By default, PAYG image is used. The usage is the same as the HANA images
#netweaver_os_image = "suse-sles-sap-15-sp4-byos"
#netweaver_os_owner = "amazon"
#netweaver_ips = ["10.0.2.7", "10.0.3.8", "10.0.2.9", "10.0.3.10"]
#netweaver_virtual_ips = ["192.168.1.20", "192.168.1.21", "192.168.1.22", "192.168.1.23"]
# Netweaver installation configuration
# Netweaver system identifier. The system identifier must be a 3-character string of uppercase letters/digits, always starting with a letter (there are some restricted options)
#netweaver_sid = "HA1"
# Netweaver ASCS instance number. It is a 2-digit string
#netweaver_ascs_instance_number = "00"
# Netweaver ERS instance number. It is a 2-digit string
#netweaver_ers_instance_number = "10"
# Netweaver PAS instance number. If additional AAS machines are deployed, they get the next number starting from the PAS instance number. It is a 2-digit string
#netweaver_pas_instance_number = "01"
# NetWeaver or S/4HANA master password (length 10-14, ASCII preferred). For detailed password rules see: doc/sap_passwords.md
#netweaver_master_password = "SuSE1234"
# Enabling this option will create an ASCS/ERS HA cluster together with PAS and AAS application servers
# Set to false to only create ASCS and PAS instances
#netweaver_ha_enabled = true
# Select Netweaver cluster fencing mechanism. 'native' by default
#netweaver_cluster_fencing_mechanism = "sbd"
# Set the Netweaver product id. The 'HA' suffix means that the installation uses an ASCS/ERS cluster
# Below are the supported SAP Netweaver product ids if using SWPM version 1.0:
# - NW750.HDB.ABAP
# - NW750.HDB.ABAPHA
# - S4HANA1709.CORE.HDB.ABAP
# - S4HANA1709.CORE.HDB.ABAPHA
# Below are the supported SAP Netweaver product ids if using SWPM version 2.0:
# - S4HANA1809.CORE.HDB.ABAP
# - S4HANA1809.CORE.HDB.ABAPHA
# - S4HANA1909.CORE.HDB.ABAP
# - S4HANA1909.CORE.HDB.ABAPHA
# - S4HANA2020.CORE.HDB.ABAP
# - S4HANA2020.CORE.HDB.ABAPHA
# - S4HANA2021.CORE.HDB.ABAP
# - S4HANA2021.CORE.HDB.ABAPHA
# Example:
#netweaver_product_id = "NW750.HDB.ABAPHA"
#########################
# Netweaver shared storage variables
# Needed if Netweaver is deployed HA
#########################
#netweaver_shared_storage_type = "efs" # drbd,efs supported at the moment (default: "efs")
#AWS efs performance mode used by netweaver nfs share, if efs storage is used
#netweaver_efs_performance_mode = "generalPurpose"
# Path where netweaver sapmnt data is stored.
#netweaver_sapmnt_path = "/sapmnt"
# Preparing the Netweaver download basket. Check `doc/sap_software.md` for more information
# AWS S3 bucket where all the Netweaver software is available. The next paths are relative to this folder.
#netweaver_s3_bucket = "s3://path/to/your/netweaver/installation/s3bucket"
# SAP SWPM installation folder, relative to the netweaver_s3_bucket folder
#netweaver_swpm_folder = "your_swpm"
# Or specify the path to the sapcar executable & SWPM installer sar archive, relative to the netweaver_s3_bucket folder
# The sar archive will be extracted to path specified at netweaver_extract_dir under SWPM directory (optional, by default /sapmedia_extract/NW/SWPM)
#netweaver_sapcar_exe = "your_sapcar_exe_file_path"
#netweaver_swpm_sar = "your_swpm_sar_file_path"
# Folder where needed SAR executables (sapexe, sapdbexe) are stored, relative to the netweaver_s3_bucket folder
#netweaver_sapexe_folder = "download_basket"
# Additional media archives or folders (added in start_dir.cd), relative to the netweaver_s3_bucket folder
#netweaver_additional_dvds = ["dvd1", "dvd2"]
Logs
Upload the deployment logs to make finding the root cause easier. The logs might have sensitive secrets exposed; remove them before uploading anything here, or contact @arbulu89 to send the logs privately.
This is the list of the required logs (each of the deployed machines will have all of them):
Only salt-result.log exists, as shown below:
ec2-user@ip-10-0-1-10:/var/log> cat salt-result.log
inactive
Warning: No repositories defined.
Use the 'zypper addrepo' command to add one or more repositories.
Additional logs might be required to deepen the analysis of the HANA or NetWeaver installation. They will be requested specifically if needed.
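If the missing logs ever become available, a hedged sketch of how they could be collected from each node; the node IP is taken from the provisioner output above, and the salt-*.log glob assumes the project's log naming:

```bash
# Collect whatever Salt logs exist on each node (add the second node's IP)
for host in 54.244.22.62; do
  mkdir -p "logs-${host}"
  scp -i ~/.ssh/id_rsa "ec2-user@${host}:/var/log/salt-*.log" "logs-${host}/"
done
```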
Used cloud platform: AWS
Used SLES4SAP version: SLES15SP4
Used client machine OS: Amazon Linux