IBM / StorageScaleVagrant

Example scripts and configuration files to install and configure IBM Storage Scale in a Vagrant environment
Apache License 2.0

Updated Spectrum Scale version to 5.1.0.0 #12

Closed · neikei closed this pull request 3 years ago

neikei commented 3 years ago

Changes

Installation log
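
Roughly, a run like the one below is started as follows (a sketch only; directory names are inferred from the rsync and VM-name lines in the log, and the installer file name matches the one the provisioning scripts check for under /software):

```shell
# Sketch, assuming a local checkout of this repository
cd SpectrumScaleVagrant

# Copy the self-extracting install package into the shared software/ directory;
# script-01.sh verifies that exactly this file exists on the VM under /software.
cp /path/to/Spectrum_Scale_Data_Management-5.1.0.0-x86_64-Linux-install software/

# Bring up the single-node cluster on the VirtualBox provider
cd virtualbox
vagrant up
```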

$ vagrant up
Bringing machine 'm1' up with 'virtualbox' provider...
==> m1: Importing base box 'SpectrumScale_base'...
==> m1: Matching MAC address for NAT networking...
==> m1: Setting the name of the VM: virtualbox_m1_1605091423985_210
==> m1: Clearing any previously set network interfaces...
==> m1: Preparing network interfaces based on configuration...
    m1: Adapter 1: nat
    m1: Adapter 2: hostonly
    m1: Adapter 3: hostonly
==> m1: Forwarding ports...
    m1: 443 (guest) => 8888 (host) (adapter 1)
    m1: 22 (guest) => 2222 (host) (adapter 1)
==> m1: Running 'pre-boot' VM customizations...
==> m1: Booting VM...
==> m1: Waiting for machine to boot. This may take a few minutes...
    m1: SSH address: 127.0.0.1:2222
    m1: SSH username: vagrant
    m1: SSH auth method: private key
    m1: Warning: Connection reset. Retrying...
    m1: Warning: Connection aborted. Retrying...
    m1: Warning: Remote connection disconnect. Retrying...
    m1: Warning: Connection reset. Retrying...
    m1: Warning: Connection aborted. Retrying...
    m1: Warning: Connection reset. Retrying...
    m1: Warning: Connection aborted. Retrying...
    m1:
    m1: Vagrant insecure key detected. Vagrant will automatically replace
    m1: this with a newly generated keypair for better security.
    m1:
    m1: Inserting generated public key within guest...
    m1: Removing insecure key from the guest if it's present...
    m1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> m1: Machine booted and ready!
==> m1: Checking for guest additions in VM...
    m1: No guest additions were detected on the base box for this VM! Guest
    m1: additions are required for forwarded ports, shared folders, host only
    m1: networking, and more. If SSH fails on this machine, please install
    m1: the guest additions and repackage the box to continue.
    m1:
    m1: This is not an error message; everything may continue to work properly,
    m1: in which case you may ignore this message.
==> m1: Setting hostname...
==> m1: Configuring and enabling network interfaces...
==> m1: Rsyncing folder: /cygdrive/d/SpectrumScaleVagrant/setup/ => /vagrant
==> m1: Rsyncing folder: /cygdrive/d/SpectrumScaleVagrant/software/ => /software
==> m1: Running provisioner: Configure permissions for shell scripts (shell)...
    m1: Running: script: Configure permissions for shell scripts
==> m1: Running provisioner: Configure /etc/hosts for VirtualBox (shell)...
    m1: Running: script: Configure /etc/hosts for VirtualBox
==> m1: Running provisioner: Generate ssh keys for user root (shell)...
    m1: Running: script: Generate ssh keys for user root
==> m1: Running provisioner: Configure ssh host keys (shell)...
    m1: Running: script: Configure ssh host keys
    m1: # m1:22 SSH-2.0-OpenSSH_7.4
    m1: # m1:22 SSH-2.0-OpenSSH_7.4
    m1: # m1:22 SSH-2.0-OpenSSH_7.4
==> m1: Running provisioner: Get fingerprint for management IP address (shell)...
    m1: Running: script: Get fingerprint for management IP address
==> m1: Running provisioner: Add /usr/lpp/mmfs/bin to $PATH (shell)...
    m1: Running: script: Add /usr/lpp/mmfs/bin to $PATH
==> m1: Running provisioner: Add /usr/lpp/mmfs/bin to sudo secure_path (shell)...
    m1: Running: script: Add /usr/lpp/mmfs/bin to sudo secure_path
==> m1: Running provisioner: Install and configure single node Spectrum Scale cluster (shell)...
    m1: Running: script: Install and configure single node Spectrum Scale cluster
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/install/script.sh
    m1: ===> Perform all steps to provision a Spectrum Scale cluster
    m1: ===>
    m1: =========================================================================================
    m1: + set -e
    m1: + '[' 1 -ne 1 ']'
    m1: + case $1 in
    m1: + PROVIDER=VirtualBox
    m1: + /vagrant/install/script-01.sh VirtualBox
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/install/script-01.sh
    m1: ===> Check that Spectrum Scale self-extracting install package exists
    m1: ===>
    m1: =========================================================================================
    m1: + set -e
    m1: + install=/software/Spectrum_Scale_Data_Management-5.1.0.0-x86_64-Linux-install
    m1: ===> Check for Spectrum Scale self-extracting installation package
    m1: + echo '===> Check for Spectrum Scale self-extracting installation package'
    m1: + '[' '!' -f /software/Spectrum_Scale_Data_Management-5.1.0.0-x86_64-Linux-install ']'
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0
    m1: + /vagrant/install/script-02.sh VirtualBox
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/install/script-02.sh
    m1: ===> Extract Spectrum Scale RPMs
    m1: ===>
    m1: =========================================================================================
    m1: + set -e
    m1: + install=/software/Spectrum_Scale_Data_Management-5.1.0.0-x86_64-Linux-install
    m1: ===> Check for Spectrum Scale directory
    m1: + echo '===> Check for Spectrum Scale directory'
    m1: + '[' -d /usr/lpp/mmfs ']'
    m1: + chmod 555 /software/Spectrum_Scale_Data_Management-5.1.0.0-x86_64-Linux-install
    m1: ===> Extract Spectrum Scale RPMs
    m1: + echo '===> Extract Spectrum Scale RPMs'
    m1: + sudo /software/Spectrum_Scale_Data_Management-5.1.0.0-x86_64-Linux-install --silent
    m1:
    m1: Extracting License Acceptance Process Tool to /usr/lpp/mmfs/5.1.0.0 ...
    m1: tail -n +641 /software/Spectrum_Scale_Data_Management-5.1.0.0-x86_64-Linux-install | tar -C /usr/lpp/mmfs/5.1.0.0 -xvz --exclude=installer --exclude=*_rpms --exclude=*_debs --exclude=*rpm  --exclude=*tgz --exclude=*deb --exclude=*tools* 1> /dev/null
    m1:
    m1: Installing JRE ...
    m1:
    m1: If directory /usr/lpp/mmfs/5.1.0.0 has been created or was previously created during another extraction,
    m1: .rpm, .deb, and repository related files in it (if there were) will be removed to avoid conflicts with the ones being extracted.
    m1: tail -n +641 /software/Spectrum_Scale_Data_Management-5.1.0.0-x86_64-Linux-install | tar -C /usr/lpp/mmfs/5.1.0.0 --wildcards -xvz  ibm-java*tgz 1> /dev/null
    m1: tar -C /usr/lpp/mmfs/5.1.0.0/ -xzf /usr/lpp/mmfs/5.1.0.0/ibm-java*tgz
    m1:
    m1: Invoking License Acceptance Process Tool ...
    m1: /usr/lpp/mmfs/5.1.0.0/ibm-java-x86_64-80/jre/bin/java -cp /usr/lpp/mmfs/5.1.0.0/LAP_HOME/LAPApp.jar com.ibm.lex.lapapp.LAP -l /usr/lpp/mmfs/5.1.0.0/LA_HOME -m /usr/lpp/mmfs/5.1.0.0 -s /usr/lpp/mmfs/5.1.0.0 -t 5
    m1:
    m1: License Agreement Terms accepted.
    m1: Extracting Product RPMs to /usr/lpp/mmfs/5.1.0.0 ...
    m1: tail -n +641 /software/Spectrum_Scale_Data_Management-5.1.0.0-x86_64-Linux-install | tar -C /usr/lpp/mmfs/5.1.0.0 --wildcards -xvz  Public_Keys installer hdfs_debs/ubuntu/hdfs_3.1.0.x hdfs_debs/ubuntu/hdfs_3.1.1.x hdfs_rpms/rhel7/hdfs_3.1.0.x hdfs_rpms/rhel7/hdfs_3.1.1.x ganesha_debs/ubuntu ganesha_rpms/rhel7 ganesha_rpms/rhel8 ganesha_rpms/sles15 gpfs_debs/ubuntu gpfs_rpms/rhel object_rpms/rhel8 smb_debs/ubuntu smb_rpms/rhel7 smb_rpms/rhel8 smb_rpms/sles15 tools/repo zimon_debs/ubuntu zimon_rpms/rhel7 zimon_rpms/rhel8 zimon_rpms/sles15 gpfs_debs gpfs_rpms manifest 1> /dev/null
    m1:    - Public_Keys
    m1:    - installer
    m1:    - hdfs_debs/ubuntu/hdfs_3.1.0.x
    m1:    - hdfs_debs/ubuntu/hdfs_3.1.1.x
    m1:    - hdfs_rpms/rhel7/hdfs_3.1.0.x
    m1:    - hdfs_rpms/rhel7/hdfs_3.1.1.x
    m1:    - ganesha_debs/ubuntu
    m1:    - ganesha_rpms/rhel7
    m1:    - ganesha_rpms/rhel8
    m1:    - ganesha_rpms/sles15
    m1:    - gpfs_debs/ubuntu
    m1:    - gpfs_rpms/rhel
    m1:    - object_rpms/rhel8
    m1:    - smb_debs/ubuntu
    m1:    - smb_rpms/rhel7
    m1:    - smb_rpms/rhel8
    m1:    - smb_rpms/sles15
    m1:    - tools/repo
    m1:    - zimon_debs/ubuntu
    m1:    - zimon_rpms/rhel7
    m1:    - zimon_rpms/rhel8
    m1:    - zimon_rpms/sles15
    m1:    - gpfs_debs
    m1:    - gpfs_rpms
    m1:    - manifest
    m1: Removing License Acceptance Process Tool from /usr/lpp/mmfs/5.1.0.0 ...
    m1: rm -rf  /usr/lpp/mmfs/5.1.0.0/LAP_HOME /usr/lpp/mmfs/5.1.0.0/LA_HOME
    m1:
    m1: Removing JRE from /usr/lpp/mmfs/5.1.0.0 ...
    m1: rm -rf /usr/lpp/mmfs/5.1.0.0/ibm-java*tgz
    m1: ==================================================================
    m1: Product packages successfully extracted to /usr/lpp/mmfs/5.1.0.0
    m1:
    m1:    Cluster installation and protocol deployment
    m1:       To install a cluster or deploy protocols with the Spectrum Scale Install Toolkit:  /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale -h
    m1:       To install a cluster manually:  Use the gpfs packages located within /usr/lpp/mmfs/5.1.0.0/gpfs_<rpms/debs>
    m1:
    m1:       To upgrade an existing cluster using the Spectrum Scale Install Toolkit:
    m1:       1) Copy your old clusterdefinition.txt file to the new /usr/lpp/mmfs/5.1.0.0/installer/configuration/ location
    m1:       2) Review and update the config:  /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale config update
    m1:       3) (Optional) Update the toolkit to reflect the current cluster config:
    m1:          /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale config populate -N <node>
    m1:       4) Run the upgrade:  /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale upgrade -h
    m1:
    m1:       To add nodes to an existing cluster using the Spectrum Scale Install Toolkit:
    m1:       1) Add nodes to the clusterdefinition.txt file:  /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale node add -h
    m1:       2) Install GPFS on the new nodes:  /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale install -h
    m1:       3) Deploy protocols on the new nodes:  /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale deploy -h
    m1:
    m1:       To add NSDs or file systems to an existing cluster using the Spectrum Scale Install Toolkit:
    m1:       1) Add nsds and/or filesystems with:  /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale nsd add -h
    m1:       2) Install the NSDs:  /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale install -h
    m1:       3) Deploy the new file system:  /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale deploy -h
    m1:
    m1:       To update the toolkit to reflect the current cluster config examples:
    m1:          /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale config populate -N <node>
    m1:       1) Manual updates outside of the install toolkit
    m1:       2) Sync the current cluster state to the install toolkit prior to upgrade
    m1:       3) Switching from a manually managed cluster to the install toolkit
    m1:
    m1: ==================================================================================
    m1: To get up and running quickly, consult the IBM Spectrum Scale Protocols Quick Overview:
    m1: https://www.ibm.com/support/knowledgecenter/STXKQY_5.1.0/com.ibm.spectrum.scale.v5r10.doc/pdf/scale_povr.pdf
    m1: ===================================================================================
    m1: + echo '===> Script completed successfully!'
    m1: ===> Script completed successfully!
    m1: + exit 0
    m1: + /vagrant/install/script-03.sh VirtualBox
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/install/script-03.sh
    m1: ===> Setup management node (m1) as Spectrum Scale Install Node
    m1: ===>
    m1: =========================================================================================
    m1: + set -e
    m1: + '[' 1 -ne 1 ']'
    m1: + case $1 in
    m1: + PROVIDER=VirtualBox
    m1: + '[' VirtualBox = AWS ']'
    m1: + '[' VirtualBox = VirtualBox ']'
    m1: + INSTALL_NODE=10.1.1.11
    m1: ===> Setup management node (m1) as Spectrum Scale Install Node
    m1: + echo '===> Setup management node (m1) as Spectrum Scale Install Node'
    m1: + sudo /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale setup -s 10.1.1.11
    m1: [ INFO  ] Installing prerequisites for install node
    m1: [ INFO  ] Chef successfully installed and configured
    m1: [ INFO  ] Your control node has been configured to use the IP 10.1.1.11 to communicate with other nodes.
    m1: [ INFO  ] Port 8889 will be used for chef communication.
    m1: [ INFO  ] Port 10080 will be used for package distribution.
    m1: [ INFO  ] Install Toolkit setup type is set to Spectrum Scale (default). If an ESS is in the cluster, run this command to set ESS mode: ./spectrumscale setup -s server_ip -st ess
    m1: [ INFO  ] SUCCESS
    m1: [ INFO  ] Tip : Designate protocol, nsd and admin nodes in your environment to use during install:./spectrumscale -v node add <node> -p  -a -n
    m1: + sudo /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale node list
    m1: [ INFO  ] List of nodes in current configuration:
    m1: [ INFO  ] [Installer Node]
    m1: [ INFO  ] 10.1.1.11
    m1: [ INFO  ]
    m1: [ INFO  ] [Cluster Details]
    m1: [ INFO  ] No cluster name configured
    m1: [ INFO  ] Setup Type: Spectrum Scale
    m1: [ INFO  ]
    m1: [ INFO  ] [Extended Features]
    m1: [ INFO  ] File Audit logging     : Disabled
    m1: [ INFO  ] Watch folder           : Disabled
    m1: [ INFO  ] Management GUI         : Disabled
    m1: [ INFO  ] Performance Monitoring : Enabled
    m1: [ INFO  ] Callhome               : Enabled
    m1: [ INFO  ]
    m1: [ INFO  ] No nodes configured. Use 'spectrumscale node add' to add nodes.
    m1: [ INFO  ] If a cluster already exists use 'spectrumscale config populate -N node_in_cluster' to sync toolkit with existing cluster.
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0
    m1: + /vagrant/install/script-04.sh VirtualBox
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/install/script-04.sh
    m1: ===> Specify Spectrum Scale Cluster
    m1: ===> Target configuration: Single Node Cluster
    m1: ===>
    m1: =========================================================================================
    m1: + set -e
    m1: + '[' 1 -ne 1 ']'
    m1: + case $1 in
    m1: + PROVIDER=VirtualBox
    m1: ===> Specify cluster name
    m1: + echo '===> Specify cluster name'
    m1: + sudo /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale config gpfs -c demo
    m1: [ INFO  ] Setting GPFS cluster name to demo
    m1: ===> Specify to disable call home
    m1: + echo '===> Specify to disable call home'
    m1: + sudo /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale callhome disable
    m1: [ INFO  ] Disabling the callhome.
    m1: [ INFO  ] Configuration updated.
    m1: ===> Specify nodes and their roles
    m1: + echo '===> Specify nodes and their roles'
    m1: + sudo /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale node add -a -g -q -m -n m1
    m1: [ INFO  ] Adding node m1.example.com as a GPFS node.
    m1: [ INFO  ] Adding node m1.example.com as a quorum node.
    m1: [ INFO  ] Adding node m1.example.com as a manager node.
    m1: [ INFO  ] Adding node m1.example.com as an NSD server.
    m1: [ INFO  ] Configuration updated.
    m1: [ INFO  ] Tip :If all node designations are complete, add NSDs to your cluster definition and define required filessytems:./spectrumscale nsd add <device> -p <primary node> -s <secondary node> -fs <file system>
    m1: [ INFO  ] Setting m1.example.com as an admin node.
    m1: [ INFO  ] Configuration updated.
    m1: [ INFO  ] Tip : Designate protocol or nsd nodes in your environment to use during install:./spectrumscale node add <node> -p -n
    m1: [ INFO  ] Setting m1.example.com as a GUI server.
    m1: ===> Show cluster specification
    m1: + echo '===> Show cluster specification'
    m1: + sudo /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale node list
    m1: [ INFO  ] List of nodes in current configuration:
    m1: [ INFO  ] [Installer Node]
    m1: [ INFO  ] 10.1.1.11
    m1: [ INFO  ]
    m1: [ INFO  ] [Cluster Details]
    m1: [ INFO  ] Name: demo
    m1: [ INFO  ] Setup Type: Spectrum Scale
    m1: [ INFO  ]
    m1: [ INFO  ] [Extended Features]
    m1: [ INFO  ] File Audit logging     : Disabled
    m1: [ INFO  ] Watch folder           : Disabled
    m1: [ INFO  ] Management GUI         : Enabled
    m1: [ INFO  ] Performance Monitoring : Enabled
    m1: [ INFO  ] Callhome               : Disabled
    m1: [ INFO  ]
    m1: [ INFO  ] GPFS           Admin  Quorum  Manager   NSD   Protocol   GUI     OS   Arch
    m1: [ INFO  ] Node            Node   Node     Node   Server   Node    Server
    m1: [ INFO  ] m1.example.com   X       X       X       X                X    rhel7  x86_64
    m1: [ INFO  ]
    m1: [ INFO  ] [Export IP address]
    m1: [ INFO  ] No export IP addresses configured
    m1: + '[' VirtualBox = AWS ']'
    m1: + '[' VirtualBox = VirtualBox ']'
    m1: + sudo /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale nsd add -p m1.example.com -fs fs1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
    m1: [ INFO  ] Connecting to m1.example.com to check devices and expand wildcards.
    m1: [ INFO  ] Looking up details of /dev/sdb.
    m1: [ INFO  ] The installer will create the new file system fs1 if it does not exist.
    m1: [ INFO  ] Adding NSD None on m1.example.com using device /dev/sdb.
    m1: [ INFO  ] Looking up details of /dev/sdc.
    m1: [ INFO  ] Adding NSD None on m1.example.com using device /dev/sdc.
    m1: [ INFO  ] Looking up details of /dev/sdd.
    m1: [ INFO  ] Adding NSD None on m1.example.com using device /dev/sdd.
    m1: [ INFO  ] Looking up details of /dev/sde.
    m1: [ INFO  ] Adding NSD None on m1.example.com using device /dev/sde.
    m1: [ INFO  ] Looking up details of /dev/sdf.
    m1: [ INFO  ] Adding NSD None on m1.example.com using device /dev/sdf.
    m1: [ INFO  ] Configuration updated
    m1: [ INFO  ] Tip : If all node designations and any required protocol configurations are complete, proceed to check the installation configuration: ./spectrumscale install --precheck
    m1: + sudo /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale nsd add -p m1.example.com /dev/sdg /dev/sdh
    m1: [ INFO  ] Connecting to m1.example.com to check devices and expand wildcards.
    m1: [ INFO  ] Looking up details of /dev/sdg.
    m1: [ INFO  ] No filesystem has been specified for this NSD. The installer may create a default filesystem to assign this NSD if there is no filesystem specified in the cluster. For more information, see 'Creating file systems' under 'Defining configuration options for the spectrumscale installation toolkit'in IBM Spectrum Scale documentation on Knowledge Center.
    m1: [ INFO  ] Adding NSD None on m1.example.com using device /dev/sdg.
    m1: [ INFO  ] Looking up details of /dev/sdh.
    m1: [ INFO  ] No filesystem has been specified for this NSD. The installer may create a default filesystem to assign this NSD if there is no filesystem specified in the cluster. For more information, see 'Creating file systems' under 'Defining configuration options for the spectrumscale installation toolkit'in IBM Spectrum Scale documentation on Knowledge Center.
    m1: [ INFO  ] Adding NSD None on m1.example.com using device /dev/sdh.
    m1: [ INFO  ] Configuration updated
    m1: [ INFO  ] Tip : If all node designations and any required protocol configurations are complete, proceed to check the installation configuration: ./spectrumscale install --precheck
    m1: ===> Show NSD specification
    m1: + echo '===> Show NSD specification'
    m1: + sudo /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale nsd list
    m1: [ INFO  ] Name FS      Size(GB) Usage   FG Pool    Device   Servers
    m1: [ INFO  ] nsd1 fs1     2.0      Default 1  Default /dev/sdb [m1.example.com]
    m1: [ INFO  ] nsd2 fs1     2.0      Default 1  Default /dev/sdc [m1.example.com]
    m1: [ INFO  ] nsd3 fs1     2.0      Default 1  Default /dev/sdd [m1.example.com]
    m1: [ INFO  ] nsd4 fs1     2.0      Default 1  Default /dev/sde [m1.example.com]
    m1: [ INFO  ] nsd5 fs1     2.0      Default 1  Default /dev/sdf [m1.example.com]
    m1: [ INFO  ] nsd6 Default 10.0     Default 1  Default /dev/sdg [m1.example.com]
    m1: [ INFO  ] nsd7 Default 10.0     Default 1  Default /dev/sdh [m1.example.com]
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0
    m1: + /vagrant/install/script-05.sh VirtualBox
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/install/script-05.sh
    m1: ===> Install Spectrum Scale and create a Spectrum Scale cluster
    m1: ===>
    m1: =========================================================================================
    m1: ===> Install Spectrum Scale and create Spectrum Scale cluster
    m1: + set -e
    m1: + echo '===> Install Spectrum Scale and create Spectrum Scale cluster'
    m1: + sudo /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale install
    m1: [ WARN  ] On fresh installations, you might get Chef related errors during installation toolkit deployment. To work around this problem, on fresh installations, before the installation toolkit prechecks, you must create passwordless SSH keys by using the 'ssh-keygen -m PEM' command.
    m1: [ INFO  ] Logging to file: /usr/lpp/mmfs/5.1.0.0/installer/logs/INSTALL-11-11-2020_10:48:48.log
    m1: [ INFO  ] Validating configuration
    m1: [ WARN  ] NTP is not set to be configured with the install toolkit.See './spectrumscale config ntp -h' to setup.
    m1: [ WARN  ] Only one GUI server specified. The Graphical User Interface will not be highly available.
    m1: [ INFO  ] Install toolkit will not configure file audit logging as it has been disabled.
    m1: [ INFO  ] Checking for knife bootstrap configuration...
    m1: [ INFO  ] Running pre-install checks
    m1: [ INFO  ] Running environment checks
    m1: [ INFO  ] Skipping license validation as no existing GPFS cluster detected.
    m1: [ WARN  ] With 1 node specified functionality will be limited
    m1: [ INFO  ] Checking pre-requisites for portability layer.
    m1: [ INFO  ] GPFS precheck OK
    m1: [ WARN  ] The NSD nsd1 only has one server configured. This may affect the ability to run concurrent maintenance on this cluster.
    m1: [ WARN  ] The NSD nsd2 only has one server configured. This may affect the ability to run concurrent maintenance on this cluster.
    m1: [ WARN  ] The NSD nsd3 only has one server configured. This may affect the ability to run concurrent maintenance on this cluster.
    m1: [ WARN  ] The NSD nsd4 only has one server configured. This may affect the ability to run concurrent maintenance on this cluster.
    m1: [ WARN  ] The NSD nsd5 only has one server configured. This may affect the ability to run concurrent maintenance on this cluster.
    m1: [ WARN  ] The NSD nsd6 only has one server configured. This may affect the ability to run concurrent maintenance on this cluster.
    m1: [ WARN  ] The NSD nsd7 only has one server configured. This may affect the ability to run concurrent maintenance on this cluster.
    m1: [ INFO  ] Running environment checks for Performance Monitoring
    m1: [ INFO  ] Running environment checks for file  Audit logging
    m1: [ INFO  ] Network check from admin node m1.example.com to all other nodes in the cluster passed
    m1: [ WARN  ] Ephemeral port range is not set. Please set valid ephemeral port range using the command ./spectrumscale config gpfs --ephemeral_port_range . You may set the default values as 60000-61000
    m1: [ INFO  ] The install toolkit will not configure call home as it is disabled. To enable call home, use the following CLI command: ./spectrumscale callhome enable
    m1: [ WARN  ] On the nodes: [['m1.example.com with OS RHEL7']], the Portmapper service (rpcbind) is found running and it is advised to disable it or verify with the operating system's administrator.
    m1: [ INFO  ] Preparing nodes for install
    m1: [ INFO  ] Installing Chef (deploy tool)
    m1: [ INFO  ] Installing Chef Client on nodes
    m1: [ INFO  ] Checking for chef-client and installing if required on m1.example.com
    m1: [ INFO  ] Chef Client 13.6.4 is on node m1.example.com
    m1: [ INFO  ] Installing GPFS
    m1: [ INFO  ] GPFS Packages to be installed: gpfs.base, gpfs.gpl, gpfs.msg.en_US, gpfs.docs, and gpfs.gskit
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:49:20] IBM SPECTRUM SCALE: Removing Yum cache repository (SS229)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:49:20] IBM SPECTRUM SCALE: Creating core gpfs repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:49:21] IBM SPECTRUM SCALE: Creating gpfs librdkafka repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:49:22] IBM SPECTRUM SCALE: Configuring GPFS performance monitoring repository (SS31)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:49:22] IBM SPECTRUM SCALE: Installing core GPFS packages (SS01)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:49:41] IBM SPECTRUM SCALE: Installing gpfs afm cos package (SS265)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:49:45] IBM SPECTRUM SCALE: Installing GPFS license package (SS230)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:49:47] IBM SPECTRUM SCALE: Installing performance monitoring packages (SS70)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:49:52] IBM SPECTRUM SCALE: Building portability layer (SS02)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:50:14] IBM SPECTRUM SCALE: Generating node description file for cluster configuration (SS03)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:50:14] IBM SPECTRUM SCALE: Creating GPFS cluster with default profile (SS04)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:50:20] IBM SPECTRUM SCALE: Setting client licenses (SS09)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:50:20] IBM SPECTRUM SCALE: Setting server licenses on manager nodes (SS12)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:50:22] IBM SPECTRUM SCALE: Setting server licenses on quorum nodes (SS11)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:50:22] IBM SPECTRUM SCALE: Setting server licenses on server nodes (SS11)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:50:23] IBM SPECTRUM SCALE: Setting server licenses on protocol nodes (SS11)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:50:23] IBM SPECTRUM SCALE: Setting server licenses on nsd nodes (SS11)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:50:23] IBM SPECTRUM SCALE: Setting ephemeral ports for GPFS daemon communication (SS13)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:50:24] IBM SPECTRUM SCALE: Starting GPFS (SS05)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:50:25] IBM SPECTRUM SCALE: Checking state of GPFS (SS103)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:07] IBM SPECTRUM SCALE: Tearing down core gpfs repository (SS06)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:08] IBM SPECTRUM SCALE: Tearing down GPFS performance monitoring repository (SS35)
    m1: [ INFO  ] Installing NSDs
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:24] IBM SPECTRUM SCALE: Generating stanza file for NSD creation (SS14)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:24] IBM SPECTRUM SCALE: Setting server licenses on NSD servers (SS15)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:25] IBM SPECTRUM SCALE: Creating NSDs (SS16)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:33] IBM SPECTRUM SCALE: Generating stanza file for NSD settings (SS17)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:33] IBM SPECTRUM SCALE: Updating NSD settings (SS18)
    m1: [ INFO  ] Installing Performance Monitoring
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:39] IBM SPECTRUM SCALE: Removing Yum cache repository (SS229)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:39] IBM SPECTRUM SCALE: Creating core gpfs repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:39] IBM SPECTRUM SCALE: Creating gpfs librdkafka repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:40] IBM SPECTRUM SCALE: Configuring GPFS performance monitoring repository (SS31)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:41] IBM SPECTRUM SCALE: Installing performance monitoring packages (SS70)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:48] IBM SPECTRUM SCALE: Tearing down core gpfs repository (SS06)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:49] IBM SPECTRUM SCALE: Tearing down GPFS performance monitoring repository (SS35)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:54] IBM SPECTRUM SCALE: Starting performance monitoring collector (SS73)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:58] IBM SPECTRUM SCALE: Generating configuration for performance monitoring tools (SS83)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:52:58] IBM SPECTRUM SCALE: Modifying sensor configuration for performance monitoring tools (SS85)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:53:01] IBM SPECTRUM SCALE: Enabling GPFS Disk Capacity sensors (SS90)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:53:06] IBM SPECTRUM SCALE: Enabling performance monitoring sensors (SS84)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:53:11] IBM SPECTRUM SCALE: Restarting performance monitoring sensors (SS74)
    m1: [ INFO  ] Installing GUI
    m1: [ INFO  ] GUI packages to be installed: gpfs.gui
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:53:15] IBM SPECTRUM SCALE: Removing Yum cache repository (SS229)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:53:15] IBM SPECTRUM SCALE: Creating core gpfs repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:53:16] IBM SPECTRUM SCALE: Creating gpfs librdkafka repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:53:17] IBM SPECTRUM SCALE: Configuring GPFS performance monitoring repository (SS31)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:53:17] IBM SPECTRUM SCALE: Installing GPFS GUI package (SS19)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:53:27] IBM SPECTRUM SCALE: Starting the Graphical User Interface service (SS23)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:54:12] IBM SPECTRUM SCALE: Tearing down core gpfs repository (SS06)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:54:15] IBM SPECTRUM SCALE: Tearing down GPFS performance monitoring repository (SS35)
    m1: [ INFO  ] Installing FILE AUDIT LOGGING
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:54:25] IBM SPECTRUM SCALE: Removing Yum cache repository (SS229)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:54:25] IBM SPECTRUM SCALE: Creating core gpfs repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:54:26] IBM SPECTRUM SCALE: Creating gpfs librdkafka repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:54:27] IBM SPECTRUM SCALE: Configuring GPFS performance monitoring repository (SS31)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:54:30] IBM SPECTRUM SCALE: Installing gpfs librdkafka packages (SS265)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:55:23] IBM SPECTRUM SCALE: Installing gpfs librdkafka packages (SS265)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:55:27] IBM SPECTRUM SCALE: Tearing down core gpfs repository (SS06)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:55:28] IBM SPECTRUM SCALE: Tearing down GPFS performance monitoring repository (SS35)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:55:35] IBM SPECTRUM SCALE: Removing Yum cache repository (SS229)
    m1: [ INFO  ] Checking for a successful install
    m1: [ INFO  ] Checking state of Chef (deploy tool)
    m1: [ INFO  ] Chef (deploy tool) ACTIVE
    m1: [ INFO  ] Checking state of GPFS
    m1: [ INFO  ] Checking state of GPFS on all nodes
    m1: [ INFO  ] GPFS active on all nodes
    m1: [ INFO  ] GPFS ACTIVE
    m1: [ INFO  ] Checking state of NSDs
    m1: [ INFO  ] NSDs ACTIVE
    m1: [ INFO  ] Checking state of Performance Monitoring
    m1: [ INFO  ] Running Performance Monitoring post-install checks
    m1: [ INFO  ] pmcollector running on all nodes
    m1: [ INFO  ] pmsensors running on all nodes
    m1: [ INFO  ] Performance Monitoring ACTIVE
    m1: [ INFO  ] Checking state of GUI
    m1: [ INFO  ] Running Graphical User Interface  post-install checks
    m1: [ INFO  ] Graphical User Interface running on all GUI servers
    m1: [ INFO  ] Enter one of the following addresses into a web browser to access the Graphical User Interface: 10.0.2.15, 10.1.1.11, m1.example.com
    m1: [ INFO  ] GUI ACTIVE
    m1: [ INFO  ] SUCCESS
    m1: [ INFO  ] All services running
    m1: [ INFO  ] StanzaFile and NodeDesc file for NSD, filesystem, and cluster setup have been saved to /usr/lpp/mmfs folder on node: m1.example.com
    m1: [ INFO  ] Installation successful. 1 GPFS node active in cluster demo.example.com. Completed in 6 minutes 53 seconds.
    m1: [ INFO  ] Tip :If all node designations and any required protocol configurations are complete, proceed to check the deploy configuration:./spectrumscale deploy --precheck
    m1: ===> Show cluster configuration
    m1: + echo '===> Show cluster configuration'
    m1: + sudo mmlscluster
    m1:
    m1: GPFS cluster information
    m1: ========================
    m1:   GPFS cluster name:         demo.example.com
    m1:   GPFS cluster id:           4200744107496948200
    m1:   GPFS UID domain:           demo.example.com
    m1:   Remote shell command:      /usr/bin/ssh
    m1:   Remote file copy command:  /usr/bin/scp
    m1:   Repository type:           CCR
    m1:
    m1:  Node  Daemon node name  IP address  Admin node name  Designation
    m1: ------------------------------------------------------------------
    m1:    1   m1.example.com    10.1.2.11   m1.example.com   quorum-manager-perfmon
    m1: ===> Show node state
    m1: + echo '===> Show node state'
    m1: + sudo mmgetstate -a
    m1:
    m1:  Node number  Node name        GPFS state
    m1: -------------------------------------------
    m1:        1      m1               active
    m1: ===> Show cluster health
    m1: + echo '===> Show cluster health'
    m1: + sudo mmhealth cluster show
    m1:
    m1: Component           Total         Failed       Degraded        Healthy          Other
    m1: -------------------------------------------------------------------------------------
    m1: NODE                    1              0              0              0              1
    m1: GPFS                    1              0              0              0              1
    m1: NETWORK                 1              0              0              1              0
    m1: FILESYSTEM              0              0              0              0              0
    m1: DISK                    0              0              0              0              0
    m1: GUI                     1              0              0              1              0
    m1: PERFMON                 1              0              0              1              0
    m1: THRESHOLD               1              0              0              1              0
    m1: ===> Show node health
    m1: + echo '===> Show node health'
    m1: + sudo mmhealth node show
    m1:
    m1: Node name:      m1.example.com
    m1: Node status:    TIPS
    m1: Status Change:  3 min. ago
    m1:
    m1: Component      Status        Status Change     Reasons
    m1: ----------------------------------------------------------------------------------------
    m1: GPFS           TIPS          3 min. ago        callhome_not_enabled, gpfs_pagepool_small
    m1: NETWORK        HEALTHY       4 min. ago        -
    m1: FILESYSTEM     HEALTHY       4 min. ago        -
    m1: DISK           HEALTHY       2 min. ago        -
    m1: GUI            HEALTHY       1 min. ago        -
    m1: PERFMON        HEALTHY       2 min. ago        -
    m1: THRESHOLD      HEALTHY       Now               -
    m1: ===> Show NSDs
    m1: + echo '===> Show NSDs'
    m1: + sudo mmlsnsd
    m1:
    m1:  File system   Disk name       NSD servers
    m1: ------------------------------------------------------------------------------
    m1:  (free disk)   nsd1            m1.example.com
    m1:  (free disk)   nsd2            m1.example.com
    m1:  (free disk)   nsd3            m1.example.com
    m1:  (free disk)   nsd4            m1.example.com
    m1:  (free disk)   nsd5            m1.example.com
    m1:  (free disk)   nsd6            m1.example.com
    m1:  (free disk)   nsd7            m1.example.com
    m1: ===> Show GUI service
    m1: + echo '===> Show GUI service'
    m1: + service gpfsgui.service status
    m1: Redirecting to /bin/systemctl status gpfsgui.service
    m1: ● gpfsgui.service - IBM_Spectrum_Scale Administration GUI
    m1:    Loaded: loaded (/usr/lib/systemd/system/gpfsgui.service; enabled; vendor preset: disabled)
    m1:    Active: active (running) since Wed 2020-11-11 10:54:02 UTC; 1min 40s ago
    m1:  Main PID: 29297 (java)
    m1:    Status: "GSS/GPFS GUI started"
    m1:    Memory: 261.7M (limit: 2.0G)
    m1:    CGroup: /system.slice/gpfsgui.service
    m1:            ├─17767 /bin/bash -c set -m; grep -q -E 'ccrEnabled\s+no' /var/mmfs/gen/mmfs.cfg || sudo mmccr 'fget' '_gui.keystore_settings' '/var/lib/mmfs/gui/keystore_settings.json.e292af06-c217-4081-a309-add4e821b385'
    m1:            └─29297 /usr/lpp/mmfs/java/jre/bin/java -XX:+HeapDumpOnOutOfMemoryError -Dhttps.protocols=TLSv1.2,TLSv1.3 -Djava.library.path=/opt/ibm/wlp/usr/servers/gpfsgui/lib/ -javaagent:/opt/ibm/wlp/bin/tools/ws-javaagent.jar -jar /opt/ibm/wlp/bin/tools/ws-server.jar gpfsgui --clean
    m1:
    m1: Nov 11 10:55:13 m1 sudo[17306]: scalemgmt : TTY=unknown ; PWD=/opt/ibm/wlp ; USER=root ; COMMAND=/usr/lpp/mmfs/bin/mmccr fget mmsdrfs /var/lib/mmfs/gui/mmsdrfs.cct
    m1: Nov 11 10:55:13 m1 sudo[17311]: scalemgmt : TTY=unknown ; PWD=/opt/ibm/wlp ; USER=root ; COMMAND=/usr/lpp/mmfs/gui/bin-sudo/chown_ccr_fget_file.sh mmsdrfs.cct
    m1: Nov 11 10:55:13 m1 sudo[17344]: scalemgmt : TTY=unknown ; PWD=/opt/ibm/wlp ; USER=root ; COMMAND=/usr/lpp/mmfs/bin/mmauth show
    m1: Nov 11 10:55:14 m1 sudo[17427]: scalemgmt : TTY=unknown ; PWD=/opt/ibm/wlp ; USER=root ; COMMAND=/usr/lpp/mmfs/gui/bin-sudo/tsstatus.sh
    m1: Nov 11 10:55:14 m1 sudo[17448]: scalemgmt : TTY=unknown ; PWD=/opt/ibm/wlp ; USER=root ; COMMAND=/usr/lpp/mmfs/bin/mmhealth thresholds list -Y
    m1: Nov 11 10:55:14 m1 sudo[17463]: scalemgmt : TTY=unknown ; PWD=/opt/ibm/wlp ; USER=root ; COMMAND=/usr/lpp/mmfs/bin/mmcallhome capability list -Y
    m1: Nov 11 10:55:14 m1 sudo[17482]: scalemgmt : TTY=unknown ; PWD=/opt/ibm/wlp ; USER=root ; COMMAND=/usr/lpp/mmfs/bin/mmhealth node show --verbose -N m1 -Y
    m1: Nov 11 10:55:15 m1 sudo[17522]: scalemgmt : TTY=unknown ; PWD=/opt/ibm/wlp ; USER=root ; COMMAND=/usr/lpp/mmfs/bin/mmperfmon config show
    m1: Nov 11 10:55:16 m1 sudo[17724]: scalemgmt : TTY=unknown ; PWD=/opt/ibm/wlp ; USER=root ; COMMAND=/usr/lpp/mmfs/bin/mmhealth node eventlog -N m1 -Y
    m1: Nov 11 10:55:18 m1 sudo[17771]: scalemgmt : TTY=unknown ; PWD=/opt/ibm/wlp ; USER=root ; COMMAND=/usr/lpp/mmfs/bin/mmccr fget _gui.keystore_settings /var/lib/mmfs/gui/keystore_settings.json.e292af06-c217-4081-a309-add4e821b385
    m1: ===> Show Zimon Collector service
    m1: + echo '===> Show Zimon Collector service'
    m1: + service pmcollector status
    m1: Redirecting to /bin/systemctl status pmcollector.service
    m1: ● pmcollector.service - zimon collector daemon
    m1:    Loaded: loaded (/usr/lib/systemd/system/pmcollector.service; enabled; vendor preset: disabled)
    m1:    Active: active (running) since Wed 2020-11-11 10:53:01 UTC; 2min 42s ago
    m1:  Main PID: 25060 (pmcollector)
    m1:    Memory: 852.0K
    m1:    CGroup: /system.slice/pmcollector.service
    m1:            └─25060 /opt/IBM/zimon/sbin/pmcollector -C /opt/IBM/zimon/ZIMonCollector.cfg -R /var/run/perfmon
    m1:
    m1: Nov 11 10:53:01 m1 systemd[1]: Stopped zimon collector daemon.
    m1: Nov 11 10:53:01 m1 systemd[1]: Started zimon collector daemon.
    m1: ===> Show Zimon Sensors service
    m1: + echo '===> Show Zimon Sensors service'
    m1: + service pmsensors status
    m1: Redirecting to /bin/systemctl status pmsensors.service
    m1: ● pmsensors.service - zimon sensor daemon
    m1:    Loaded: loaded (/usr/lib/systemd/system/pmsensors.service; enabled; vendor preset: disabled)
    m1:    Active: active (running) since Wed 2020-11-11 10:53:07 UTC; 2min 35s ago
    m1:  Main PID: 27207 (pmsensors)
    m1:    Memory: 8.8M
    m1:    CGroup: /system.slice/pmsensors.service
    m1:            ├─27207 /opt/IBM/zimon/sbin/pmsensors -C /opt/IBM/zimon/ZIMonSensors.cfg -R /var/run/perfmon
    m1:            ├─27247 /opt/IBM/zimon/MMDFProxy
    m1:            ├─27249 /opt/IBM/zimon/MmpmonSockProxy
    m1:            └─27265 /opt/IBM/zimon/MMCmdProxy
    m1:
    m1: Nov 11 10:53:07 m1 pmsensors[27207]: SensorFactory: GPFSFileset registered
    m1: Nov 11 10:53:07 m1 pmsensors[27207]: SensorFactory: GPFSPool registered
    m1: Nov 11 10:53:07 m1 pmsensors[27207]: SensorFactory: GPFSWaiters registered
    m1: Nov 11 10:53:07 m1 pmsensors[27207]: SensorFactory: GPFSEventProducer registered
    m1: Nov 11 10:53:07 m1 pmsensors[27207]: SensorFactory: GPFSBufMgr registered
    m1: Nov 11 10:53:07 m1 pmsensors[27207]: SensorFactory: GPFSOpenFile registered
    m1: Nov 11 10:53:07 m1 pmsensors[27207]: SensorFactory: GPFSQoS registered
    m1: Nov 11 10:53:07 m1 pmsensors[27207]: SensorFactory: GPFSVFSX registered
    m1: Nov 11 10:53:07 m1 pmsensors[27207]: SensorFactory: Blaster registered
    m1: Nov 11 10:53:07 m1 pmsensors[27207]: Nov-11 10:53:07  [Info   ]: Successfully read configuration from file /opt/IBM/zimon/ZIMonSensors.cfg
    m1: ==> Initialize Spectrum Scale GUI
    m1: + echo '==> Initialize Spectrum Scale GUI'
    m1: + sudo /usr/lpp/mmfs/gui/cli/initgui
    m1: EFSSA0634I The cluster demo.example.com has already been added as managed cluster.
    m1: EFSSG1000I The command completed successfully.
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0
    m1: + /vagrant/install/script-06.sh VirtualBox
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/install/script-06.sh
    m1: ===> Create Spectrum Scale filesystems
    m1: ===>
    m1: =========================================================================================
    m1: ===> Show filesystem specification
    m1: + set -e
    m1: + echo '===> Show filesystem specification'
    m1: + sudo /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale filesystem list
    m1: [ INFO  ] Name   BlockSize   Mountpoint   NSDs Assigned  Default Data Replicas     Max Data Replicas     Default Metadata Replicas     Max Metadata Replicas
    m1: [ INFO  ] fs1    Default (4M) /ibm/fs1    5              1                         2                     1                             2
    m1: [ INFO  ]
    m1: ===> Create Spectrum Scale filesystems
    m1: + echo '===> Create Spectrum Scale filesystems'
    m1: + sudo /usr/lpp/mmfs/5.1.0.0/installer/spectrumscale deploy
    m1: [ WARN  ] On fresh installations, you might get Chef related errors during installation toolkit deployment. To work around this problem, on fresh installations, before the installation toolkit prechecks, you must create passwordless SSH keys by using the 'ssh-keygen -m PEM' command.
    m1: [ INFO  ] Logging to file: /usr/lpp/mmfs/5.1.0.0/installer/logs/DEPLOY-11-11-2020_10:55:47.log
    m1: [ INFO  ] Validating configuration
    m1: [ WARN  ] Only one GUI server specified. The Graphical User Interface will not be highly available.
    m1: [ INFO  ] Install toolkit will not configure file audit logging as it has been disabled.
    m1: [ INFO  ] Checking for knife bootstrap configuration...
    m1: [ INFO  ] Running pre-install checks
    m1: [ INFO  ] NSDs are in a valid state
    m1: [ INFO  ] Running environment checks for Performance Monitoring
    m1: [ INFO  ] Running environment checks for file  Audit logging
    m1: [ INFO  ] Network check from admin node m1.example.com to all other nodes in the cluster passed
    m1: [ WARN  ] Ephemeral port range is not set. Please set valid ephemeral port range using the command ./spectrumscale config gpfs --ephemeral_port_range . You may set the default values as 60000-61000
    m1: [ INFO  ] The install toolkit will not configure call home as it is disabled. To enable call home, use the following CLI command: ./spectrumscale callhome enable
    m1: [ WARN  ] On the nodes: [['m1.example.com with OS RHEL7']], the Portmapper service (rpcbind) is found running and it is advised to disable it or verify with the operating system's administrator.
    m1: [ INFO  ] Preparing nodes for install
    m1: [ INFO  ] Installing Chef (deploy tool)
    m1: [ INFO  ] Installing Chef Client on nodes
    m1: [ INFO  ] Checking for chef-client and installing if required on m1.example.com
    m1: [ INFO  ] Chef Client 13.6.4 is on node m1.example.com
    m1: [ INFO  ] Installing Filesystem
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:13] IBM SPECTRUM SCALE: Creating GPFS filesystem fs1 (SS17)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:25] IBM SPECTRUM SCALE: Mounting GPFS filesystem fs1 (SS18)
    m1: [ INFO  ] Installing Performance Monitoring
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:37] IBM SPECTRUM SCALE: Removing Yum cache repository (SS229)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:37] IBM SPECTRUM SCALE: Creating core gpfs repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:38] IBM SPECTRUM SCALE: Creating gpfs librdkafka repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:39] IBM SPECTRUM SCALE: Configuring GPFS performance monitoring repository (SS31)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:40] IBM SPECTRUM SCALE: Installing performance monitoring packages (SS70)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:47] IBM SPECTRUM SCALE: Tearing down core gpfs repository (SS06)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:48] IBM SPECTRUM SCALE: Tearing down GPFS performance monitoring repository (SS35)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:54] IBM SPECTRUM SCALE: Starting performance monitoring collector (SS73)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:58] IBM SPECTRUM SCALE: Generating configuration for performance monitoring tools (SS83)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:59] IBM SPECTRUM SCALE: Modifying sensor configuration for performance monitoring tools (SS85)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:56:59] IBM SPECTRUM SCALE: Enabling GPFS Disk Capacity sensors (SS90)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:04] IBM SPECTRUM SCALE: Enabling performance monitoring sensors (SS84)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:13] IBM SPECTRUM SCALE: Restarting performance monitoring sensors (SS74)
    m1: [ INFO  ] Installing GUI
    m1: [ INFO  ] GUI packages to be installed: gpfs.gui
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:19] IBM SPECTRUM SCALE: Removing Yum cache repository (SS229)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:19] IBM SPECTRUM SCALE: Creating core gpfs repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:20] IBM SPECTRUM SCALE: Creating gpfs librdkafka repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:20] IBM SPECTRUM SCALE: Configuring GPFS performance monitoring repository (SS31)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:21] IBM SPECTRUM SCALE: Installing GPFS GUI package (SS19)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:24] IBM SPECTRUM SCALE: Starting the Graphical User Interface service (SS23)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:28] IBM SPECTRUM SCALE: Tearing down core gpfs repository (SS06)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:29] IBM SPECTRUM SCALE: Tearing down GPFS performance monitoring repository (SS35)
    m1: [ INFO  ] Installing FILE AUDIT LOGGING
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:35] IBM SPECTRUM SCALE: Removing Yum cache repository (SS229)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:35] IBM SPECTRUM SCALE: Creating core gpfs repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:36] IBM SPECTRUM SCALE: Creating gpfs librdkafka repository (SS00)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:37] IBM SPECTRUM SCALE: Configuring GPFS performance monitoring repository (SS31)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:39] IBM SPECTRUM SCALE: Installing gpfs librdkafka packages (SS265)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:39] IBM SPECTRUM SCALE: Installing gpfs librdkafka packages (SS265)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:43] IBM SPECTRUM SCALE: Tearing down core gpfs repository (SS06)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:44] IBM SPECTRUM SCALE: Tearing down GPFS performance monitoring repository (SS35)
    m1: [ INFO  ] [m1.example.com 11-11-2020 10:57:51] IBM SPECTRUM SCALE: Removing Yum cache repository (SS229)
    m1: [ INFO  ] Checking for a successful install
    m1: [ INFO  ] Checking state of Chef (deploy tool)
    m1: [ INFO  ] Chef (deploy tool) ACTIVE
    m1: [ INFO  ] Checking state of Filesystem
    m1: [ INFO  ] File systems have been created successfully
    m1: [ INFO  ] Filesystem ACTIVE
    m1: [ INFO  ] Checking state of Performance Monitoring
    m1: [ INFO  ] Running Performance Monitoring post-install checks
    m1: [ INFO  ] pmcollector running on all nodes
    m1: [ INFO  ] pmsensors running on all nodes
    m1: [ INFO  ] Performance Monitoring ACTIVE
    m1: [ INFO  ] Checking state of GUI
    m1: [ INFO  ] Running Graphical User Interface  post-install checks
    m1: [ INFO  ] Graphical User Interface running on all GUI servers
    m1: [ INFO  ] Enter one of the following addresses into a web browser to access the Graphical User Interface: 10.0.2.15, 10.1.1.11, m1.example.com
    m1: [ INFO  ] GUI ACTIVE
    m1: [ INFO  ] SUCCESS
    m1: [ INFO  ] All services running
    m1: [ INFO  ] StanzaFile and NodeDesc file for NSD, filesystem, and cluster setup have been saved to /usr/lpp/mmfs folder on node: m1.example.com
    m1: [ INFO  ] Successfully installed protocol packages on 0 protocol nodes. Components installed: Chef (deploy tool), Filesystem, Performance Monitoring, GUI, FILE AUDIT LOGGING. it took 2 minutes and 9 seconds.
    m1: ===> Enable quotas
    m1: ===> Note: Capacity reports in the GUI depend on enabled quotas
    m1: + echo '===> Enable quotas'
    m1: + echo '===> Note: Capacity reports in the GUI depend on enabled quotas'
    m1: + sudo mmchfs fs1 -Q yes
    m1: mmchfs: mmsdrfs propagation completed.
    m1: ===> Show Spectrum Scale filesystem configuration
    m1: + echo '===> Show Spectrum Scale filesystem configuration'
    m1: + sudo mmlsfs all
    m1:
    m1: File system attributes for /dev/fs1:
    m1: ====================================
    m1: flag                value                    description
    m1: ------------------- ------------------------ -----------------------------------
    m1:  -f                 8192                     Minimum fragment (subblock) size in bytes
    m1:  -i                 4096                     Inode size in bytes
    m1:  -I                 32768                    Indirect block size in bytes
    m1:  -m                 1                        Default number of metadata replicas
    m1:  -M                 2                        Maximum number of metadata replicas
    m1:  -r                 1                        Default number of data replicas
    m1:  -R                 2                        Maximum number of data replicas
    m1:  -j                 scatter                  Block allocation type
    m1:  -D                 nfs4                     File locking semantics in effect
    m1:  -k                 nfs4                     ACL semantics in effect
    m1:  -n                 100                      Estimated number of nodes that will mount file system
    m1:  -B                 4194304                  Block size
    m1:  -Q                 user;group;fileset       Quotas accounting enabled
    m1:                     user;group;fileset       Quotas enforced
    m1:                     none                     Default quotas enabled
    m1:  --perfileset-quota No                       Per-fileset quota enforcement
    m1:  --filesetdf        No                       Fileset df enabled?
    m1:  -V                 24.00 (5.1.0.0)          File system version
    m1:  --create-time      Wed Nov 11 10:56:25 2020 File system creation time
    m1:  -z                 No                       Is DMAPI enabled?
    m1:  -L                 33554432                 Logfile size
    m1:  -E                 Yes                      Exact mtime mount option
    m1:  -S                 relatime                 Suppress atime mount option
    m1:  -K                 whenpossible             Strict replica allocation option
    m1:  --fastea           Yes                      Fast external attributes enabled?
    m1:  --encryption       No                       Encryption enabled?
    m1:  --inode-limit      107520                   Maximum number of inodes
    m1:  --log-replicas     0                        Number of log replicas
    m1:  --is4KAligned      Yes                      is4KAligned?
    m1:  --rapid-repair     Yes                      rapidRepair enabled?
    m1:  --write-cache-threshold 0                   HAWC Threshold (max 65536)
    m1:  --subblocks-per-full-block 512              Number of subblocks per full block
    m1:  -P                 system                   Disk storage pools in file system
    m1:  --file-audit-log   No                       File Audit Logging enabled?
    m1:  --maintenance-mode No                       Maintenance Mode enabled?
    m1:  -d                 nsd1;nsd2;nsd3;nsd4;nsd5  Disks in file system
    m1:  -A                 yes                      Automatic mount option
    m1:  -o                 none                     Additional mount options
    m1:  -T                 /ibm/fs1                 Default mount point
    m1:  --mount-priority   0                        Mount priority
    m1: ===> Show Spectrum Scale filesystem usage
    m1: + echo '===> Show Spectrum Scale filesystem usage'
    m1: + sudo mmdf fs1
    m1: disk                disk size  failure holds    holds           free in KB          free in KB
    m1: name                    in KB    group metadata data        in full blocks        in fragments
    m1: --------------- ------------- -------- -------- ----- -------------------- -------------------
    m1: Disks in storage pool: system (Maximum disk size allowed is 29.12 GB)
    m1: nsd1                  2097152        1 Yes      Yes         1273856 ( 61%)         11384 ( 1%)
    m1: nsd2                  2097152        1 Yes      Yes         1269760 ( 61%)         11128 ( 1%)
    m1: nsd3                  2097152        1 Yes      Yes         1290240 ( 62%)         11128 ( 1%)
    m1: nsd4                  2097152        1 Yes      Yes         1286144 ( 61%)         11640 ( 1%)
    m1: nsd5                  2097152        1 Yes      Yes         1241088 ( 59%)         11640 ( 1%)
    m1:                 -------------                         -------------------- -------------------
    m1: (pool total)         10485760                               6361088 ( 61%)         56920 ( 1%)
    m1:
    m1:                 =============                         ==================== ===================
    m1: (total)              10485760                               6361088 ( 61%)         56920 ( 1%)
    m1:
    m1: Inode Information
    m1: -----------------
    m1: Number of used inodes:            4106
    m1: Number of free inodes:          103414
    m1: Number of allocated inodes:     107520
    m1: Maximum number of inodes:       107520
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0
    m1: + /vagrant/install/script-07.sh VirtualBox
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/install/script-07.sh
    m1: ===> Tune sensors for demo environment
    m1: ===>
    m1: =========================================================================================
    m1: ===> Tune sensors for demo environment
    m1: + set -e
    m1: + echo '===> Tune sensors for demo environment'
    m1: + sudo mmperfmon config update GPFSPool.restrict=m1.example.com GPFSFileset.restrict=m1.example.com DiskFree.period=300 GPFSFilesetQuota.period=300 GPFSDiskCap.period=300
    m1: mmperfmon: mmsdrfs propagation completed.
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0
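The sensor tuning above restricts the GPFSPool and GPFSFileset sensors to m1 and lengthens the collection periods for the capacity-related sensors. To double-check the resulting sensor configuration on the running VM, a minimal follow-up (not part of the provisioning scripts) is:

$ vagrant ssh
$ sudo mmperfmon config show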
==> m1: Running provisioner: Configure Spectrum Scale for demo purposes (shell)...
    m1: Running: script: Configure Spectrum Scale for demo purposes
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/demo/script.sh
    m1: ===> Perform all steps to configure Spectrum Scale for demo purposes
    m1: ===>
    m1: =========================================================================================
    m1: + set -e
    m1: + /vagrant/demo/script-01.sh
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/demo/script-01.sh
    m1: ===> Show Spectrum Scale filesystems
    m1: ===>
    m1: =========================================================================================
    m1: ===> Show the global mount status for the whole Spectrum Scale cluster
    m1: + set -e
    m1: + echo '===> Show the global mount status for the whole Spectrum Scale cluster'
    m1: + mmlsmount all
    m1: File system fs1 is mounted on 1 nodes.
    m1: ==> Show the default mount point managed by Spectrum Scale
    m1: + echo '==> Show the default mount point managed by Spectrum Scale'
    m1: + mmlsfs fs1 -T
    m1: flag                value                    description
    m1: ------------------- ------------------------ -----------------------------------
    m1:  -T                 /ibm/fs1                 Default mount point
    m1: ===> Show the local mount status on the current node
    m1: + echo '===> Show the local mount status on the current node'
    m1: fs1 on /ibm/fs1 type gpfs (rw,relatime,seclabel)
    m1: ===> Show content of all Spectrum Scale filesystems
    m1: + mount
    m1: + grep /ibm/
    m1: + echo '===> Show content of all Spectrum Scale filesystems'
    m1: + find /ibm/
    m1: /ibm/
    m1: /ibm/fs1
    m1: /ibm/fs1/.snapshots
    m1: ==> Show all Spectrum Scale filesystems using the REST API
    m1: + echo '==> Show all Spectrum Scale filesystems using the REST API'
    m1: + curl -k -s -S -X GET --header 'Accept: application/json' -u admin:admin001 https://localhost/scalemgmt/v2/filesystems/
    m1: Error 401: SRVE0295E: Error reported: 401
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0
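The curl call above ends in Error 401, which suggests the GUI admin account (admin/admin001 in these scripts) was not yet accepted at this point of the run, while the REST API itself is reachable. Once the credentials work, individual filesystems can be queried the same way. A hedged example, reusing the admin:admin001 credentials assumed by the demo scripts:

$ curl -k -s -S -X GET --header 'Accept: application/json' \
       -u admin:admin001 https://localhost/scalemgmt/v2/filesystems/fs1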
    m1: + /vagrant/demo/script-02.sh
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/demo/script-02.sh
    m1: ===> Add second storage pool to filesystem fs1
    m1: ===>
    m1: =========================================================================================
    m1: ===> Show storage pools of filesystem fs1
    m1: + set -e
    m1: + echo '===> Show storage pools of filesystem fs1'
    m1: + mmlspool fs1
    m1: Storage pools in file system at '/ibm/fs1':
    m1: Name                    Id   BlkSize Data Meta Total Data in (KB)   Free Data in (KB)   Total Meta in (KB)    Free Meta in (KB)
    m1: system                   0      4 MB  yes  yes       10485760        6361088 ( 61%)       10485760        6414336 ( 61%)
    m1: ===> Show usage of filesystem fs1
    m1: + echo '===> Show usage of filesystem fs1'
    m1: + mmdf fs1
    m1: disk                disk size  failure holds    holds           free in KB          free in KB
    m1: name                    in KB    group metadata data        in full blocks        in fragments
    m1: --------------- ------------- -------- -------- ----- -------------------- -------------------
    m1: Disks in storage pool: system (Maximum disk size allowed is 29.12 GB)
    m1: nsd1                  2097152        1 Yes      Yes         1273856 ( 61%)         11384 ( 1%)
    m1: nsd2                  2097152        1 Yes      Yes         1269760 ( 61%)         11128 ( 1%)
    m1: nsd3                  2097152        1 Yes      Yes         1290240 ( 62%)         11128 ( 1%)
    m1: nsd4                  2097152        1 Yes      Yes         1286144 ( 61%)         11640 ( 1%)
    m1: nsd5                  2097152        1 Yes      Yes         1241088 ( 59%)         11640 ( 1%)
    m1:                 -------------                         -------------------- -------------------
    m1: (pool total)         10485760                               6361088 ( 61%)         56920 ( 1%)
    m1:
    m1:                 =============                         ==================== ===================
    m1: (total)              10485760                               6361088 ( 61%)         56920 ( 1%)
    m1:
    m1: Inode Information
    m1: -----------------
    m1: Number of used inodes:            4106
    m1: Number of free inodes:          103414
    m1: Number of allocated inodes:     107520
    m1: Maximum number of inodes:       107520
    m1: ===> Show the stanza file that describe the new disks
    m1: + echo '===> Show the stanza file that describe the new disks'
    m1: + cat /vagrant/files/spectrumscale/stanza-fs1-capacity
    m1: %nsd: device=/dev/sdg
    m1: nsd=nsd6
    m1: servers=m1
    m1: usage=dataOnly
    m1: failureGroup=1
    m1: pool=capacity
    m1:
    m1: %nsd: device=/dev/sdh
    m1: nsd=nsd7
    m1: servers=m1
    m1: usage=dataOnly
    m1: failureGroup=1
    m1: pool=capacity
    m1: ===> Add NSDs to new capacity storage pool
    m1: + echo '===> Add NSDs to new capacity storage pool'
    m1: + sudo mmadddisk fs1 -F /vagrant/files/spectrumscale/stanza-fs1-capacity
    m1:
    m1: The following disks of fs1 will be formatted on node m1:
    m1:     nsd6: size 10240 MB
    m1:     nsd7: size 10240 MB
    m1: Extending Allocation Map
    m1: Creating Allocation Map for storage pool capacity
    m1: Flushing Allocation Map for storage pool capacity
    m1: Disks up to size 322.37 GB can be added to storage pool capacity.
    m1: Checking Allocation Map for storage pool capacity
    m1: Completed adding disks to file system fs1.
    m1: mmadddisk: mmsdrfs propagation completed.
    m1: ===> Show storage pools of filesystem fs1
    m1: + echo '===> Show storage pools of filesystem fs1'
    m1: + mmlspool fs1
    m1: Storage pools in file system at '/ibm/fs1':
    m1: Name                    Id   BlkSize Data Meta Total Data in (KB)   Free Data in (KB)   Total Meta in (KB)    Free Meta in (KB)
    m1: system                   0      4 MB  yes  yes       10485760        6348800 ( 61%)       10485760        6402048 ( 61%)
    m1: capacity             65537      4 MB  yes   no       20971520              0 (  0%)              0              0 (  0%)
    m1: ===> Show usage of filesystem fs1
    m1: + echo '===> Show usage of filesystem fs1'
    m1: + mmdf fs1
    m1: disk                disk size  failure holds    holds           free in KB          free in KB
    m1: name                    in KB    group metadata data        in full blocks        in fragments
    m1: --------------- ------------- -------- -------- ----- -------------------- -------------------
    m1: Disks in storage pool: system (Maximum disk size allowed is 29.12 GB)
    m1: nsd1                  2097152        1 Yes      Yes         1273856 ( 61%)         11384 ( 1%)
    m1: nsd2                  2097152        1 Yes      Yes         1269760 ( 61%)         11128 ( 1%)
    m1: nsd3                  2097152        1 Yes      Yes         1286144 ( 61%)         11128 ( 1%)
    m1: nsd4                  2097152        1 Yes      Yes         1282048 ( 61%)         11640 ( 1%)
    m1: nsd5                  2097152        1 Yes      Yes         1236992 ( 59%)         11640 ( 1%)
    m1:                 -------------                         -------------------- -------------------
    m1: (pool total)         10485760                               6348800 ( 61%)         56920 ( 1%)
    m1:
    m1: Disks in storage pool: capacity (Maximum disk size allowed is 322.37 GB)
    m1: nsd6                 10485760        1 No       Yes        10412032 ( 99%)          8056 ( 0%)
    m1: nsd7                 10485760        1 No       Yes        10412032 ( 99%)          8056 ( 0%)
    m1:                 -------------                         -------------------- -------------------
    m1: (pool total)         20971520                              20824064 ( 99%)         16112 ( 0%)
    m1:
    m1:                 =============                         ==================== ===================
    m1: (data)               31457280                              27172864 ( 86%)         73032 ( 0%)
    m1: (metadata)           10485760                               6348800 ( 61%)         56920 ( 1%)
    m1:                 =============                         ==================== ===================
    m1: (total)              31457280                              27172864 ( 86%)         73032 ( 0%)
    m1:
    m1: Inode Information
    m1: -----------------
    m1: Number of used inodes:            4106
    m1: Number of free inodes:          103414
    m1: Number of allocated inodes:     107520
    m1: Maximum number of inodes:       107520
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0
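mmadddisk creates the capacity pool implicitly because the stanza file assigns pool=capacity to the new NSDs. The same pattern applies to any further disk; a minimal sketch, assuming a hypothetical spare device /dev/sdi and NSD name nsd8 that are not part of this Vagrant environment:

%nsd: device=/dev/sdi
nsd=nsd8
servers=m1
usage=dataOnly
failureGroup=1
pool=capacity

$ sudo mmadddisk fs1 -F /path/to/extra-stanza-file
$ mmlsdisk fs1 -L    # verify the pool assignment of each disk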
    m1: + /vagrant/demo/script-03.sh
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/demo/script-03.sh
    m1: ===> Create placement policy for filesystem fs1
    m1: ===>
    m1: =========================================================================================
    m1: ===> Show file that contains the placement rules
    m1: + set -e
    m1: + echo '===> Show file that contains the placement rules'
    m1: + cat /vagrant/files/spectrumscale/fs1-placement-policy
    m1: RULE 'Files ending with .hot to system pool'
    m1:     SET POOL 'system'
    m1:         WHERE (LOWER(NAME) LIKE '%.hot')
    m1:
    m1: RULE 'All other files to capacity pool'
    m1:     SET POOL 'capacity'
    m1: ===> Activate placement rules
    m1: + echo '===> Activate placement rules'
    m1: + sudo mmchpolicy fs1 /vagrant/files/spectrumscale/fs1-placement-policy
    m1: Validated policy 'fs1-placement-policy': Parsed 2 policy rules.
    m1: Policy `fs1-placement-policy' installed and broadcast to all nodes.
    m1: ===> Show all active rules
    m1: + echo '===> Show all active rules'
    m1: + mmlspolicy fs1
    m1: Policy for file system '/dev/fs1':
    m1:    Installed by vagrant@m1 on Wed Nov 11 10:58:22 2020.
    m1:    First line of policy 'fs1-placement-policy' is:
    m1: RULE 'Files ending with .hot to system pool'
    m1: ===> Create example directory to demonstrate placement rules
    m1: + echo '===> Create example directory to demonstrate placement rules'
    m1: + sudo mkdir -p /ibm/fs1/examples/placement_policy
    m1: ===> Create file that will be placed in the system storage pool
    m1: + echo '===> Create file that will be placed in the system storage pool'
    m1: + sudo bash -c 'echo "This file will be placed in the system storage pool" > /ibm/fs1/examples/placement_policy/file.hot'
    m1: ===> Create file that will be placed in the capacity storage pool
    m1: + echo '===> Create file that will be placed in the capacity storage pool'
    m1: + sudo bash -c 'echo "This file will be placed in the capacity storage pool" > /ibm/fs1/examples/placement_policy/file.txt'
    m1: ===> Show that hot file is placed in the system storage pool
    m1: + echo '===> Show that hot file is placed in the system storage pool'
    m1: + grep 'storage pool name'
    m1: + mmlsattr -L /ibm/fs1/examples/placement_policy/file.hot
    m1: storage pool name:    system
    m1: ===> Show that default file is placed in the capacity storage pool
    m1: + echo '===> Show that default file is placed in the capacity storage pool'
    m1: + grep 'storage pool name'
    m1: + mmlsattr -L /ibm/fs1/examples/placement_policy/file.txt
    m1: storage pool name:    capacity
    m1: ===> Show that Spectrum Scale storage pools are not visible to end users
    m1: + echo '===> Show that Spectrum Scale storage pools are not visible to end users'
    m1: + ls -la /ibm/fs1/examples/placement_policy
    m1: total 2
    m1: drwxr-xr-x. 2 root root 4096 Nov 11 10:58 .
    m1: drwxr-xr-x. 3 root root 4096 Nov 11 10:58 ..
    m1: -rw-r--r--. 1 root root   52 Nov 11 10:58 file.hot
    m1: -rw-r--r--. 1 root root   54 Nov 11 10:58 file.txt
    m1: + wc -l /ibm/fs1/examples/placement_policy/file.hot /ibm/fs1/examples/placement_policy/file.txt
    m1:   1 /ibm/fs1/examples/placement_policy/file.hot
    m1:   1 /ibm/fs1/examples/placement_policy/file.txt
    m1:   2 total
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0
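Placement rules only decide where newly created files land; files that already exist can be moved between pools afterwards with the policy engine. A hedged sketch of such a migration, assuming a hypothetical rule file /tmp/fs1-migrate-policy that is not part of this repository:

RULE 'cold files to capacity pool' MIGRATE FROM POOL 'system' TO POOL 'capacity'
    WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30

$ sudo mmapplypolicy fs1 -P /tmp/fs1-migrate-policy -I yes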
    m1: + /vagrant/demo/script-05.sh
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/demo/script-05.sh
    m1: ===> Create filesets and some example data
    m1: ===>
    m1: =========================================================================================
    m1: ===> Create example groups
    m1: + set -e
    m1: + echo '===> Create example groups'
    m1: + sudo groupadd flowers
    m1: + sudo groupadd pets
    m1: ===> Creating example users
    m1: + echo '===> Creating example users'
    m1: + sudo useradd admin_flowers -g flowers
    m1: + sudo useradd daffodils -g flowers
    m1: + sudo useradd roses -g flowers
    m1: + sudo useradd tulips -g flowers
    m1: + sudo useradd admin_pets -g pets
    m1: + sudo useradd cats -g pets
    m1: + sudo useradd dogs -g pets
    m1: + sudo useradd hamsters -g pets
    m1: ===> Create Spectrum Scale Filesets
    m1: + echo '===> Create Spectrum Scale Filesets'
    m1: + sudo mmcrfileset fs1 pets -t 'Cute Pets'
    m1: Fileset pets created with id 1 root inode 76547.
    m1: + sudo mmcrfileset fs1 flowers -t 'Lovely Flowers'
    m1: Fileset flowers created with id 2 root inode 76548.
    m1: ===> Link Spectrum Scale Filesets
    m1: + echo '===> Link Spectrum Scale Filesets'
    m1: + sudo mmlinkfileset fs1 pets -J /ibm/fs1/pets
    m1: Fileset pets linked at /ibm/fs1/pets
    m1: + sudo mmlinkfileset fs1 flowers -J /ibm/fs1/flowers
    m1: Fileset flowers linked at /ibm/fs1/flowers
    m1: ===> Create directories for users
    m1: + echo '===> Create directories for users'
    m1: + sudo mkdir /ibm/fs1/flowers/daffodils
    m1: + sudo mkdir /ibm/fs1/flowers/roses
    m1: + sudo mkdir /ibm/fs1/flowers/tulips
    m1: + sudo mkdir /ibm/fs1/pets/cats
    m1: + sudo mkdir /ibm/fs1/pets/dogs
    m1: + sudo mkdir /ibm/fs1/pets/hamsters
    m1: ===> Create some files in each user directory
    m1: + echo '===> Create some files in each user directory'
    m1: + inc=3
    m1: + for dir in '/ibm/fs1/*/*'
    m1: + inc=4
    m1: + num_files=14
    m1: + cur_file=0
    m1: + '[' 0 -lt 14 ']'
    m1: + cur_file=1
    m1: + num_blocks=27
    m1: + sudo dd if=/dev/zero of=/ibm/fs1/examples/placement_policy/file1 bs=100K count=27
    m1: + '[' 1 -lt 14 ']'
    m1: + cur_file=2
    m1: + num_blocks=44
    m1: + sudo dd if=/dev/zero of=/ibm/fs1/examples/placement_policy/file2 bs=100K count=44
...
    m1: + sudo touch /ibm/fs1/examples/exceed_error_threshold/file950
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0
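The filesets created here are also the natural place to attach quotas, which the QUOTA report task in script-99.sh then picks up. A minimal sketch, assuming quota enforcement is already enabled on fs1 (the demo scripts do not show that step):

$ sudo mmsetquota fs1:pets --block 1G:2G --files 10000:20000
$ mmlsquota -j pets fs1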
    m1: + /vagrant/demo/script-99.sh
    m1: =========================================================================================
    m1: ===>
    m1: ===> Running /vagrant/demo/script-99.sh
    m1: ===> Update the capacity reports
    m1: ===>
    m1: =========================================================================================
    m1: + set -e
    m1: ===> Initialize quota database
    m1: + echo '===> Initialize quota database'
    m1: + sudo mmcheckquota fs1
    m1: fs1: Start quota check
    m1:   12 % complete on Wed Nov 11 11:21:41 2020
    m1:   24 % complete on Wed Nov 11 11:21:42 2020
    m1:   36 % complete on Wed Nov 11 11:21:42 2020
    m1:   68 % complete on Wed Nov 11 11:21:42 2020
    m1:  100 % complete on Wed Nov 11 11:21:42 2020
    m1: Finished scanning the inodes for fs1.
    m1: Merging results from scan.
    m1: mmcheckquota: Command completed.
    m1: ===> Update capacity reports
    m1: + echo '===> Update capacity reports'
    m1: + sudo /usr/lpp/mmfs/gui/cli/runtask QUOTA
    m1: EFSSG1000I The command completed successfully.
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0
    m1: ===> Script completed successfully!
    m1: + echo '===> Script completed successfully!'
    m1: + exit 0

==> m1: Machine 'm1' has a post `vagrant up` message. This is a message
==> m1: from the creator of the Vagrantfile, and not from Vagrant itself:
==> m1:
==> m1: --------------------------------------------------------------------------
==> m1:
==> m1: Created virtual environment for IBM Spectrum Scale.
==> m1:
==> m1: User Guide:
==> m1: https://github.com/IBM/SpectrumScaleVagrant/blob/master/README.md
==> m1:
==> m1: To log on to the management node, execute:
==> m1: vagrant ssh
==> m1:
==> m1: To connect to the Spectrum Scale GUI, open in a web browser:
==> m1: https://localhost:8888
==> m1:
==> m1: --------------------------------------------------------------------------
==> m1:
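With the machine up, a quick way to confirm that the cluster and its monitored components report healthy (a follow-up check, not shown in the provisioning output above) is:

$ vagrant ssh
$ sudo mmhealth cluster show
$ sudo mmhealth node show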
troppens commented 3 years ago

@neikei - Thanks for providing this enhancement!