TASK [common_baseline_check : Common | Check RAM size] **
TASK [common_baseline_check : Common | Check CPU architecture] **
TASK [common_baseline_check : Common | Validate OS distribution] ****
TASK [common_baseline_check : Common | (Optional) Check UTC Timezone] ***
TASK [common_baseline_check : Common | Make sure /data directory does not exist] ****
ok: [192.168.33.40]
TASK [common_baseline_check : fail] *****
TASK [common_baseline_check : Common | Make sure /host directory does not exist] ****
ok: [192.168.33.40]
TASK [common_baseline_check : fail] *****
TASK [common_baseline_check : Common | Make sure block device(s) exist on node] *****
ok: [192.168.33.40] => (item=/dev/sdb)
TASK [common_baseline_check : fail] *****
TASK [common_baseline_check : Common | Make sure block device(s) are at least 100GB] ****
TASK [common_baseline_check : Common | Make sure block device(s) are unpartitioned] *****
ok: [192.168.33.40] => (item=/dev/sdb)
TASK [common_baseline_check : fail] *****
TASK [common_baseline_check : Common | Check for listening layer 4 ports] ***
changed: [192.168.33.40]
TASK [common_baseline_check : Common | Report any conflicts with published ECS ports] ***
failed: [192.168.33.40] (item=[111, u'port 111/udp is listening on 0.0.0.0']) => {"changed": false, "failed": true, "item": [111, "port 111/udp is listening on 0.0.0.0"], "msg": "Port conflict with published ECS port: 111 ECS NFS | description: ECS must be the sole NFS provider on this system | support URL: https://github.com/EMCECS/ECS-CommunityEdition/issues/179"}
failed: [192.168.33.40] (item=[111, u'port 111/tcp is listening on 0.0.0.0']) => {"changed": false, "failed": true, "item": [111, "port 111/tcp is listening on 0.0.0.0"], "msg": "Port conflict with published ECS port: 111 ECS NFS | description: ECS must be the sole NFS provider on this system | support URL: https://github.com/EMCECS/ECS-CommunityEdition/issues/179"}
failed: [192.168.33.40] (item=[111, u'port 111/udp6 is listening on ::0']) => {"changed": false, "failed": true, "item": [111, "port 111/udp6 is listening on ::0"], "msg": "Port conflict with published ECS port: 111 ECS NFS | description: ECS must be the sole NFS provider on this system | support URL: https://github.com/EMCECS/ECS-CommunityEdition/issues/179"}
failed: [192.168.33.40] (item=[111, u'port 111/tcp6 is listening on ::0']) => {"changed": false, "failed": true, "item": [111, "port 111/tcp6 is listening on ::0"], "msg": "Port conflict with published ECS port: 111 ECS NFS | description: ECS must be the sole NFS provider on this system | support URL: https://github.com/EMCECS/ECS-CommunityEdition/issues/179"}
PLAY RECAP **
192.168.33.40 : ok=11 changed=1 unreachable=0 failed=1
Playbook run took 0 days, 0 hours, 0 minutes, 7 seconds
Operation failed.
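
All four failures are the same conflict: something is already bound to port 111 (the portmapper) on every stack. A quick diagnostic sketch to see which process owns the port before re-running step1 (on CentOS 7 the owner is normally rpcbind, but verify on your own node):

    # List every TCP/UDP listener on port 111 with its owning process
    ss -tulnp | grep -w 111

    # On CentOS 7, rpcbind is usually started on demand by its socket unit
    systemctl status rpcbind.socket rpcbind.service
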
Expected Behavior

Step 1 deployment completes successfully.

Actual Behavior

Step 1 deployment is failing: the preflight check reports conflicts on published ECS port 111 (NFS). The full deploy.yml and step1 output follow.
[root@localhost ECS-CommunityEdition]# cat deploy.yml
# deploy.yml reference implementation v2.8.0

# [Optional]
# By changing the license_accepted boolean value to "true" you are
# declaring your agreement to the terms of the license agreement
# contained in the license.txt file included with this software
# distribution.
licensing:
  license_accepted: false

autonames:
  custom:
    - ecs01
    - ecs02
    - ecs03
    - ecs04
    - ecs05
    - ecs06

# [Required]
# Deployment facts reference
facts:

  # [Required]
  # Node IP or resolvable hostname from which installations will be launched.
  # The only supported configuration is to install from the same node as the
  # bootstrap.sh script is run.
  # NOTE: if the install node is to be migrated into an island environment,
  # the hostname or IP address listed here should be the one in the
  # island environment.
  install_node: 192.168.33.40

  # [Required]
  # IPs of machines that will be whitelisted in the firewall and allowed
  # to access management ports of all nodes. If this is set to the
  # wildcard (0.0.0.0/0) then anyone can access management ports.
  management_clients:
    - 0.0.0.0/0

  # [Required]
  # These credentials must be the same across all nodes. Ansible uses these
  # credentials to gain initial access to each node in the deployment and
  # set up ssh public key authentication. If these are not correct, the
  # deployment will fail.
  ssh_defaults:

    # [Required]
    # Username to use when logging in to nodes
    ssh_username: admin

    # [Required]
    # Password to use with SSH login
    # Set to same value as ssh_username to enable SSH public key authentication
    ssh_password: ChangeMe

    # [Required when enabling SSH public key authentication]
    # Password to give to sudo when gaining root access.
    ansible_become_pass: ChangeMe

    # [Required]
    # Select the type of crypto to use when dealing with ssh public key
    # authentication. Valid values here are:
    #   - "rsa" (Default)
    #   - "ed25519"
    ssh_crypto: rsa

  # [Required]
  # Environment configuration for this deployment.
  node_defaults:
    dns_domain: local
    dns_servers:

    # [Optional]
    # VFS path to source of randomness
    # Defaults to /dev/urandom for speed considerations. If you prefer
    # /dev/random, put that here. If you have a /dev/srandom implementation
    # or special entropy hardware, you may use that too so long as it
    # implements a /dev/random type device.
    entropy_source: /dev/urandom

    # [Optional]
    # Picklist for node names.
    # Available options:
    #   - "moons" (ECS CE default)
    #   - "cities" (ECS SKU-flavored)
    #   - "custom" (uncomment and use the top-level autonames block to define these)
    autonaming: custom

    # [Optional]
    # If your ECS comes with differing default credentials, you can specify those here
    ecs_root_user: root
    ecs_root_pass: ChangeMe

  # [Optional]
  # Storage pool defaults. Configure to your liking.
  # All block devices that will be consumed by ECS on ALL nodes must be listed
  # under the ecs_block_devices option. This can be overridden by the storage
  # pool configuration. At least ONE (1) block device is REQUIRED for a
  # successful install. More is better.
  storage_pool_defaults:
    is_cold_storage_enabled: false
    is_protected: false
    description: Default storage pool description
    ecs_block_devices:

  # [Required]
  # Storage pool layout. You MUST have at least ONE (1) storage pool for a
  # successful install.
  storage_pools:
    - name: sp1
      members:

  # [Optional]
  # VDC defaults. Configure to your liking.
  virtual_data_center_defaults:
    description: Default virtual data center description

  # [Required]
  # Virtual data center layout. You MUST have at least ONE (1) VDC for a
  # successful install. Multi-VDC deployments are not yet implemented.
  virtual_data_centers:
    - name: vdc1
      members:

  # [Optional]
  # Replication group defaults. Configure to your liking.
  replication_group_defaults:
    description: Default replication group description
    enable_rebalancing: true
    allow_all_namespaces: true
    is_full_rep: false

  # [Optional, required for namespaces]
  # Replication group layout. You MUST have at least ONE (1) RG to
  # provision namespaces.
  replication_groups:
    - name: rg1
      members:

  # [Optional]
  # Management User defaults
  management_user_defaults:
    is_system_admin: false
    is_system_monitor: false

  # [Optional]
  # Management Users
  management_users:
    - username: monitor1
      password: ChangeMe
      options:
        is_system_monitor: true

  # [Optional]
  # Namespace defaults
  namespace_defaults:
    is_stale_allowed: false
    is_compliance_enabled: false

  # [Optional]
  # Namespace layout
  namespaces:
    - name: ns1
      replication_group: rg1
      administrators:

  # [Optional]
  # Object User defaults
  object_user_defaults:
    # Comma-separated list of Swift authorization groups
    swift_groups_list:
    # Lifetime of S3 secret key in minutes
    s3_expiry_time: 2592000

  # [Optional]
  # Object Users
  object_users:
    - username: object_user1
      namespace: ns1
      options:
        swift_password: ChangeMe
        s3_secret_key: ChangeMeChangeMeChangeMeChangeMeChangeMe

  # [Optional]
  # Bucket defaults
  bucket_defaults:
    namespace: ns1
    replication_group: rg1
    head_type: s3
    filesystem_enabled: False
    stale_allowed: False
    encryption_enabled: False
    owner: object_admin1

  # [Optional]
  # Bucket layout (optional)
  buckets:
==================================
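
For reference, the members lists print empty in the dump above. In a deploy.yml they chain node -> storage pool -> VDC -> replication group; a minimal sketch of the relevant stanzas, assuming 192.168.33.40 is the only data node (the real values are elided in the output above, so treat these as illustrative):

    storage_pools:
      - name: sp1
        members:
          - 192.168.33.40        # assumption: the lone data node, same as install_node
    virtual_data_centers:
      - name: vdc1
        members:
          - sp1                  # VDC members are storage pool names
    replication_groups:
      - name: rg1
        members:
          - vdc1                 # RG members are VDC names
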
[root@localhost ECS-CommunityEdition]# step1
PLAY [Common | Ping data nodes before doing anything else] **
TASK [ping] *****
ok: [192.168.33.40]
PLAY [Installer | Gather facts and slice into OS groups] ****
TASK [group_by] *****
ok: [192.168.33.40]
PLAY [CentOS 7 | Configure access] **
TASK [CentOS_7_configure_ssh : CentOS 7 | Distribute ed25519 ssh key] ***
TASK [CentOS_7_configure_ssh : CentOS 7 | Distribute rsa ssh key] ***
TASK [CentOS_7_configure_ssh : CentOS 7 | Disable SSH UseDNS] ***
ok: [192.168.33.40]
TASK [CentOS_7_configure_ssh : CentOS 7 | Disable requiretty] ***
ok: [192.168.33.40]
TASK [CentOS_7_configure_ssh : CentOS 7 | Disable sudo password reverification for admin group] *****
ok: [192.168.33.40]
TASK [CentOS_7_configure_ssh : CentOS 7 | Disable sudo password reverification for wheel group] *****
ok: [192.168.33.40]
TASK [firewalld_configure_access : Firewalld | Ensure service is started] ***
changed: [192.168.33.40]
TASK [firewalld_configure_access : Firewalld | Add install node to firewalld trusted zone] **
ok: [192.168.33.40]
TASK [firewalld_configure_access : Firewalld | Add all data nodes to firewalld trusted zone] ****
ok: [192.168.33.40] => (item=10.0.2.15)
ok: [192.168.33.40] => (item=192.168.33.40)
ok: [192.168.33.40] => (item=172.17.0.1)
TASK [firewalld_configure_access : Firewalld | Whitelist management prefixes] ***
ok: [192.168.33.40] => (item=0.0.0.0/0)
TASK [firewalld_configure_access : Firewalld | Add all public service ports to firewalld public zone] ***
ok: [192.168.33.40] => (item=3218/tcp)
ok: [192.168.33.40] => (item=9020-9025/tcp)
ok: [192.168.33.40] => (item=9040/tcp)
TASK [firewalld_configure_access : Firewalld | Ensure service is started] ***
changed: [192.168.33.40]
PLAY [Common | Configure hostnames] *****
TASK [common_set_hostname : include_vars] ***
ok: [192.168.33.40]
TASK [common_set_hostname : Common | Find node hostname] ****
ok: [192.168.33.40] => (item=(0, u'192.168.33.40'))
TASK [common_set_hostname : Common | Set node hostname] *****
ok: [192.168.33.40]
PLAY [Common | Configure /etc/hosts] ****
TASK [common_etc_hosts : Common | Add install node to /etc/hosts] ***
ok: [192.168.33.40] => (item=192.168.33.40)
TASK [common_etc_hosts : Common | Add data nodes to /etc/hosts] *****
ok: [192.168.33.40] => (item=192.168.33.40)
PLAY [Common | Test inter-node access] **
TASK [common_access_test : Common | Check node connectivity by IP] **
ok: [192.168.33.40] => (item=10.0.2.15)
ok: [192.168.33.40] => (item=192.168.33.40)
ok: [192.168.33.40] => (item=172.17.0.1)
TASK [common_access_test : Common | Check node connectivity by short name] **
ok: [192.168.33.40] => (item=luna)
TASK [common_access_test : Common | Check node connectivity by fqdn] ****
ok: [192.168.33.40] => (item=luna)
PLAY RECAP **
192.168.33.40 : ok=20 changed=2 unreachable=0 failed=0
Playbook run took 0 days, 0 hours, 0 minutes, 13 seconds
PLAY [Common | Ping data nodes before doing anything else] **
TASK [ping] *****
ok: [192.168.33.40]
PLAY [Installer | Slice nodes into OS groups] ***
TASK [group_by] *****
ok: [192.168.33.40]
PLAY [Installer | Perform preflight check] **
TASK [common_collect_facts : Common | Create custom facts directory] ****
ok: [192.168.33.40]
TASK [common_collect_facts : Common | Insert data_node.fact file] ***
ok: [192.168.33.40]
TASK [common_collect_facts : Common | Reload facts to pick up new items] ****
ok: [192.168.33.40]
TASK [common_baseline_check : include_vars] *****
ok: [192.168.33.40]
TASK [common_baseline_check : Common | Check RAM size] **
TASK [common_baseline_check : Common | Check CPU architecture] **
TASK [common_baseline_check : Common | Validate OS distribution] ****
TASK [common_baseline_check : Common | (Optional) Check UTC Timezone] ***
TASK [common_baseline_check : Common | Make sure /data directory does not exist] ****
ok: [192.168.33.40]
TASK [common_baseline_check : fail] *****
TASK [common_baseline_check : Common | Make sure /host directory does not exist] ****
ok: [192.168.33.40]
TASK [common_baseline_check : fail] *****
TASK [common_baseline_check : Common | Make sure block device(s) exist on node] *****
ok: [192.168.33.40] => (item=/dev/sdb)
TASK [common_baseline_check : fail] *****
TASK [common_baseline_check : Common | Make sure block device(s) are at least 100GB] ****
TASK [common_baseline_check : Common | Make sure block device(s) are unpartitioned] *****
ok: [192.168.33.40] => (item=/dev/sdb)
TASK [common_baseline_check : fail] *****
TASK [common_baseline_check : Common | Check for listening layer 4 ports] ***
changed: [192.168.33.40]
TASK [common_baseline_check : Common | Report any conflicts with published ECS ports] ***
failed: [192.168.33.40] (item=[111, u'port 111/udp is listening on 0.0.0.0']) => {"changed": false, "failed": true, "item": [111, "port 111/udp is listening on 0.0.0.0"], "msg": "Port conflict with published ECS port: 111 ECS NFS | description: ECS must be the sole NFS provider on this system | support URL: https://github.com/EMCECS/ECS-CommunityEdition/issues/179"}
failed: [192.168.33.40] (item=[111, u'port 111/tcp is listening on 0.0.0.0']) => {"changed": false, "failed": true, "item": [111, "port 111/tcp is listening on 0.0.0.0"], "msg": "Port conflict with published ECS port: 111 ECS NFS | description: ECS must be the sole NFS provider on this system | support URL: https://github.com/EMCECS/ECS-CommunityEdition/issues/179"}
failed: [192.168.33.40] (item=[111, u'port 111/udp6 is listening on ::0']) => {"changed": false, "failed": true, "item": [111, "port 111/udp6 is listening on ::0"], "msg": "Port conflict with published ECS port: 111 ECS NFS | description: ECS must be the sole NFS provider on this system | support URL: https://github.com/EMCECS/ECS-CommunityEdition/issues/179"}
failed: [192.168.33.40] (item=[111, u'port 111/tcp6 is listening on ::0']) => {"changed": false, "failed": true, "item": [111, "port 111/tcp6 is listening on ::0"], "msg": "Port conflict with published ECS port: 111 ECS NFS | description: ECS must be the sole NFS provider on this system | support URL: https://github.com/EMCECS/ECS-CommunityEdition/issues/179"}
PLAY RECAP **
192.168.33.40 : ok=11 changed=1 unreachable=0 failed=1
Playbook run took 0 days, 0 hours, 0 minutes, 7 seconds
Operation failed.
[root@localhost ECS-CommunityEdition]#
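
Given the support URL in the error (EMCECS/ECS-CommunityEdition issue #179), the preflight failure traces back to a service already holding port 111, which ECS needs for its own NFS stack. A possible remediation on CentOS 7, assuming rpcbind is the owner and nothing else on this node depends on rpcbind/NFS (verify with the status check above first):

    # Stop and disable rpcbind so port 111 is free for ECS
    systemctl stop rpcbind.socket rpcbind.service
    systemctl disable rpcbind.socket rpcbind.service

    # Confirm nothing is listening on 111 any more, then re-run the step
    ss -tulnp | grep -w 111 || echo "port 111 is free"
    step1
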