shiftstack / dev-install


overcloud-resource-registry-puppet.yaml : 'dict object' has no attribute 'name' #85

Closed · flavio-fernandes closed this issue 3 years ago

flavio-fernandes commented 3 years ago

While deploying on a CentOS 8 system.

I am following the instructions, but it is still not working.

Does the host we are deploying into have to have a DNS name?

$ cat inventory.yaml
all:
  hosts:
    standalone:
      ansible_host: 10.18.57.27
      ansible_user: root

Lenovo ~/bigvm/dev-install.git on bigvm.wip.2*
$ time make osp_full
...
TASK [tripleo.operator.tripleo_deploy : Standalone deploy] ***********************************************************************************************************
fatal: [standalone]: FAILED! => {"ansible_job_id": "260165930829.8223", "changed": false, "cmd": "sudo openstack tripleo deploy  --templates $DEPLOY_TEMPLATES --standalone  --yes --output-dir $DEPLOY_OUTPUT_DIR  --stack $DEPLOY_STACK --standalone-role $DEPLOY_STANDALONE_ROLE --timeout $DEPLOY_TIMEOUT_ARG -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml -e /home/stack/containers-prepare-parameters.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /home/stack/standalone_parameters.yaml -r $DEPLOY_ROLES_FILE      --deployment-user $DEPLOY_DEPLOYMENT_USER  --local-ip $DEPLOY_LOCAL_IP --control-virtual-ip $DEPLOY_CONTROL_VIP     --keep-running    >/home/stack/standalone_deploy.log 2>&1", "delta": "0:01:09.302738", "end": "2021-05-24 10:12:51.288898", "finished": 1, "msg": "non-zero return code", "rc": 1, "start": "2021-05-24 10:11:41.986160", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

PLAY RECAP ***********************************************************************************************************************************************************
standalone                 : ok=25   changed=8    unreachable=0    failed=1    skipped=27   rescued=0    ignored=0

Makefile:69: recipe for target 'install_stack' failed
** Handling template files **
jinja2 rendering normal template overcloud-resource-registry-puppet.j2.yaml
rendering j2 template to file: /home/stack/tripleo-heat-installer-templates/./overcloud-resource-registry-puppet.yaml
Error rendering template /home/stack/tripleo-heat-installer-templates/./overcloud-resource-registry-puppet.yaml : 'dict object' has no attribute 'name'
Traceback (most recent call last):
  File "/usr/share/openstack-tripleo-heat-templates/tools/process-templates.py", line 98, in _j2_render_to_file
    r_template = template.render(**j2_data)
  File "/usr/lib/python3.6/site-packages/jinja2/asyncsupport.py", line 76, in render
    return original_render(self, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 1008, in render
    return self.environment.handle_exception(exc_info, True)
  File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 780, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/lib/python3.6/site-packages/jinja2/_compat.py", line 37, in reraise
    raise value.with_traceback(tb)
  File "<template>", line 14, in top-level template code
  File "/usr/lib/python3.6/site-packages/jinja2/environment.py", line 430, in getattr
    return getattr(obj, attribute)
jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'name'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/share/openstack-tripleo-heat-templates/tools/process-templates.py", line 413, in <module>
    network_data_path, (not opts.safe), opts.dry_run)
  File "/usr/share/openstack-tripleo-heat-templates/tools/process-templates.py", line 317, in process_templates
    overwrite, dry_run)
  File "/usr/share/openstack-tripleo-heat-templates/tools/process-templates.py", line 103, in _j2_render_to_file
    raise Exception(error_msg)
Exception: Error rendering template /home/stack/tripleo-heat-installer-templates/./overcloud-resource-registry-puppet.yaml : 'dict object' has no attribute 'name'
Problems generating templates.
Not cleaning working directory /home/stack/tripleo-heat-installer-templates
Not cleaning ansible directory /home/stack/standalone-ansible-1mo7pteb
Install artifact is located at /home/stack/standalone-install-20210524101147.tar.bzip2

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Deployment Failed!

ERROR: Heat log files: /var/log/heat-launcher/undercloud_deploy-2jbepa30

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Exception: Problems generating templates.
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 1443, in take_action
    self._standalone_deploy(parsed_args)
  File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 1276, in _standalone_deploy
    parsed_args)
  File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 754, in _deploy_tripleo_heat_templates
    roles_file_path, networks_file_path, parsed_args)
  File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 614, in _setup_heat_environments
    output_dir=self.tht_render)
  File "/usr/lib/python3.6/site-packages/tripleoclient/utils.py", line 1796, in jinja_render_files
    raise exceptions.DeploymentError(msg)
tripleoclient.exceptions.DeploymentError: Problems generating templates.
None
Problems generating templates.
Could not clean up: 'ClientManager' object has no attribute 'sdk_connection'
[root@standalone stack]#
flavio-fernandes commented 3 years ago

Hi @EmilienM et al.! Can you tell from that if I'm doing something wrong? ^^ Maybe I'm having bad luck with the tripleo version that dev-install is using?

mdbooth commented 3 years ago

You're not doing anything obviously wrong.

If you want to try working round whatever problem you're hitting, is it worth trying a RHEL/OSP installation? This doesn't require any entitlements if you're deploying internally.

flavio-fernandes commented 3 years ago

> You're not doing anything obviously wrong.
>
> If you want to try working round whatever problem you're hitting, is it worth trying a RHEL/OSP installation? This doesn't require any entitlements if you're deploying internally.

I tried rhel8... different failure, same outcome:

...
2021-05-24 19:47:27.952 7671 INFO migrate.versioning.api [-] 82 -> 83...
2021-05-24 19:47:27.954 7671 INFO migrate.versioning.api [-] done
2021-05-24 19:47:27.954 7671 INFO migrate.versioning.api [-] 83 -> 84...
2021-05-24 19:47:27.956 7671 INFO migrate.versioning.api [-] done
2021-05-24 19:47:27.956 7671 INFO migrate.versioning.api [-] 84 -> 85...
2021-05-24 19:47:27.958 7671 INFO migrate.versioning.api [-] done
2021-05-24 19:47:27.959 7671 INFO migrate.versioning.api [-] 85 -> 86...
2021-05-24 19:47:27.984 7671 INFO migrate.versioning.api [-] done
** Handling template files **
Traceback (most recent call last):
  File "/usr/share/openstack-tripleo-heat-templates/tools/process-templates.py", line 366, in <module>
    network_data_path, (not opts.safe), opts.dry_run)
  File "/usr/share/openstack-tripleo-heat-templates/tools/process-templates.py", line 113, in process_templates
    role_data = yaml.safe_load(role_data_file)
  File "/usr/lib64/python3.6/site-packages/yaml/__init__.py", line 94, in safe_load
    return load(stream, SafeLoader)
  File "/usr/lib64/python3.6/site-packages/yaml/__init__.py", line 72, in load
    return loader.get_single_data()
  File "/usr/lib64/python3.6/site-packages/yaml/constructor.py", line 35, in get_single_data
    node = self.get_single_node()
  File "/usr/lib64/python3.6/site-packages/yaml/composer.py", line 35, in get_single_node
    if not self.check_event(StreamEndEvent):
  File "/usr/lib64/python3.6/site-packages/yaml/parser.py", line 98, in check_event
    self.current_event = self.state()
  File "/usr/lib64/python3.6/site-packages/yaml/parser.py", line 143, in parse_implicit_document_start
    StreamEndToken):
  File "/usr/lib64/python3.6/site-packages/yaml/scanner.py", line 116, in check_token
    self.fetch_more_tokens()
  File "/usr/lib64/python3.6/site-packages/yaml/scanner.py", line 252, in fetch_more_tokens
    return self.fetch_plain()
  File "/usr/lib64/python3.6/site-packages/yaml/scanner.py", line 676, in fetch_plain
    self.tokens.append(self.scan_plain())
  File "/usr/lib64/python3.6/site-packages/yaml/scanner.py", line 1299, in scan_plain
    "Please check http://pyyaml.org/wiki/YAMLColonInFlowContext for details.")
yaml.scanner.ScannerError: while scanning a plain scalar
  in "/home/stack/tripleo_standalone_role.yaml", line 1, column 347
found unexpected ':'
  in "/home/stack/tripleo_standalone_role.yaml", line 1, column 351
Please check http://pyyaml.org/wiki/YAMLColonInFlowContext for details.
Problems generating templates.
Exception: Problems generating templates.
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 1282, in _standalone_deploy
    parsed_args)
  File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 761, in _deploy_tripleo_heat_templates
    roles_file_path, networks_file_path, parsed_args)
  File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 658, in _setup_heat_environments
    raise exceptions.DeploymentError(msg)
tripleoclient.exceptions.DeploymentError: Problems generating templates.
None
Not cleaning working directory /home/stack/tripleo-heat-installer-templates
Not cleaning ansible directory /home/stack/standalone-ansible-2gay9p_k
Install artifact is located at /home/stack/standalone-install-20210524194729.tar.bzip2

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Deployment Failed!

ERROR: Heat log files: /var/log/heat-launcher/undercloud_deploy-_out6csw

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Deployment failed.

I took a look at /home/stack/tripleo_standalone_role.yaml but could not see anything obvious at column 347.

 [{u'CountDefault': 1, u'description': u"A standalone role that a minimal set of services. This can be used for\ntesting in a single node configuration with the\n'openstack tripleo deploy --standalone' command or via an Undercloud using\n'openstack overcloud deploy'.\n", u'tags': [u'primary', u'controller', u'standalone'], u'ServicesDefault': [u'OS::TripleO::Services::Aide', u'OS::TripleO::Services::AodhApi', u'OS::TripleO::Services::AodhEvaluator', u'OS::TripleO::Services::AodhListener', u'OS::TripleO::Services::AodhNotifier', u'OS::TripleO::Services::AuditD', u'OS::TripleO::Services::BarbicanApi', u'OS::TripleO::Services::BarbicanBackendDogtag', u'OS::TripleO::Services::BarbicanBackendKmip', u'OS::TripleO::Services::BarbicanBackendPkcs11Crypto', u'OS::TripleO::Services::BarbicanBackendSimpleCrypto', u'OS::TripleO::Services::CACerts', u'OS::TripleO::Services::CeilometerAgentCentral', u'OS::TripleO::Services::CeilometerAgentNotification', u'OS::TripleO::Services::CephClient', u'OS::TripleO::Services::CephExternal', u'OS::TripleO::Services::CephGrafana', u'OS::TripleO::Services::CephMds', u'OS::TripleO::Services::CephMgr', u'OS::TripleO::Services::CephMon', u'OS::TripleO::Services::CephNfs', u'OS::TripleO::Services::CephRbdMirror', u'OS::TripleO::Services::CephRgw', u'OS::TripleO::Services::CephOSD', u'OS::TripleO::Services::CertmongerUser', u'OS::TripleO::Services::CinderApi', u'OS::TripleO::Services::CinderBackendDellEMCPowerFlex', u'OS::TripleO::Services::CinderBackendDellEMCPowermax', u'OS::TripleO::Services::CinderBackendDellEMCPowerStore', u'OS::TripleO::Services::CinderBackendDellEMCSc', u'OS::TripleO::Services::CinderBackendDellEMCUnity', u'OS::TripleO::Services::CinderBackendDellEMCVMAXISCSI', u'OS::TripleO::Services::CinderBackendDellEMCVNX', u'OS::TripleO::Services::CinderBackendDellEMCVxFlexOS', u'OS::TripleO::Services::CinderBackendDellEMCXtremio', u'OS::TripleO::Services::CinderBackendDellEMCXTREMIOISCSI', u'OS::TripleO::Services::CinderBackendDellPs', u'OS::TripleO::Services::CinderBackendDellSc', u'OS::TripleO::Services::CinderBackendNVMeOF', u'OS::TripleO::Services::CinderBackendPure', u'OS::TripleO::Services::CinderBackendNetApp', u'OS::TripleO::Services::CinderBackendScaleIO', u'OS::TripleO::Services::CinderBackendVRTSHyperScale', u'OS::TripleO::Services::CinderBackup', u'OS::TripleO::Services::CinderHPELeftHandISCSI', u'OS::TripleO::Services::CinderScheduler', u'OS::TripleO::Services::CinderVolume', u'OS::TripleO::Services::Clustercheck', u'OS::TripleO::Services::Collectd', u'OS::TripleO::Services::ComputeCeilometerAgent', u'OS::TripleO::Services::ContainerImagePrepare', u'OS::TripleO::Services::ContainersLogrotateCrond', u'OS::TripleO::Services::DesignateApi', u'OS::TripleO::Services::DesignateCentral', u'OS::TripleO::Services::DesignateMDNS', u'OS::TripleO::Services::DesignateProducer', u'OS::TripleO::Services::DesignateSink', u'OS::TripleO::Services::DesignateWorker', u'OS::TripleO::Services::Docker', u'OS::TripleO::Services::DockerRegistry', u'OS::TripleO::Services::Ec2Api', u'OS::TripleO::Services::Etcd', u'OS::TripleO::Services::ExternalSwiftProxy', u'OS::TripleO::Services::GlanceApi', u'OS::TripleO::Services::GnocchiApi', u'OS::TripleO::Services::GnocchiMetricd', u'OS::TripleO::Services::GnocchiStatsd', u'OS::TripleO::Services::HAproxy', u'OS::TripleO::Services::HeatApi', u'OS::TripleO::Services::HeatApiCfn', u'OS::TripleO::Services::HeatApiCloudwatch', u'OS::TripleO::Services::HeatEngine', u'OS::TripleO::Services::Horizon', u'OS::TripleO::Services::IpaClient', 
u'OS::TripleO::Services::Ipsec', u'OS::TripleO::Services::IronicApi', u'OS::TripleO::Services::IronicConductor', u'OS::TripleO::Services::IronicInspector', u'OS::TripleO::Services::IronicNeutronAgent', u'OS::TripleO::Services::IronicPxe', u'OS::TripleO::Services::Iscsid', u'OS::TripleO::Services::Keepalived', u'OS::TripleO::Services::Kernel', u'OS::TripleO::Services::Keystone', u'OS::TripleO::Services::LoginDefs', u'OS::TripleO::Services::ManilaApi', u'OS::TripleO::Services::ManilaBackendCephFs', u'OS::TripleO::Services::ManilaBackendIsilon', u'OS::TripleO::Services::ManilaBackendNetapp', u'OS::TripleO::Services::ManilaBackendUnity', u'OS::TripleO::Services::ManilaBackendVMAX', u'OS::TripleO::Services::ManilaBackendVNX', u'OS::TripleO::Services::ManilaScheduler', u'OS::TripleO::Services::ManilaShare', u'OS::TripleO::Services::MasqueradeNetworks', u'OS::TripleO::Services::Memcached', u'OS::TripleO::Services::MetricsQdr', u'OS::TripleO::Services::MistralApi', u'OS::TripleO::Services::MistralEngine', u'OS::TripleO::Services::MistralEventEngine', u'OS::TripleO::Services::MistralExecutor', u'OS::TripleO::Services::Multipathd', u'OS::TripleO::Services::MySQL', u'OS::TripleO::Services::MySQLClient', u'OS::TripleO::Services::NeutronApi', u'OS::TripleO::Services::NeutronBgpVpnApi', u'OS::TripleO::Services::NeutronBgpVpnBagpipe', u'OS::TripleO::Services::NeutronCorePlugin', u'OS::TripleO::Services::NeutronDhcpAgent', u'OS::TripleO::Services::NeutronL2gwAgent', u'OS::TripleO::Services::NeutronL2gwApi', u'OS::TripleO::Services::NeutronL3Agent', u'OS::TripleO::Services::NeutronLinuxbridgeAgent', u'OS::TripleO::Services::NeutronML2FujitsuCfab', u'OS::TripleO::Services::NeutronML2FujitsuFossw', u'OS::TripleO::Services::NeutronMetadataAgent', u'OS::TripleO::Services::NeutronOvsAgent', u'OS::TripleO::Services::NeutronSfcApi', u'OS::TripleO::Services::NeutronVppAgent', u'OS::TripleO::Services::NovaApi', u'OS::TripleO::Services::NovaCompute', u'OS::TripleO::Services::NovaConductor', u'OS::TripleO::Services::NovaIronic', u'OS::TripleO::Services::NovaLibvirt', u'OS::TripleO::Services::NovaMetadata', u'OS::TripleO::Services::NovaMigrationTarget', u'OS::TripleO::Services::NovaScheduler', u'OS::TripleO::Services::NovaVncProxy', u'OS::TripleO::Services::OVNController', u'OS::TripleO::Services::OVNDBs', u'OS::TripleO::Services::OVNMetadataAgent', u'OS::TripleO::Services::OctaviaApi', u'OS::TripleO::Services::OctaviaDeploymentConfig', u'OS::TripleO::Services::OctaviaHealthManager', u'OS::TripleO::Services::OctaviaHousekeeping', u'OS::TripleO::Services::OctaviaWorker', u'OS::TripleO::Services::OpenStackClients', u'OS::TripleO::Services::OsloMessagingNotify', u'OS::TripleO::Services::OsloMessagingRpc', u'OS::TripleO::Services::Pacemaker', u'OS::TripleO::Services::PankoApi', u'OS::TripleO::Services::PlacementApi', u'OS::TripleO::Services::Podman', u'OS::TripleO::Services::Rear', u'OS::TripleO::Services::Redis', u'OS::TripleO::Services::Rhsm', u'OS::TripleO::Services::Rsyslog', u'OS::TripleO::Services::RsyslogSidecar', u'OS::TripleO::Services::SaharaApi', u'OS::TripleO::Services::SaharaEngine', u'OS::TripleO::Services::Securetty', u'OS::TripleO::Services::Snmp', u'OS::TripleO::Services::Sshd', u'OS::TripleO::Services::SwiftDispersion', u'OS::TripleO::Services::SwiftProxy', u'OS::TripleO::Services::SwiftRingBuilder', u'OS::TripleO::Services::SwiftStorage', u'OS::TripleO::Services::Timesync', u'OS::TripleO::Services::Timezone', u'OS::TripleO::Services::Tmpwatch', u'OS::TripleO::Services::TripleoFirewall', 
u'OS::TripleO::Services::TripleoPackages', u'OS::TripleO::Services::Tuned', u'OS::TripleO::Services::Vpp', u'OS::TripleO::Services::Zaqar'], u'networks': {u'InternalApi': {u'subnet': u'internal_api_subnet'}, u'Storage': {u'subnet': u'storage_subnet'}, u'StorageMgmt': {u'subnet': u'storage_mgmt_subnet'}, u'External': {u'subnet': u'external_subnet'}, u'StorageNFS': {u'subnet': u'storage_nfs_subnet'}, u'Tenant': {u'subnet': u'tenant_subnet'}}, u'name': u'Standalone'}]
mdbooth commented 3 years ago

What's in your local-overrides.yaml? Do you have any other local changes?

Also, the tripleo_standalone_role.yaml above looks unusual: it contains Python string literals. Is that post-processed for posting here, or does it literally contain u prefixes, e.g.:

...u'CountDefault':...
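
For reference, a well-formed roles file is block-style YAML: a list of mappings, each carrying a name key (the attribute the Jinja error on your CentOS run complained about). A minimal sketch assembled from the fields visible in your dump, not the complete Standalone role:

- name: Standalone
  CountDefault: 1
  description: |
    A standalone role with a minimal set of services.
  tags:
    - primary
    - controller
    - standalone
  ServicesDefault:
    - OS::TripleO::Services::Keystone
    # ... full service list elided

A Python 2 repr of the same data (one long line of flow-style u'...' literals) is not parseable as YAML, which would match the ScannerError about a colon in flow context.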
EmilienM commented 3 years ago

Yeah, I suspect that you're overriding standalone_role_overrides and the data might be wrong. Please share all the steps that you did to deploy. Thanks
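
To illustrate the shape the data needs (a hypothetical sketch only; I haven't seen your overrides, and the exact schema is whatever dev-install's defaults define), an override like this in local-overrides.yaml would be structured role data rather than a pre-rendered string:

standalone_role_overrides:
  - name: Standalone
    CountDefault: 1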

flavio-fernandes commented 3 years ago

> Yeah, I suspect that you're overriding standalone_role_overrides and the data might be wrong. Please share all the steps that you did to deploy. Thanks

Thank you @EmilienM and @mdbooth !

The files I used are below / attached. Yes, the ...u'CountDefault':... does seem to be part of the generated file.

standalone_deploy.log tripleo_standalone_role.yaml

I created a RHEL 8 system with 32 GB RAM + a 100 GB disk (via Vagrant). I enabled nested virt and used a 'public' interface that gets a 10.x.x.x address. I added your GH pub keys to the system, so I am hoping you can access that VM.

As for launching it, I simply got the yaml files using the commands you mention in the readme file.

Info on the Ansible I'm using from my devel machine. I just realized it is using py2; I wonder if that is an issue.

$ ansible --version
ansible 2.9.21
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/home/ffernand/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.17 (default, Feb 27 2021, 15:10:58) [GCC 7.5.0]

local-overrides.yaml

standalone_host: dut.flaviof.dev
public_api: 10.18.57.25
control_plane_ip: 10.18.57.25

inventory.yaml

all:
  hosts:
    standalone:
      ansible_host: dut.flaviof.dev
      ansible_user: root
EmilienM commented 3 years ago

Seeing python version = 2.7.17 makes me think that you deployed RHEL 7.x. As documented, you need RHEL 8.4.

mdbooth commented 3 years ago

Here's mine (RHEL 8.3):

[stack@standalone ~]$ ansible --version
ansible 2.9.21
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/stack/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
flavio-fernandes commented 3 years ago

> Seeing python version = 2.7.17 makes me think that you deployed RHEL 7.x. As documented, you need RHEL 8.4.

Hmm... looks like the latest VM image I can use with Vagrant is RHEL 8.3, definitely not 7. Are you referring to the dut or the system where I type 'make config host=dut.flaviof.dev'?

mdbooth commented 3 years ago

RHEL 8.3 is fine. If you're inside the RH VPN then rhos-release will just upgrade it to whatever is required anyway.

flavio-fernandes commented 3 years ago

> RHEL 8.3 is fine. If you're inside the RH VPN then rhos-release will just upgrade it to whatever is required anyway.

$ ssh dut.flaviof.dev
Warning: Permanently added 'dut.flaviof.dev,10.18.57.25' (RSA) to the list of known hosts.
Last login: Tue May 25 16:04:13 2021 from 10.18.57.23
[root@standalone ~]# ansible --version
ansible 2.9.21
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.6/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.6.8 (default, Aug 18 2020, 08:33:21) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
[root@standalone ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 (Ootpa)
[root@standalone ~]#

mdbooth commented 3 years ago

Ah... I wonder if this relates to your local ansible! What are you running on your workstation?

flavio-fernandes commented 3 years ago

> Ah... I wonder if this relates to your local ansible! What are you running on your workstation?

Yeah, I am. Let me spin up a c8 VM with stream enabled and try launching it from there. Thank you for your pointers! I will let you know how that goes soon.

mdbooth commented 3 years ago

Please write up how you get on either way, btw. Vagrant is a really interesting use case!

mdbooth commented 3 years ago

One more thing: you can't make public_api the same as control_plane_ip any more. That was actually a bug. Probably easiest to leave control_plane_ip unset.

If you can assign multiple IPs to a single interface in vagrant then for convenience do that and use the second one as control_plane_ip.
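
For example, reusing the addresses from your earlier local-overrides.yaml (the second IP below is hypothetical):

standalone_host: dut.flaviof.dev
public_api: 10.18.57.25
# Either omit control_plane_ip entirely, or point it at a second IP
# on the same interface:
# control_plane_ip: 10.18.57.26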

flavio-fernandes commented 3 years ago

Hi @EmilienM and @mdbooth. I made it a little further after using a c8 VM as the system where I invoke make osp_full.

With that, tripleo_standalone_role.yaml no longer has the u prefixes like before. Woot! ;)

However, I hit another snag: libfacter was not found. Please make sure it was installed to the expected location.

Any chance that is a known issue, or a step I'm missing in the prep of the standalone host?

Your GH pub ssh key is there, in case you find it useful to ssh root@dut.flaviof.dev

standalone_deploy.log

2021-05-26 01:22:54.611602 |  The following node(s) had failures: standalone
2021-05-26 01:22:54.611868 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[WARNING]: Failure using method (v2_playbook_on_stats) in callback plugin
(<ansible.plugins.callback.validation_json.CallbackModule object at
0x7f3d07630c50>): [Errno 2] No such file or directory: 'None/52540091-5f5e-308b
-c1f6-000000000008_deploy_steps_playbook_2021-05-26T01:21:37.098972Z.json'
Exception: Deployment failed
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/tripleoclient/v1/tripleo_deploy.py", line 1345, in _standalone_deploy
    raise exceptions.DeploymentError('Deployment failed')
tripleoclient.exceptions.DeploymentError: Deployment failed
None
** Found ansible errors for standalone deployment! **
[
 [
  "Run puppet on the host to apply IPtables rules",
  {
   "msg": "non-zero return code",
   "cmd": "set +e\npuppet apply  --detailed-exitcodes --summarize --color=false    --modulepath '/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules' --tags 'tripleo::firewall::rule' -e 'if hiera('enable_load_balancer', true) { class {'::tripleo::haproxy': use_internal_certificates => false, manage_firewall => hiera('tripleo::firewall::manage_firewall', true), }}'\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n    exit 0\nfi\nexit $rc\n",
   "stdout": "",
   "stderr": "libfacter was not found. Please make sure it was installed to the expected location.",
   "rc": 1,
   "start": "2021-05-26 01:22:54.396331",
   "end": "2021-05-26 01:22:54.454158",
   "delta": "0:00:00.057827",
   "changed": true,
   "invocation": {
    "module_args": {
     "_raw_params": "set +e\npuppet apply  --detailed-exitcodes --summarize --color=false    --modulepath '/etc/puppet/modules:/opt/stack/puppet-modules:/usr/share/openstack-puppet/modules' --tags 'tripleo::firewall::rule' -e 'if hiera('enable_load_balancer', true) { class {'::tripleo::haproxy': use_internal_certificates => false, manage_firewall => hiera('tripleo::firewall::manage_firewall', true), }}'\nrc=$?\nset -e\nset +ux\nif [ $rc -eq 2 -o $rc -eq 0 ]; then\n    exit 0\nfi\nexit $rc\n",
     "_uses_shell": true,
     "warn": true,
     "stdin_add_newline": true,
     "strip_empty_ends": true,
     "argv": null,
     "chdir": null,
     "executable": null,
     "creates": null,
     "removes": null,
     "stdin": null
    }
   },
   "stdout_lines": [],
   "stderr_lines": [
    "libfacter was not found. Please make sure it was installed to the expected location."
   ],
   "_ansible_no_log": false
  }
 ]
]
Not cleaning working directory /home/stack/tripleo-heat-installer-templates
Not cleaning ansible directory /home/stack/standalone-ansible-qu7jzhi_
Install artifact is located at /home/stack/standalone-install-20210526012254.tar.bzip2
EmilienM commented 3 years ago

@flavio-fernandes you'll need CentOS 8 Stream; can you confirm that's what you have?

mdbooth commented 3 years ago

IIUC, Flavio is installing on RHEL 8.3, executing the ansible from a CentOS 8 VM?

flavio-fernandes commented 3 years ago

> IIUC, Flavio is installing on RHEL 8.3, executing the ansible from a CentOS 8 VM?

Yes, I am executing the make command from a c8 stream box, and the standalone box is 8.3. That is because there is no 8.4 yet from https://app.vagrantup.com/generic/boxes/rhel8

As @EmilienM suggested, I switched the standalone to c8 stream and that did take me further. However, a 110 GB disk is not enough:

/dev/vda1 111G 111G 38M 100% /

So I will have to increase that and retry. To be continued.... ;)

EmilienM commented 3 years ago

You can also disable Ceph so you don't need all that storage: ceph_enabled: false
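
For example, in your local-overrides.yaml (assuming that is where you keep your overrides):

# Skip the Ceph services and their disk-space requirement
ceph_enabled: false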

EmilienM commented 3 years ago

@mdbooth btw this is why I wasn't in favour of enabling Ceph by default ^^^^^ 😁😉

mdbooth commented 3 years ago

> @mdbooth btw this is why I wasn't in favour of enabling Ceph by default ^^^^^ 😁😉

Je ne regrette rien ("I regret nothing") 😜