Closed: @piotrzarzycki21 closed this issue 4 months ago.
Some additional comments from @MarkProminic
demo-tasks is now frozen and will be archived by the end of this week.
It is replaced by: https://github.com/STARTcloud/hcl_domino_standalone_provisioner
demo-tasks was split up, and the above repo depends on the following repos:
- https://github.com/STARTcloud/core_provisioner
- https://github.com/STARTcloud/hcl_roles
- https://github.com/STARTcloud/startcloud_roles
The artifact you want from the releases page is hcl_domino_standalone_provisioner.zip.
The Hosts.template.yml is not currently created during the GitHub Actions build; that will be added shortly.
I foresee the first usable version for you to test with SHI being v0.1.24.
Since we don't have an immediate need to update demo-tasks, I suspect the next work for this will be to switch to hcl_domino_standalone_provisioner. There will be some updates required to support the new template, including:
- `show_console`
- `haproxy_ssl_redirect` (we may just use a default value for this)
- the `hcl_` prefix to the Domino provisioners

I was talking with @MarkProminic about updating the provisioners, but I see that we haven't yet updated to v0.1.22 for the provisioners, so I won't be able to update directly. In the meantime, I can make changes to the copy of the v0.1.20 provisioners in this repository.
@piotrzarzycki21, we can discuss the priorities on this issue in the meeting tomorrow.
We should also start thinking about how we can test updates to the provisioners. Generally these should not require changes to the SHI forms.
Also we need to think about how to automate the builds so that it is easy for the new https://github.com/STARTcloud/hcl_domino_standalone_provisioner releases to be added to the development builds for Super.Human.Installer.
These may be separate issues, but we should think about it as we make the updates for this issue.
@MarkProminic @JoelProminic It looks like without a meeting I won't be able to move forward with this. I don't understand most of the changes you made.
1) In the screenshot below, I have 0.1.22 on the left and 0.1.20 on the right. Why did the folder structure change? Should I go with the new one, or manually create "scripts" and "templates" and copy everything from 0.1.22 into "scripts"?
2) Where is Hosts.template.yml?
In theory I could look into each Ansible role and figure out what should be in my hosts.yml file.
OK, there is no way I can figure out what should be in Hosts.yml on my own. I need help with that to move forward.
@MarkProminic, we found a Hosts.yml template in the 0.1.22 release at provisioners/ansible/templates/Hosts.template.yml.j2, but we found this was not updated with the latest role names (and there may be other missing changes). Is there a different template we should be using for this?
Some links:
- Super.Human.Installer demo-tasks 0.1.20: https://github.com/Moonshine-IDE/Super.Human.Installer/blob/master/Assets/provisioners/demo-tasks/0.1.20/templates/Hosts.template.yml
- hcl_domino_standalone_provisioner 0.1.22 (has not changed since release): https://github.com/STARTcloud/hcl_domino_standalone_provisioner/blob/main/hcl_domino_standalone_provisioner/provisioners/ansible/templates/Hosts.template.yml.j2
We discussed this issue in the meeting today.
I went through the Hosts.template.yml.j2 and created this updated copy of Hosts.template.yml.j2:
```yaml
#jinja2:lstrip_blocks: True
# core_provisioner_version: {{ core_provisioner_version }}
# provisioner_name: {{ provisioner_name }}
# provisioner_version: {{ provisioner_version }}
---
hosts:
  -
    settings:
      hostname: ::SERVER_HOSTNAME:: # demo
      domain: ::SERVER_DOMAIN:: # startcloud.com
      server_id: '::SERVER_ID::' # Auto-generated
      vcpus: ::RESOURCES_CPU:: # 2
      memory: ::RESOURCES_RAM:: # 8G
      box: 'STARTcloud/debian12-server'
      box_version: 0.0.4
      os_type: 'Debian_64'
      provider-type: virtualbox
      firmware_type: UEFI
      consoleport: ::SERVER_ID:: # Auto-generated
      consolehost: 0.0.0.0
      setup_wait: 300
      vagrant_user_private_key_path: ./id_rsa
      vagrant_user: startcloud
      vagrant_user_pass: 'STARTcloud22@!'
      vagrant_insert_key: true
      ssh_forward_agent: true
    networks:
      - type: external
        address: ::NETWORK_ADDRESS:: # 192.168.2.15, This is ignored when dhcp4 is set to true, Provide user option
        netmask: ::NETWORK_NETMASK:: # 255.255.255.0, This is ignored when dhcp4 is set to true, Provide user option
        gateway: ::NETWORK_GATEWAY:: # 192.168.2.1, This is ignored when dhcp4 is set to true, Provide user option
        dhcp4: ::NETWORK_DHCP4:: # true, Provide user option in case they want static ip
        dhcp6: false # false
        bridge: ::NETWORK_BRIDGE:: # Blank, Provide user option
        mac: auto
        dns:
          - nameserver: ::NETWORK_DNS_NAMESERVER_1:: # 9.9.9.9
          - nameserver: ::NETWORK_DNS_NAMESERVER_2:: # 149.112.112.112
    #disks:
    #  boot:
    #    size: ::BOOT_DISK_SIZE::
    #  additional_disks:
    #    - volume_name: disk1
    #      size: ::ADDITIONAL_DISK_SIZE::
    #      port: 5
    # Moved to Hosts.rb, Here to document how to override, will be removed in future version once documented in README
    #vbox:
    #  directives:
    #    - directive: vrde
    #      value: 'on'
    provisioning:
      ansible.builtin.shell:
        enabled: false
        scripts:
          - './scripts/aliases.sh'
      ansible:
        enabled: true
        scripts:
          - local:
              - script: ansible/generate-playbook.yml
                ansible_python_interpreter: /usr/bin/python3
                compatibility_mode: 2.0
                install_mode: pip
                ssh_pipelining: true
                verbose: false
              - script: ansible/playbook.yml
                ansible_python_interpreter: /usr/bin/python3
                compatibility_mode: 2.0
                install_mode: pip
                ssh_pipelining: true
                verbose: false
    folders:
      - map: .
        to: /vagrant
        type: virtualbox
        disabled: true
        automount: true
        description: "Disable VBoxSF"
      - map: ./ansible/
        to: /vagrant/ansible/
        type: rsync
        args:
          - '--verbose'
          - '--archive'
          - '--delete'
          - '-z'
          - '--copy-links'
      - map: ./installers/
        to: /vagrant/installers/
        type: rsync
      - map: ./ssls/
        to: /secure/
        type: rsync
      - map: ./safe-id-to-cross-certify/
        to: /safe-id-to-cross-certify/
        type: rsync
    vars:
      ## You can set global role variables here, look in the defaults folders for hints as to variables used by roles
      # Domino Configuration Variables
      domino_organization: ::SERVER_ORGANIZATION:: # STARTcloud
      safe_notes_id: ::USER_SAFE_ID:: # SAFE.ids
      domino_admin_notes_id_password: "password"
      domino_server_clustermates: ::DOMINO_SERVER_CLUSTERMATES:: # 0
      # Additional server options
      #is_additional_server: ::DOMINO_IS_ADDITIONAL_INSTANCE:: false
      #use_existing_server_id: ::DOMINO_SERVER_CLUSTERMATE_ID_USE:: false
      #existing_server_id: ::DOMINO_SERVER_CLUSTERMATES_ID:: "demo1.id"
      #existing_server: ::DOMINO_SERVER_CLUSTERMATE_SERVER:: "demo0.startcloud.com"
      #existing_server_ip: ::DOMINO_SERVER_CLUSTERMATE_IP:: "192.168.2.227"
      ## When using the default: demo.startcloud.com as the hostname and domain, we use the default-signed.crt certificates to provide a valid SSL
      ## If the hostname and domain, ie demo.startcloud.com, do not match the certificate we provide (ie demo.startcloud.com in default-signed.crt), some services may not start (ie nomadweb)
      ## If users do not mind using a self-signed certificate for their development testing of their own domain, or are unable to replace the default-signed.crt files,
      ## they would set the below value to true so that the VM creates an SSL certificate with the valid hostname, so that when the service compares the hostname it is to listen on and
      ## the hostname the certificate is signed for, it matches.
      haproxy_ssl_redirect: true
      selfsigned_enabled: ::CERT_SELFSIGNED:: # false
      debug_all: true
      # Genesis Variables
      genesis_packages:
        - netmonitor
        - SuperHumanPortal
      # Domino Installer Variables
      #domino_hash: ::DOMINO_HASH:: # "4153dfbb571b1284ac424824aa0e25e4"
      domino_major_version: ::DOMINO_INSTALLER_MAJOR_VERSION:: # "12"
      domino_minor_version: ::DOMINO_INSTALLER_MINOR_VERSION:: # "0"
      domino_patch_version: ::DOMINO_INSTALLER_PATCH_VERSION:: # "2"
      # Domino fixpack Variables
      #domino_fp_hash: ::DOMINO_FP_HASH:: # "124153dfbb571b1284ac4248"
      #domino_server_installer_tar: ::DOMINO_INSTALLER:: # "Domino_12.0.2_Linux_English.tar"
      #domino_installer_fixpack_install: ::DOMINO_INSTALLER_FIXPACK_INSTALL:: # false
      #domino_fixpack_version: ::DOMINO_INSTALLER_FIXPACK_VERSION:: # FP1
      #domino_server_fixpack_tar: ::DOMINO_INSTALLER_FIXPACK:: # "Domino_1201FP1_Linux.tar"
      # Domino Hotfix Variables
      #domino_hf_hash: ::DOMINO_HF_HASH:: # "14153dfbb571b1284ac42482"
      domino_installer_hotfix_install: ::DOMINO_INSTALLER_HOTFIX_INSTALL:: # false
      domino_hotfix_version: ::DOMINO_INSTALLER_HOTFIX_VERSION:: # HF50
      domino_server_hotfix_tar: ::DOMINO_INSTALLER_HOTFIX:: # "1201HF50-linux64.tar"
      # Leap Variables
      #leap_hash: ::LEAP_HASH:: # "080235c0f0cce7cc3446e01ffccf0046"
      leap_archive: ::LEAP_INSTALLER:: # Leap-1.0.5.zip
      leap_version: ::LEAP_INSTALLER_VERSION:: # 1.0.5
      # Nomad Web Variables
      #nomadweb_hash: ::NOMADWEB_HASH:: # "044c7a71598f41cd3ddb88c5b4c9b403"
      nomadweb_archive: ::NOMADWEB_INSTALLER:: # nomad-server-1.0.8-for-domino-1202-linux.tgz
      nomadweb_version: ::NOMADWEB_VERSION:: # 1.0.8
      # Traveler Variables
      #traveler_hash: ::TRAVELER_HASH:: # "4a195e3282536de175a2979def40527d"
      traveler_archive: ::TRAVELER_INSTALLER:: # Traveler_12.0.2_Linux_ML.tar.gz
      traveler_base_version: ::TRAVELER_INSTALLER_VERSION:: # base
      traveler_fixpack_archive: ::TRAVELER_FP_INSTALLER:: # Future
      traveler_fixpack_version: ::TRAVELER_FP_INSTALLER_VERSION:: # Future
      # Verse Variables
      #verse_hash: ::VERSE_HASH:: # "dfad6854171e964427550454c5f006ee"
      verse_archive: ::VERSE_INSTALLER:: # HCL_Verse_3.0.0.zip
      verse_base_version: ::VERSE_INSTALLER_VERSION:: # 3.0.0
      # AppDev Web Pack Variables
      #appdevpack_hash: ::APPDEVPACK_HASH:: # "b84248ae22a57efe19dac360bd2aafc2"
      appdevpack_archive: ::APPDEVPACK_INSTALLER:: # domino-appdev-pack-1.0.15.tgz
      appdevpack_version: ::APPDEVPACK_INSTALLER_VERSION:: # 1.0.15
      # Domino Rest API Variables
      #domino_rest_api_hash: ::DOMINO_REST_API_HASH:: # "fa990f9bac800726f917cd0ca857f220"
      domino_rest_api_version: ::DOMINO_REST_API_INSTALLER_VERSION:: # 1
      domino_rest_api_archive: ::DOMINO_REST_API_INSTALLER:: # Domino_REST_API_V1_Installer.tar.gz
    roles:
      - name: startcloud_setup
      - name: startcloud_networking
      - name: startcloud_hostname
      - name: startcloud_dependencies
      - name: startcloud_service_user
      - name: startcloud_ssl
      # missing sdkman_ from template
      - name: sdkman_install
      - name: sdkman_java
      - name: sdkman_maven
      - name: sdkman_gradle
      - name: hcl_domino_reset
      - name: hcl_domino_install
      - name: hcl_domino_vagrant_rest_api
      - name: hcl_domino_service_nash
      - name: hcl_domino_java_config
      - name: hcl_domino_java_tools
      - name: hcl_domino_updatesite
      - name: hcl_domino_config
      - name: hcl_domino_genesis
      - name: hcl_domino_genesis_applications
      - name: hcl_domino_cross_certify
      ::ROLE_LEAP:: # hcl_domino_leap
      ::ROLE_NOMADWEB:: # hcl_domino_nomadweb
      ::ROLE_TRAVELER:: # hcl_domino_traveler
      ::ROLE_TRAVELER_HTMO:: # hcl_domino_traveler_htmo
      ::ROLE_VERSE:: # hcl_domino_verse
      ::ROLE_APPDEVPACK:: # hcl_domino_appdevpack
      ::ROLE_RESTAPI:: # hcl_domino_rest_api
      ::ROLE_VOLTMX:: # hcl_voltmx
      - name: hcl_domino_vagrant_readme
      ::ROLE_STARTCLOUD_QUICK_START:: # startcloud_quick_start
      ::ROLE_STARTCLOUD_HAPROXY:: # startcloud_haproxy
      ::ROLE_STARTCLOUD_VAGRANT_README:: # startcloud_vagrant_readme
```
Some notes:
- the `ROLE_*` insertion parameters in SHI
- `domino_minor_version` and `DOMINO_PATCH_VERSION`
I created an issue here with the plan to get this template updated for hcl_domino_standalone_provisioner.
@piotrzarzycki21 also expressed confusion about how to update the scripts part of the SHI template.
By my understanding, scripts generally corresponds to the file structure that will be generated for a new server, and then the Vagrantfile will be used for the vagrant commands. Similarly, the contents of the hcl_domino_standalone_provisioner release zip correspond to the Vagrant file structure, with the Vagrantfile needing to go in the same location. So, I think we need to extract the zip directly into 0.1.22/scripts, and move the templates to 0.1.22/templates. The file structure will look significantly different, but hopefully this shouldn't require too many changes on the SHI side (these changes should be focused on the generated Hosts.yml).
Note that there may be other SHI-specific files in this directory, like scripts/scripts, which will need to be copied or moved to continue supporting the SHI actions.
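To make the proposed layout concrete, here is a minimal sketch of the extraction step, assuming the release zip name and the template path quoted earlier in this thread; the function name and destination layout are illustrative, not part of SHI:

```python
import shutil
import zipfile
from pathlib import Path

def install_provisioner(zip_path: str, dest: str) -> None:
    """Extract a provisioner release zip into <dest>/scripts and move the
    Hosts template into <dest>/templates (layout assumed from the thread)."""
    scripts = Path(dest) / "scripts"
    templates = Path(dest) / "templates"
    templates.mkdir(parents=True, exist_ok=True)
    # The zip's file structure is kept as-is inside scripts/.
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(scripts)
    # Template path quoted from the 0.1.22 release earlier in the thread.
    src = scripts / "provisioners" / "ansible" / "templates" / "Hosts.template.yml.j2"
    if src.exists():
        shutil.move(str(src), str(templates / src.name))
```

Usage would then be, e.g., `install_provisioner("hcl_domino_standalone_provisioner.zip", "0.1.22")`.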
@piotrzarzycki21
The entire `scripts` directory will be copied to the server's directory as-is, regardless of its contents. Whatever is in this directory will be copied to the server's directory without further verification or modification. The Vagrant machine works in this server directory, so it needs all the files that are part of the provisioner.
The `templates` directory exists solely to contain Hosts.template.yml, which was always a manually generated file and never part of the original demo-tasks provisioner zip. It therefore does not need to be copied to the server's directory directly; it is copied/saved only after the data is filled in with the user's server settings.
There's no need to manually extract template files from the provisioner and put them in `templates`; keep the entire file structure in `scripts`.
Thanks @Igazine, that's very helpful!
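As background, filling Hosts.template.yml with the user's server settings amounts to substituting the `::PARAM::` insertion parameters used throughout the template; a minimal sketch of that substitution (the function name is illustrative, not SHI's actual implementation):

```python
import re

def render_template(template: str, values: dict) -> str:
    """Replace ::KEY:: insertion parameters with user-provided values.
    Raises KeyError if the template references a value we don't have."""
    return re.sub(r"::([A-Z0-9_]+)::", lambda m: str(values[m.group(1)]), template)

# Example using one of the parameters from Hosts.template.yml:
print(render_template("hostname: ::SERVER_HOSTNAME::", {"SERVER_HOSTNAME": "demo"}))
```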
@MarkProminic demo-tasks 0.1.20 has a version.rb file at scripts/version.rb; it was moved to core/version.rb in demo-tasks 0.1.22. I see that it contains version number v0.2.3. Is that a mistake?
@MarkProminic I have found more issues. In the Hosts.template.yml file provided by Joel I see:
Issue 1: `vagrant_user_private_key_path: ./id_rsa` is incorrect. I have changed it to `vagrant_user_private_key_path: ./core/ssh_keys/id_rsa`.
Issue 2: I ran the server and I'm getting the following error:
```
Progress: 90%
==> 1399--moon.startcloud.com: Matching MAC address for NAT networking...
==> 1399--moon.startcloud.com: Checking if box 'STARTcloud/debian12-server' version '0.0.4' is up to date...
==> 1399--moon.startcloud.com: Setting the name of the VM: 1399--moon.startcloud.com
==> 1399--moon.startcloud.com: Clearing any previously set network interfaces...
==> 1399--moon.startcloud.com: Preparing network interfaces based on configuration...
    1399--moon.startcloud.com: Adapter 1: nat
    1399--moon.startcloud.com: Adapter 2: bridged
==> 1399--moon.startcloud.com: Forwarding ports...
    1399--moon.startcloud.com: 22 (guest) => 2222 (host) (adapter 1)
==> 1399--moon.startcloud.com: Running 'pre-boot' VM customizations...
==> 1399--moon.startcloud.com: Booting VM...
==> 1399--moon.startcloud.com: Waiting for machine to boot. This may take a few minutes...
    1399--moon.startcloud.com: SSH address: 127.0.0.1:2222
    1399--moon.startcloud.com: SSH username: startcloud
    1399--moon.startcloud.com: SSH auth method: private key
==> 1399--moon.startcloud.com: Machine booted and ready!
==> 1399--moon.startcloud.com: Checking for guest additions in VM...
    1399--moon.startcloud.com: The guest additions on this VM do not match the installed version of
    1399--moon.startcloud.com: VirtualBox! In most cases this is fine, but in rare cases it can
    1399--moon.startcloud.com: prevent things such as shared folders from working properly. If you see
    1399--moon.startcloud.com: shared folder errors, please make sure the guest additions within the
    1399--moon.startcloud.com: virtual machine match the version of VirtualBox you have installed on
    1399--moon.startcloud.com: your host and reload your VM.
    1399--moon.startcloud.com:
    1399--moon.startcloud.com: Guest Additions Version: 6.0.0 r127566
    1399--moon.startcloud.com: VirtualBox Version: 7.0
==> 1399--moon.startcloud.com: Rsyncing folder: /Users/piotrzarzycki/Library/Application Support/SuperHumanInstallerDev/servers/demo-tasks/1399/ansible/ => /vagrant/ansible
==> 1399--moon.startcloud.com: Rsyncing folder: /Users/piotrzarzycki/Library/Application Support/SuperHumanInstallerDev/servers/demo-tasks/1399/installers/ => /vagrant/installers
==> 1399--moon.startcloud.com: Rsyncing folder: /Users/piotrzarzycki/Library/Application Support/SuperHumanInstallerDev/servers/demo-tasks/1399/ssls/ => /secure
==> 1399--moon.startcloud.com: Rsyncing folder: /Users/piotrzarzycki/Library/Application Support/SuperHumanInstallerDev/servers/demo-tasks/1399/safe-id-to-cross-certify/ => /safe-id-to-cross-certify
==> 1399--moon.startcloud.com: Running provisioner: ansible_local...
`playbook` does not exist on the guest: /vagrant/ansible/generate-playbook.yml
[SHI]: 'vagrant up' stopped with exit code: 1, elapsed time: 00:01:06
[SHI]: Server destroyed
```
@JoelProminic I have made the changes for demo-tasks 0.1.22 that @MarkProminic pointed out. On my machine the server started successfully. I'm building development SHI 0.9.5; you should get an update in about 30-40 minutes if you want to try it. Note that demo-tasks 0.1.20 may not work anymore; I did not test it after my updates.
@JoelProminic The dev build needs to wait until tomorrow. The build just failed on Windows with some errors in the NSIS script. @Aszusz, please look into that first thing tomorrow morning.
Yeah, I noticed that as well. I tried to rerun the build to see if it was a temporary issue, but it failed again. This is not a blocker on my work, but it would be nice to switch over to the newer provisioners before we start making changes like this.
NSIS was failing because in 0.1.22 we went over the maximum relative path length NSIS was able to handle. Note, this was not the Windows max path limit, but an internal NSIS limit for path manipulation. I worked around it by changing the NSIS execution directory first and then stripping away folders from the beginning of the path. So this path:

`/Templates/Installer/../../Export/Development/windows/bin/\assets\provisioners\demo-tasks\0.1.22\scripts\provisioners\ansible\roles\hcl_domino_java_app_example\templates\build-notesjava-standalone-example-apps.bsh`

became this path:

`Export/Development/windows/bin/\assets\provisioners\demo-tasks\0.1.22\scripts\provisioners\ansible\roles\hcl_domino_java_app_example\templates\build-notesjava-standalone-example-apps.bsh`

and this is now within the limit again.
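The idea behind the workaround (run from a deeper working directory, then drop leading components from the relative path) can be sketched as follows; this is an illustration of the technique, not the actual NSIS script:

```python
from pathlib import PurePosixPath

def strip_leading_dirs(path: str, count: int) -> str:
    """Drop the first `count` components of a relative path so the
    remainder fits within an internal path-length limit."""
    parts = PurePosixPath(path).parts
    return str(PurePosixPath(*parts[count:]))

# After changing the working directory two levels down, the first two
# components of the relative path become redundant and can be stripped.
print(strip_leading_dirs("Templates/Installer/Export/Development/windows/bin", 2))
```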
Perhaps that workaround would also help in Moonshine-IDE. @JoelProminic, you remember we had a path-length problem during the Moonshine build, and you needed to shorten the paths with the newer build plan "Moonshine GitHub (Windows, Short)" on Bamboo.
I was able to spin up a new server with the updated demo-tasks 0.1.22. However, something has changed, because the welcome.html page is no longer working, or I don't know what the right URL is. Previously we had https://ipaddress/welcome.html. @MarkProminic, can you shed some light on that matter?
I was also able to spin up a server with 0.1.22. I enabled all the applications in my test.
We investigated the welcome page with @MarkProminic, and we found that the default backend was changed to `domino`. We can fix this by updating this line from `domino` to `downloads`.
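For illustration, the fix amounts to a one-line change to the HAProxy default backend. This is a sketch only; the frontend name and the exact file location inside the startcloud_haproxy role are assumptions:

```
# haproxy.cfg fragment (illustrative)
frontend https-in
    default_backend downloads   # previously: default_backend domino
```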
I noticed a couple of issues when I first tested the 0.1.22 instance. However, they cleared up automatically in my later tests. I didn't restart the VM or the Domino server (but I did start the nomad task), and the provisioners had finished by the time I was doing my tests. I'm tracking these in case other people notice similar problems:
Remaining issues:
I did a test with Domino 12.0.1 and SHI Development 0.9.8, and confirmed that the resulting server worked as expected for FormBuilder.
I'll do a Domino 12.0.2 test as a sanity check, but otherwise the Domino 12.0.1 updates look good.
@JoelProminic I have pushed SHI 0.9.9; it contains changes that allow usage of demo-tasks 0.1.20. With the next update I will remove demo-tasks versions older than 0.1.20.
0.1.20 failed for me too (on our Solar machine) with a timeout error:
```
TASK [domino_vagrant_rest_api : Checking Vagrant CRUD Rest API is listening on port 8080] ***
fatal: [1414--old1201.planets.com]: FAILED! => {"changed": false, "elapsed": 60, "msg": "Timeout waiting for 8080 to respond"}

PLAY RECAP *********************************************************************
1414--old1201.planets.com  : ok=99   changed=64   unreachable=0    failed=1    skipped=27   rescued=0    ignored=0

Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.

[SHI]: 'vagrant up' stopped with exit code: 1, elapsed time: 00:20:12
```
Attaching the complete log: log-0.1.20.txt
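When debugging timeouts like this one, a quick way to confirm whether the REST API is actually listening on port 8080 is a plain TCP connect check; a minimal sketch (the host and port are taken from the error above, the helper name is illustrative):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("127.0.0.1", 8080) from inside the VM
```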
I tested v0.1.20 on macOS with Domino 12.0.1, and I was not able to reproduce this problem. If you continue seeing problems, you can try my debugging commands from here.
This has been working well on macOS. There are remaining problems on Windows, but 0.1.23 should help with these, so we don't want to spend more time debugging this on 0.1.22.
Release of demo-tasks v0.1.22
UPDATE: The provisioners have been moved here. We will probably release v0.1.23 with my recent official updates related to #110.