Payara isn't starting because you are patching Payara with old jars.

TASK [dataverse : remove old weld jar]
TASK [dataverse : get patched weld jar]
TASK [dataverse : remove old grizzly jar]
TASK [dataverse : get patched grizzly jar]
@smillidge ah, that makes sense. You're talking about...
- name: remove old weld jar
  file: name={{ glassfish_dir }}/glassfish/modules/weld-osgi-bundle.jar state=absent
- name: get patched weld jar
  get_url: url=http://central.maven.org/maven2/org/jboss/weld/weld-osgi-bundle/2.2.10.SP1/weld-osgi-bundle-2.2.10.SP1-glassfish4.jar
           dest={{ glassfish_dir }}/glassfish/modules owner=root group=root mode=0644
- name: remove old grizzly jar
  file: name={{ glassfish_dir }}/glassfish/modules/glassfish-grizzly-extra-all.jar state=absent
- name: get patched grizzly jar
  get_url: url=http://guides.dataverse.org/en/latest/_static/installation/files/issues/2180/grizzly-patch/glassfish-grizzly-extra-all.jar
           dest={{ glassfish_dir }}/glassfish/modules owner=root group=root mode=0644
@donsizemore maybe I could just delete these lines? You'd probably do something fancier and tie it to the version, i.e. whether the user is trying to use Glassfish 4.1 (where we absolutely need these patches) or Payara 5 (where we absolutely don't want them). I'm not very good at Ansible, so maybe I'll just try deleting those lines, in a branch, of course.
@pdurbin yes you’re welcome to delete them. i probably won’t make it back to work today, but i can add a switch on monday?
@donsizemore thanks, I'll see what I can do. Take care today and have a great weekend!
I just drafted pull request #69 to delete those weld and grizzly lines.
It might be nice to delete this "suppress grizzly ajp warnings" stuff too but I'll hold off:
$ ack -i griz
tasks/dataverse-postinstall.yml
32:- name: suppress grizzly ajp warnings
35: shell: "{{ glassfish_dir }}/bin/asadmin set-log-levels org.glassfish.grizzly.http.server.util.RequestUtils=SEVERE"
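For reference, those two lines reassemble into a task that presumably looks like this (the indentation is my reconstruction from the ack output):

- name: suppress grizzly ajp warnings
  shell: "{{ glassfish_dir }}/bin/asadmin set-log-levels org.glassfish.grizzly.http.server.util.RequestUtils=SEVERE"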
I'll go hack on my ec2 create script some more to switch to the new branch. At some point we should either open an issue at https://github.com/IQSS/dataverse/issues to add a flag to the ec2 create script for this, or create an issue to move the whole script from "dataverse" to "dataverse-ansible" so we can iterate on it faster. It depends on dataverse-ansible a ton already.
i fixed this with a bunch of when: dataverse.glassfish.zipurl is match(".*glassfish-4.1.zip") checks.
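applied to the weld task quoted above, that looks roughly like this (a sketch, not the exact diff):

- name: remove old weld jar
  file: name={{ glassfish_dir }}/glassfish/modules/weld-osgi-bundle.jar state=absent
  when: dataverse.glassfish.zipurl is match(".*glassfish-4.1.zip")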
Now that @smillidge himself is making pull requests, such as https://github.com/IQSS/dataverse/pull/5894,
I thought I'd play around with the new "dir" and "zipurl" settings added in 534624e. Here's a screenshot of those settings.
Here's my main.yml where I'm using the "dir" and "zipurl" settings: main.yml.txt
Otherwise, my main.yml is the same as https://github.com/IQSS/dataverse-ansible/blob/d1dcc4cd83db536de38bdc1c4ae04abb2741eed5/defaults/main.yml
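Judging from the dataverse.glassfish.zipurl variable @donsizemore mentioned above, the overrides in my main.yml look roughly like this (the directory and URL below are placeholders, not the real values from my main.yml.txt):

dataverse:
  glassfish:
    dir: /usr/local/payara5                  # placeholder install dir
    zipurl: https://example.com/payara-5.zip # placeholder; the real Payara 5 zip URL goes here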
I have a slightly modified version of ec2-create-instance.sh to work around an Ansible problem I'm having. Here's my workaround: https://github.com/IQSS/dataverse-ansible/issues/68#issuecomment-497514031 . My ec2 create script is otherwise the same as https://github.com/IQSS/dataverse/blob/1a9808beb317a1092711e4b379af0eefca7f9c4d/scripts/installer/ec2-create-instance.sh
I just tried this:
ec2-create-instance.sh -g main.yml -r https://github.com/smillidge/dataverse.git -b 5893-make-flyway-ejb-valid
The run ended with this error:
"stdout": "Waiting for domain1 to start .....Command start-domain failed.", "stdout_lines": ["Waiting for domain1 to start .....Command start-domain failed."]} to retry, use: --limit @/home/centos/dataverse/dataverse.retry
PLAY RECAP ***** localhost : ok=50 changed=34 unreachable=0 failed=1
[WARNING]: Module remote_tmp /home/glassfish/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Here's the output from the full run: payara5test.d1dcc4c.txt
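That remote_tmp warning above looks harmless for this run, but if we wanted to quiet it, I imagine a task along these lines would pre-create the directory with the right permissions (the glassfish owner/group are my guess):

- name: pre-create Ansible remote_tmp for the glassfish user
  file:
    path: /home/glassfish/.ansible/tmp
    state: directory
    owner: glassfish
    group: glassfish
    mode: "0700"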