After copying the vars files to match the name of the group and removing the variable pointing to a local path:
(idr-ansible) (base) sbesson@ls30630:ansible ((db001bb...)) $ diff group_vars/searchengine_vars.yml group_vars/management-hosts.yml
11d10
< ansible_python_interpreter: path/to/bin/python
the playbook executed until:
TASK [configure elasticsearch for docker searchengine] **********************************************************************************************************************************************
fatal: [test104-management]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "msg": "Docker SDK for Python version is 1.10.6 (test104-management.novalocal's Python /usr/bin/python). Minimum version required is 2.1.0 to set auto_remove option. Try `pip uninstall docker-py` followed by `pip install docker`."}
PLAY RECAP *******************************************************************************************************************************************************************************************
test104-management : ok=12 changed=11 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Possible options to move forward are:
- comment out the auto_remove option for now (does any behavior depend on it?)
- upgrade the Docker SDK for Python required by the docker module

auto_remove instructs Docker to delete the container after it exits. I think we may comment it out for the time being, what do you think?
Agreed, let's comment it out and come back to it later in the testing.
Sorry, I should have mentioned before that I commented out auto_remove and pushed the playbooks yesterday.
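For reference, a minimal sketch of what the commented-out option could look like, assuming the container is started with the community.docker docker_container module (the task and variable names here are illustrative, not taken from the actual playbooks):

- name: Run the searchengine container   # hypothetical task name
  community.docker.docker_container:
    name: searchengine
    image: "{{ searchengine_image }}"    # hypothetical variable
    state: started
    # auto_remove requires Docker SDK for Python >= 2.1.0 on the target host,
    # so it stays commented out until the SDK is upgraded:
    # auto_remove: true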
Added a minimal configuration to proxy port 5567 under the /searchengine endpoint:
TASK [ome.nginx_proxy : nginx | proxy config] *************************************************************************************************
--- before: /etc/nginx/conf.d/proxy-default.conf
+++ after: /Users/sbesson/.ansible/tmp/ansible-local-26677fmuyzkn/tmp_tkpwlmz/nginx-confd-proxy.j2
@@ -253,15 +253,6 @@
}
- location ^~ /searchengine {
- proxy_pass http://searchengine/;
- proxy_redirect http://searchengine $scheme://$server_name;
-
-
- proxy_ignore_headers "Set-Cookie" "Vary" "Expires";
- proxy_hide_header Set-Cookie;
- }
-
add_header Access-Control-Allow-Origin $allow_origin;
changed: [test104-proxy] => (item={'nginx_proxy_is_default': True, 'nginx_proxy_additional_directives': ['add_header Access-Control-Allow-Origin $allow_origin']})
ok: [test104-proxy] => (item={'nginx_proxy_server_name': 'cachebuster', 'nginx_proxy_listen_http': 0, 'nginx_proxy_ssl': False, 'nginx_proxy_cachebuster_enabled': True, 'nginx_proxy_backends': [{'name': 'omerocached', 'location': '~ /webclient/metadata_*|/webclient/render_*|/webclient/get_thumbnail*|/webgateway/metadata_*|/webgateway/render_*|/webgateway/get_thumbnail*|/webclient/api/*|/webclient/search/*|/api/*|/webclient/img_detail/*|/iviewer/*|/figure/*|/gallery-api/*|/mapr/*', 'server': 'http://omeroreadwrite', 'cache_validity': '1d', 'read_timeout': 900}, {'name': 'omerostatic', 'location': '~ /static/*', 'server': 'http://omeroreadwrite', 'cache_validity': '1d'}, {'name': 'omero', 'location': '/', 'server': 'http://omeroreadwrite'}]})
ok: [test104-proxy] => (item={'nginx_proxy_server_name': 'idr-demo.openmicroscopy.org', 'nginx_proxy_ssl': True, 'nginx_proxy_redirect_map_locations': [], 'nginx_proxy_direct_locations': [{'location': '/', 'redirect301': '$scheme://idr.openmicroscopy.org$request_uri'}], 'nginx_proxy_backends': []})
TASK [ome.nginx_proxy : nginx | proxy upstream servers] ***************************************************************************************
--- before: /etc/nginx/conf.d/proxy-upstream.conf
+++ after: /Users/sbesson/.ansible/tmp/ansible-local-26677fmuyzkn/tmplufdxnu2/nginx-confd-proxy-upstream.j2
@@ -13,6 +13,3 @@
upstream omeroreadwrite {
server 192.168.3.22;
}
-upstream searchengine {
- server 192.168.3.120:5567;
-}
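For context, this is the kind of group_vars entry behind the /searchengine block shown in the diff above, assuming the nginx_proxy_backends layout visible in the task output (the exact keys used for the searchengine backend are an assumption):

nginx_proxy_backends:
  - name: searchengine
    location: ^~ /searchengine
    server: http://192.168.3.120:5567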
Following this morning's discussion, we are currently running into two issues:
1. the management VM currently used for the deployment is relatively small, with 8GB RAM and 4 VCPUs. When moving to a production state, we might consider hardening this configuration and provisioning the searchengine VM with more resources, similarly to the OMERO ro/rw VMs.
2. the Host header passed to the backend under /searchengine (proxy_set_header Host $host) still needs to be sorted out; see the sketch below.
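One possible direction for the second point, reusing the nginx_proxy_additional_directives key that appears in the task output above (whether it applies at the right scope for the /searchengine location is an assumption):

nginx_proxy_additional_directives:
  - proxy_set_header Host $host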
I have pushed changes to run on the searchengine-hosts group and removed the HDF5 caching service, as all the cached data is now saved in Elasticsearch.
I have renamed the files and increased cache_rows to 50000; I think we can increase it further.
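For illustration, the corresponding entry in the renamed vars file (assuming it is a top-level key there):

cache_rows: 50000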
I have reverted the renaming of dockermanager-hosts.yml and renamed the other files.
These are the modifications to fix the issues with displaying the Swagger documents using the searchengineapi URL:
I have added a variable to "searchengine-hosts.yml"; its value equals the URL prefix part (searchengineapi):
searchengineurlprefix: "searchengineapi"
It will be used to set the script_name when running gunicorn.
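As an illustration of one way this can be wired up (the actual mechanism in the playbooks is not shown here; gunicorn honours the standard WSGI SCRIPT_NAME environment variable):

- name: Run the searchengine API container   # hypothetical task
  community.docker.docker_container:
    name: searchengineapi
    image: "{{ searchengine_image }}"        # hypothetical variable
    env:
      # searchengineurlprefix comes from searchengine-hosts.yml (see above)
      SCRIPT_NAME: "/{{ searchengineurlprefix }}"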
Also, I have changed the Nginx configuration at the searchengineapi section:
location ^~ /searchengineapi {
    proxy_pass http://searchengineapi/searchengineapi;
    proxy_redirect http://searchengineapi/searchengineapi $scheme://$server_name;
}
I have added playbooks to deploy the searchengine, the searchengine client and Elasticsearch. "management-searchengine.yml" is used to configure and run all three applications. There is a variables file (searchengine_vars.yml) that the user needs to customize before running the playbook. After deploying the apps using the playbook, another playbook (run_searchengine_index_cache_services.yml) needs to be run for caching and indexing. As the caching and indexing processes take a long time, two further playbooks, check_indexing_service.yml and check_caching_service.yml, enable the user to check whether they have finished.
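To make the order of operations concrete, a hypothetical wrapper playbook (the playbook names are from the description above; the wrapper itself is just an illustration, and the playbooks can equally be run one at a time with ansible-playbook):

# hypothetical site-searchengine.yml showing the sequence
- import_playbook: management-searchengine.yml
# start the long-running caching and indexing services:
- import_playbook: run_searchengine_index_cache_services.yml
# later, check whether indexing and caching have finished:
- import_playbook: check_indexing_service.yml
- import_playbook: check_caching_service.yml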