bgruening / docker-galaxy

:whale::bar_chart::books: Docker Images tracking the stable Galaxy releases.
http://bgruening.github.io/docker-galaxy
MIT License

Docker local tool does not detect available dataset #510

Closed: mhabsaoui closed this issue 3 days ago

mhabsaoui commented 5 years ago

Hi @bgruening,

I followed those exact steps to add a non-Tool Shed tool to my Galaxy instance.

But when I select my local tool in the Galaxy web interface, there is no file selector icon next to the tool input, so I cannot pick the dataset available in my history...
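For context, a local tool is registered through a tool config file like the one mounted in the `docker run` command below. A minimal sketch of such a file (the section id and the `my_tool` paths are hypothetical, not taken from this report):

```xml
<?xml version="1.0"?>
<!-- /local_tools/tools_conf.xml: registers local (non-Tool Shed) tools.
     Paths are resolved inside the container, so they must match the
     mounted volume (here /local_tools). -->
<toolbox>
  <section id="local_tools" name="Local Tools">
    <tool file="/local_tools/my_tool/my_tool.xml"/>
  </section>
</toolbox>
```

Note that the history file selector only appears for inputs declared with `<param type="data" ...>` in the tool wrapper itself; a `text` or `select` parameter renders a plain widget with no dataset picker.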

[screenshot]

Debug Info:

[screenshots]

$ docker run --rm -p 8080:80 -p 9002:9002 -e "GALAXY_LOGGING=full" -e GALAXY_CONFIG_TOOL_CONFIG_FILE=config/tool_conf.xml.sample,config/shed_tool_conf.xml.sample,/local_tools/tools_conf.xml -v /home/user/Documents/galaxy-tools/:/local_tools --name galaxy  bgruening/galaxy-stable 
Enable Galaxy reports authentification                                                                                                                                                                                                                                         
tmpfs on /proc/kcore type tmpfs (rw,nosuid,size=65536k,mode=755)                                                                                                                                                                                                               
Checking /export...                                                                                                                                                                                                                                                            
Disable Galaxy Interactive Environments. Start with --privileged to enable IE's.                                                                                                                                                                                               
Starting postgres                                                                                                                                                                                                                                                              
postgresql: started                                                                                                                                                                                                                                                            
Checking if database is up and running                                                                                                                                                                                                                                         
Database connected                                                                                                                                                                                                                                                             
Starting cron                                                                                                                                                                                                                                                                  
cron: started                                                                                                                                                                                                                                                                  
Starting ProFTP                                                                                                                                                                                                                                                                
proftpd: started                                                                                                                                                                                                                                                               
Starting Galaxy reports webapp                                                                                                                                                                                                                                                 
reports: started                                                                                                                                                                                                                                                               
Starting nodejs                                                                                                                                                                                                                                                                
galaxy:galaxy_nodejs_proxy: ERROR (spawn error)                                                                                                                                                                                                                                
Starting condor                                                                                                                                                                                                                                                                
condor: started
Starting slurmctld
Starting slurmd
Creating admin user admin@galaxy.org with key admin and password admin if not existing
Traceback (most recent call last):
  File "/usr/local/bin/create_galaxy_user.py", line 60, in <module>
    add_user(sa_session, security_agent, options.user, options.password, key=options.key, username=options.username)
  File "/usr/local/bin/create_galaxy_user.py", line 36, in add_user
    sa_session.flush()
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/scoping.py", line 153, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2313, in flush
    self._flush(objects)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2440, in _flush
    transaction.rollback(_capture_exception=True)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2404, in _flush
    flush_context.execute()
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 395, in execute
    rec.execute(self)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 560, in execute
    uow
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 181, in save_obj
    mapper, table, insert)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 872, in _emit_insert_statements
    execute(statement, params)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
    return meth(self, multiparams, params)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
    context)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
    exc_info
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
    context)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) duplicate key value violates unique constraint "ix_api_keys_key"
DETAIL:  Key (key)=(admin) already exists.
 [SQL: 'INSERT INTO api_keys (create_time, user_id, key) VALUES (%(create_time)s, %(user_id)s, %(key)s) RETURNING api_keys.id'] [parameters: {'create_time': datetime.datetime(2019, 6, 6, 17, 10, 23, 32269), 'user_id': 2, 'key': 'admin'}] (Background on this error at: http://sqlalche.me/e/gkpj)
==> /var/log/supervisor/condor-stdout---supervisor-2HBijp.log <==
06/06/19 17:10:15 SharedPortEndpoint: failed to open /var/lock/condor/shared_port_ad: No such file or directory
06/06/19 17:10:15 SharedPortEndpoint: did not successfully find SharedPortServer address. Will retry in 60s.
06/06/19 17:10:15 DaemonCore: private command socket at <172.17.0.2:0?sock=340_ca4a>
06/06/19 17:10:15 Warning: Collector information was not found in the configuration file. ClassAds will not be sent to the collector and this daemon will not join a larger Condor pool.
06/06/19 17:10:15 Adding SHARED_PORT to DAEMON_LIST, because USE_SHARED_PORT=true (to disable this, set AUTO_INCLUDE_SHARED_PORT_IN_DAEMON_LIST=False)
06/06/19 17:10:15 Master restart (GRACEFUL) is watching /usr/sbin/condor_master (mtime:1550554622)
06/06/19 17:10:15 Collector port not defined, will use default: 9618
06/06/19 17:10:16 Started DaemonCore process "/usr/lib/condor/libexec/condor_shared_port", pid and pgroup = 399
06/06/19 17:10:16 Waiting for /var/lock/condor/shared_port_ad to appear.
06/06/19 17:10:17 Found /var/lock/condor/shared_port_ad.

==> /var/log/supervisor/cron-stderr---supervisor-LUlj_7.log <==

==> /var/log/supervisor/cron-stdout---supervisor-YhISq2.log <==

==> /var/log/supervisor/docker-stdout---supervisor-K5eIO5.log <==

==> /var/log/supervisor/galaxy_nodejs_proxy-stdout---supervisor-GhEvEB.log <==
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:580:15)
    at Function.Module._load (internal/modules/cjs/loader.js:506:25)
    at Module.require (internal/modules/cjs/loader.js:636:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at Object.<anonymous> (/galaxy-central/lib/galaxy/web/proxy/js/node_modules/sqlite3/lib/sqlite3.js:4:15)
    at Module._compile (internal/modules/cjs/loader.js:688:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:699:10)
    at Module.load (internal/modules/cjs/loader.js:598:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:537:12)
    at Function.Module._load (internal/modules/cjs/loader.js:529:3)
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/condor-stdout---supervisor-2HBijp.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/cron-stderr---supervisor-LUlj_7.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/cron-stdout---supervisor-YhISq2.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/docker-stdout---supervisor-K5eIO5.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/galaxy_nodejs_proxy-stdout---supervisor-GhEvEB.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/galaxy_web-stderr---supervisor-w4TPBQ.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/galaxy_web-stdout---supervisor-9o9_EK.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /var/log/supervisor/galaxy_web-stderr---supervisor-w4TPBQ.log <==
[uWSGI] getting YAML configuration from /etc/galaxy/galaxy.yml
[uwsgi-static] added mapping for /static/style => static/style/blue
[uwsgi-static] added mapping for /static => static
[uwsgi-static] added mapping for /favicon.ico => static/favicon.ico

==> /var/log/supervisor/galaxy_web-stdout---supervisor-9o9_EK.log <==
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/handler0-stderr---supervisor-wyuaSf.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/handler0-stdout---supervisor-sLEQTW.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /var/log/supervisor/handler0-stderr---supervisor-wyuaSf.log <==
DEBUG:galaxy.app:python path is: /galaxy-central/scripts, /galaxy-central/lib, /galaxy_venv/lib/python27.zip, /galaxy_venv/lib/python2.7, /galaxy_venv/lib/python2.7/plat-linux2, /galaxy_venv/lib/python2.7/lib-tk, /galaxy_venv/lib/python2.7/lib-old, /galaxy_venv/lib/python2.7/lib-dynload, /tool_deps/_conda/lib/python2.7, /tool_deps/_conda/lib/python2.7/plat-linux2, /tool_deps/_conda/lib/python2.7/lib-tk, /galaxy_venv/lib/python2.7/site-packages
DEBUG:galaxy.containers:config file './config/containers_conf.yml' does not exist, running with default config

==> /var/log/supervisor/handler0-stdout---supervisor-sLEQTW.log <==

==> /var/log/supervisor/handler1-stderr---supervisor-uL5l5C.log <==
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
    context)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
    cursor.execute(statement, parameters)
IntegrityError: (psycopg2.IntegrityError) duplicate key value violates unique constraint "pg_type_typname_nsp_index"
DETAIL:  Key (typname, typnamespace)=(queue_id_sequence, 2200) already exists.
 [SQL: 'CREATE SEQUENCE queue_id_sequence'] (Background on this error at: http://sqlalche.me/e/gkpj)

==> /var/log/supervisor/handler1-stdout---supervisor-xncVmW.log <==
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/handler1-stderr---supervisor-uL5l5C.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/handler1-stdout---supervisor-xncVmW.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/nginx-stderr---supervisor-m4hzJF.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /var/log/supervisor/nginx-stderr---supervisor-m4hzJF.log <==
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/nginx-stdout---supervisor-8zevOx.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /var/log/supervisor/nginx-stdout---supervisor-8zevOx.log <==

==> /var/log/supervisor/postgresql-stdout---supervisor-_Zz26a.log <==
2019-06-06 17:09:34 UTC LOG:  database system is ready to accept connections
2019-06-06 17:09:34 UTC LOG:  autovacuum launcher started
2019-06-06 17:09:40 UTC LOG:  could not receive data from client: Connection reset by peer
2019-06-06 17:09:55 UTC ERROR:  duplicate key value violates unique constraint "pg_type_typname_nsp_index"
2019-06-06 17:09:55 UTC DETAIL:  Key (typname, typnamespace)=(queue_id_sequence, 2200) already exists.
2019-06-06 17:09:55 UTC STATEMENT:  CREATE SEQUENCE queue_id_sequence
2019-06-06 17:10:23 UTC ERROR:  duplicate key value violates unique constraint "ix_api_keys_key"
2019-06-06 17:10:23 UTC DETAIL:  Key (key)=(admin) already exists.
2019-06-06 17:10:23 UTC STATEMENT:  INSERT INTO api_keys (create_time, user_id, key) VALUES ('2019-06-06T17:10:23.032269'::timestamp, 2, 'admin') RETURNING api_keys.id
2019-06-06 17:10:23 UTC LOG:  could not receive data from client: Connection reset by peer
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/postgresql-stdout---supervisor-_Zz26a.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /var/log/supervisor/proftpd-stderr---supervisor-oU5lnu.log <==
2019-06-06 17:09:47,315 b50606937d7e proftpd[207] b50606937d7e: ProFTPD 1.3.5rc3 (devel) (built Fri Feb 17 2017 19:15:12 UTC) standalone mode STARTUP

==> /var/log/supervisor/proftpd-stdout---supervisor-r8AikD.log <==
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/proftpd-stderr---supervisor-oU5lnu.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/proftpd-stdout---supervisor-r8AikD.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/rabbitmq-stderr---supervisor-ty9yTM.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/rabbitmq-stdout---supervisor-85gICp.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/reports-stderr---supervisor-fnPclk.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /var/log/supervisor/rabbitmq-stderr---supervisor-ty9yTM.log <==
/usr/local/bin/rabbitmq.sh: 41: .: Can't open /usr/local/bin/rabbitmq-defaults
/usr/local/bin/rabbitmq.sh: 41: .: Can't open /usr/local/bin/rabbitmq-defaults
/usr/local/bin/rabbitmq.sh: 41: .: Can't open /usr/local/bin/rabbitmq-defaults
/usr/local/bin/rabbitmq.sh: 41: .: Can't open /usr/local/bin/rabbitmq-defaults

==> /var/log/supervisor/rabbitmq-stdout---supervisor-85gICp.log <==

==> /var/log/supervisor/reports-stderr---supervisor-fnPclk.log <==
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/reports-stdout---supervisor-AaAGSi.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /var/log/supervisor/reports-stdout---supervisor-AaAGSi.log <==

==> /var/log/supervisor/supervisord.log <==
2019-06-06 17:10:10,215 INFO exited: galaxy_nodejs_proxy (exit status 1; not expected)
2019-06-06 17:10:11,239 INFO spawned: 'galaxy_nodejs_proxy' with pid 330
2019-06-06 17:10:11,417 INFO exited: galaxy_nodejs_proxy (exit status 1; not expected)
2019-06-06 17:10:12,628 INFO spawned: 'condor' with pid 340
2019-06-06 17:10:13,503 INFO spawned: 'galaxy_nodejs_proxy' with pid 341
2019-06-06 17:10:13,674 INFO success: condor entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2019-06-06 17:10:13,676 INFO exited: galaxy_nodejs_proxy (exit status 1; not expected)
2019-06-06 17:10:16,886 INFO spawned: 'galaxy_nodejs_proxy' with pid 400
2019-06-06 17:10:17,022 INFO exited: galaxy_nodejs_proxy (exit status 1; not expected)
2019-06-06 17:10:17,961 INFO gave up: galaxy_nodejs_proxy entered FATAL state, too many start retries too quickly
tail: unrecognized file system type 0x794c7630 for ‘/var/log/supervisor/supervisord.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /var/log/nginx/access.log <==
tail: unrecognized file system type 0x794c7630 for ‘/var/log/nginx/access.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/var/log/nginx/error.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /var/log/nginx/error.log <==
2019/04/02 21:22:56 [notice] 1470#0: signal process started
2019/04/02 21:22:56 [error] 1470#0: open() "/run/nginx.pid" failed (2: No such file or directory)
2019/04/02 21:27:00 [notice] 4727#0: signal process started
2019/04/02 21:27:00 [error] 4727#0: open() "/run/nginx.pid" failed (2: No such file or directory)
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/handler0.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/handler0.log <==
galaxy.jobs.handler DEBUG 2019-06-06 17:09:54,885 Loaded job runners plugins: local:slurm
galaxy.jobs.handler INFO 2019-06-06 17:09:54,886 job handler stop queue started
galaxy.jobs.handler DEBUG 2019-06-06 17:09:54,886 Handler queue starting for jobs assigned to handler: handler0
galaxy.web.stack.message DEBUG 2019-06-06 17:09:55,095 Bound default message handler 'JobHandlerMessage.default_handler' to <bound method JobHandlerQueue.default_handler of <galaxy.jobs.handler.JobHandlerQueue object at 0x7fb0b3281790>>
galaxy.jobs.handler INFO 2019-06-06 17:09:55,096 job handler queue started
galaxy.jobs.handler INFO 2019-06-06 17:09:55,101 job handler stop queue started
galaxy.workflow.scheduling_manager DEBUG 2019-06-06 17:09:55,105 Starting workflow schedulers
galaxy.app INFO 2019-06-06 17:09:55,117 Galaxy app startup finished (11552.599 ms)
galaxy.queue_worker INFO 2019-06-06 17:09:55,117 Binding and starting galaxy control worker for handler0
galaxy.web.stack INFO 2019-06-06 17:09:55,129 Galaxy server instance 'handler0' is running
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/handler1.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/handler1.log <==
galaxy.jobs.handler DEBUG 2019-06-06 17:09:54,885 Loaded job runners plugins: local:slurm
galaxy.jobs.handler INFO 2019-06-06 17:09:54,886 job handler stop queue started
galaxy.jobs.handler DEBUG 2019-06-06 17:09:54,886 Handler queue starting for jobs assigned to handler: handler1
galaxy.web.stack.message DEBUG 2019-06-06 17:09:55,095 Bound default message handler 'JobHandlerMessage.default_handler' to <bound method JobHandlerQueue.default_handler of <galaxy.jobs.handler.JobHandlerQueue object at 0x7f2bc8587fd0>>
galaxy.jobs.handler INFO 2019-06-06 17:09:55,097 job handler queue started
galaxy.jobs.handler INFO 2019-06-06 17:09:55,100 job handler stop queue started
galaxy.workflow.scheduling_manager DEBUG 2019-06-06 17:09:55,103 Starting workflow schedulers
galaxy.app INFO 2019-06-06 17:09:55,117 Galaxy app startup finished (7800.415 ms)
galaxy.queue_worker INFO 2019-06-06 17:09:55,117 Binding and starting galaxy control worker for handler1
galaxy.web.stack INFO 2019-06-06 17:09:55,131 Galaxy server instance 'handler1' is running
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/reports.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/slurmctld.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/slurmd.log’. please report this to bug-coreutils@gnu.org. reverting to polling
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/uwsgi.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/reports.log <==
galaxy.web.framework.base DEBUG 2019-06-06 17:09:49,505 Enabling 'root' controller, class: Report
galaxy.web.framework.base DEBUG 2019-06-06 17:09:49,508 Enabling 'history' controller, class: History
galaxy.web.framework.base DEBUG 2019-06-06 17:09:49,513 Enabling 'workflows' controller, class: Workflows
galaxy.webapps.util DEBUG 2019-06-06 17:09:49,515 Enabling 'paste.httpexceptions' middleware
galaxy.webapps.util DEBUG 2019-06-06 17:09:49,541 Enabling 'RecursiveMiddleware' middleware
galaxy.webapps.util DEBUG 2019-06-06 17:09:49,672 Enabling 'ErrorMiddleware' middleware
galaxy.webapps.util DEBUG 2019-06-06 17:09:49,684 Enabling 'TransLogger' middleware
galaxy.webapps.util DEBUG 2019-06-06 17:09:49,695 Enabling 'XForwardedHostMiddleware' middleware
Starting server in PID 215.
serving on http://127.0.0.1:9001

==> /home/galaxy/logs/slurmctld.log <==
[2019-06-06T17:10:14.957] error: Could not open trigger state file /tmp/slurm/trigger_state: No such file or directory
[2019-06-06T17:10:14.957] error: NOTE: Trying backup state save file. Triggers may be lost!
[2019-06-06T17:10:14.957] No trigger state file (/tmp/slurm/trigger_state.old) to recover
[2019-06-06T17:10:14.957] error: Incomplete trigger data checkpoint file
[2019-06-06T17:10:14.957] read_slurm_conf: backup_controller not specified.
[2019-06-06T17:10:14.958] Reinitializing job accounting state
[2019-06-06T17:10:14.958] cons_res: select_p_reconfigure
[2019-06-06T17:10:14.958] cons_res: select_p_node_init
[2019-06-06T17:10:14.958] cons_res: preparing for 1 partitions
[2019-06-06T17:10:14.958] Running as primary controller

==> /home/galaxy/logs/slurmd.log <==
[2019-06-06T17:10:14.475] Node configuration differs from hardware: CPUs=8:8(hw) Boards=1:1(hw) SocketsPerBoard=8:1(hw) CoresPerSocket=1:4(hw) ThreadsPerCore=1:2(hw)
[2019-06-06T17:10:14.487] Gathering cpu frequency information for 8 cpus
[2019-06-06T17:10:14.505] slurmd version 2.6.5 started
[2019-06-06T17:10:14.608] slurmd started on Thu, 06 Jun 2019 17:10:14 +0000
[2019-06-06T17:10:14.608] CPUs=8 Boards=1 Sockets=8 Cores=1 Threads=1 Memory=7894 TmpDisk=929900 Uptime=957677

==> /home/galaxy/logs/uwsgi.log <==
galaxy.web.stack DEBUG 2019-06-06 17:09:59,081 [p:292,w:1,m:0] [MainThread] Calling postfork function: <function postfork_setup at 0x7f4ade352b18>
galaxy.queue_worker INFO 2019-06-06 17:09:59,082 [p:292,w:1,m:0] [MainThread] Binding and starting galaxy control worker for main.web.1
galaxy.web.stack DEBUG 2019-06-06 17:09:59,081 [p:295,w:2,m:0] [MainThread] Calling postfork function: <function register at 0x7f4ae944eed8>
galaxy.web.stack DEBUG 2019-06-06 17:09:59,083 [p:295,w:2,m:0] [MainThread] Calling postfork function: <function postfork_setup at 0x7f4ade352b18>
galaxy.queue_worker INFO 2019-06-06 17:09:59,083 [p:295,w:2,m:0] [MainThread] Binding and starting galaxy control worker for main.web.2
galaxy.web.stack INFO 2019-06-06 17:09:59,096 [p:292,w:1,m:0] [MainThread] Galaxy server instance 'main.web.1' is running
Starting server in PID 116.
serving on http://127.0.0.1:8080
serving on uwsgi://127.0.0.1:4001
galaxy.web.stack INFO 2019-06-06 17:09:59,101 [p:295,w:2,m:0] [MainThread] Galaxy server instance 'main.web.2' is running
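Stepping outside the logs for a moment: since the tool itself loads but shows no dataset picker, it is worth checking the wrapper's `<inputs>` section programmatically. A minimal Python sketch (the wrapper XML below is a hypothetical example, not the reporter's actual tool) that verifies at least one input parameter has type `data`, which is what makes Galaxy render the history file selector:

```python
# Sketch: check whether a Galaxy tool wrapper declares any dataset input
# (type="data"). Without one, the tool form shows no history file
# selector, which matches the symptom reported above.
import xml.etree.ElementTree as ET

# Hypothetical wrapper for illustration only.
wrapper = """<tool id="my_local_tool" name="My local tool" version="0.1.0">
    <command>cat '$input' > '$output'</command>
    <inputs>
        <param name="input" type="data" format="txt" label="Input dataset"/>
    </inputs>
    <outputs>
        <data name="output" format="txt"/>
    </outputs>
</tool>"""

root = ET.fromstring(wrapper)
# Collect the names of all params declared as dataset inputs.
data_params = [p.get("name") for p in root.iter("param")
               if p.get("type") == "data"]
print(data_params)  # prints ['input']
```

If the list comes back empty, the fix belongs in the tool wrapper, not in the Docker setup or the mounted config file.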

Any suggestions are welcome...

Thanks

bgruening commented 3 days ago

Thanks to @jyotipm29 we are back on track :smile: The new 24.1 image contains many changes and reflects the latest developments in Galaxy. I would like to close this issue, but please feel free to reopen it and retest against the latest version.

Please give it a try:

docker run -p 8080:80 -p 8021:21 -p 4002:4002 --privileged=true -e "GALAXY_DESTINATIONS_DEFAULT=slurm_cluster_docker"   -v /tmp/galaxy-data/:/export/ quay.io/bgruening/galaxy:24.1

... or any other combination. The README has been updated; please add any useful tips to it.

For a list of changes, see the Changelog.