bgruening / docker-galaxy-exome-seq

Galaxy Docker Image for Exome sequencing
MIT License

Using in-built reference genomes #13

Closed markdunning closed 5 years ago

markdunning commented 5 years ago

Hi,

I'm really keen to use this image for some of my teaching. The plan is to install the image on an Amazon instance that the students can access rather than relying on the public Galaxy.

According to these instructions, I should be able to use the in-built reference genomes by adding the --privileged flag:

https://github.com/bgruening/docker-galaxy-stable/blob/master/README.md#cvmfs

However, with the following docker command I'm not seeing any reference genomes under the "Select reference genome" option of bwa:

docker run -d -p 8080:80 -p 8021:21 -p 8022:22 --privileged bgruening/galaxy-exome-seq

Is there something else I need to do?

bgruening commented 5 years ago

Hi Mark,

I just built an 18.09 version, so you might want to try that one. Can you please go into the container and run ls -l /cvmfs/data.galaxyproject.org? Does that work? Do you see this:

(base) root@96a2bfa16668:/galaxy-central# ls -l /cvmfs/data.galaxyproject.org
total 9
drwxr-xr-x 207 cvmfs cvmfs 4096 Apr  2  2018 byhand
drwxr-xr-x  15 cvmfs cvmfs 4096 Aug 23 19:12 managed
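To get a shell inside the running container, something along these lines should do (the container ID is just whatever docker ps reports for the Galaxy container):

docker exec -it <container_id> bash
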
bgruening commented 5 years ago

@markdunning I just tested it with the latest 18.09 version and it seems to work.

docker run -i -t -p 8080:80 --privileged --rm bgruening/galaxy-exome-seq:18.09
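If the 18.09 tag is not present locally yet, pulling it explicitly first makes sure you are not testing an older cached image:

docker pull bgruening/galaxy-exome-seq:18.09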

[screenshot]

ping also @wm75

markdunning commented 5 years ago

Hi, thanks for the new build. It's still not working, I'm afraid. I used the command you suggested; however, the reference genomes are still not showing in the dropdown, and listing /cvmfs didn't show anything. I'm running Docker on Windows 10, if that makes a difference.

Here are the messages I got when starting the container. Thanks a lot!

Enable Galaxy reports authentification 
Enable Galaxy Interactive Environments.
Starting postgres
postgresql: started
Checking if database is up and running
Database connected
Starting cron
cron: started
Starting ProFTP
proftpd: started
Starting Galaxy reports webapp
reports: started
Starting nodejs
galaxy:galaxy_nodejs_proxy: started
Starting condor
condor: started
Starting slurmctld
Starting slurmd
docker: ERROR (spawn error)
Creating admin user admin@galaxy.org with key admin and password admin if not existing
Traceback (most recent call last):
  File "/usr/local/bin/create_galaxy_user.py", line 60, in <module>
    add_user(sa_session, security_agent, options.user, options.password, key=options.key, username=options.username)
  File "/usr/local/bin/create_galaxy_user.py", line 36, in add_user
    sa_session.flush()
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/scoping.py", line 153, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2254, in flush
    self._flush(objects)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2380, in _flush
    transaction.rollback(_capture_exception=True)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2344, in _flush
    flush_context.execute()
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 391, in execute
    rec.execute(self)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 556, in execute
    uow
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 181, in save_obj
    mapper, table, insert)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 866, in _emit_insert_statements
    execute(statement, params)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
    return meth(self, multiparams, params)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
    context)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
    exc_info
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
    context)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) duplicate key value violates unique constraint "ix_api_keys_key"
DETAIL:  Key (key)=(admin) already exists.
 [SQL: 'INSERT INTO api_keys (create_time, user_id, key) VALUES (%(create_time)s, %(user_id)s, %(key)s) RETURNING api_keys.id'] [parameters: {'create_time': datetime.datetime(2019, 1, 28, 10, 1, 1, 278768), 'user_id': 2, 'key': 'admin'}] (Background on this error at: http://sqlalche.me/e/gkpj)
==> /home/galaxy/logs/handler0.log <==
galaxy.jobs.handler DEBUG 2019-01-28 10:00:40,589 Loaded job runners plugins: local:slurm
galaxy.jobs.handler INFO 2019-01-28 10:00:40,591 job handler stop queue started
galaxy.jobs.handler DEBUG 2019-01-28 10:00:40,591 Handler queue starting for jobs assigned to handler: handler0
galaxy.web.stack.message DEBUG 2019-01-28 10:00:40,599 Bound default message handler 'JobHandlerMessage.default_handler' to <bound method JobHandlerQueue.default_handler of <galaxy.jobs.handler.JobHandlerQueue object at 0x7fd0c9924110>>
galaxy.jobs.handler INFO 2019-01-28 10:00:40,599 job handler queue started
galaxy.jobs.handler INFO 2019-01-28 10:00:40,600 job handler stop queue started
galaxy.workflow.scheduling_manager DEBUG 2019-01-28 10:00:40,601 Starting workflow schedulers
galaxy.app INFO 2019-01-28 10:00:40,633 Galaxy app startup finished (16965.471 ms)
galaxy.queue_worker INFO 2019-01-28 10:00:40,633 Binding and starting galaxy control worker for handler0
galaxy.web.stack INFO 2019-01-28 10:00:40,658 Galaxy server instance 'handler0' is running
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/handler0.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/handler1.log <==
galaxy.jobs.handler DEBUG 2019-01-28 10:00:40,585 Loaded job runners plugins: local:slurm
galaxy.jobs.handler INFO 2019-01-28 10:00:40,586 job handler stop queue started
galaxy.jobs.handler DEBUG 2019-01-28 10:00:40,586 Handler queue starting for jobs assigned to handler: handler1
galaxy.web.stack.message DEBUG 2019-01-28 10:00:40,608 Bound default message handler 'JobHandlerMessage.default_handler' to <bound method JobHandlerQueue.default_handler of <galaxy.jobs.handler.JobHandlerQueue object at 0x7f313d345d90>>
galaxy.jobs.handler INFO 2019-01-28 10:00:40,610 job handler queue started
galaxy.jobs.handler INFO 2019-01-28 10:00:40,613 job handler stop queue started
galaxy.workflow.scheduling_manager DEBUG 2019-01-28 10:00:40,615 Starting workflow schedulers
galaxy.app INFO 2019-01-28 10:00:40,629 Galaxy app startup finished (16971.659 ms)
galaxy.queue_worker INFO 2019-01-28 10:00:40,629 Binding and starting galaxy control worker for handler1
galaxy.web.stack INFO 2019-01-28 10:00:40,653 Galaxy server instance 'handler1' is running
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/handler1.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/reports.log <==
galaxy.web.framework.base DEBUG 2019-01-28 10:00:33,628 Enabling 'root' controller, class: Report
galaxy.web.framework.base DEBUG 2019-01-28 10:00:33,631 Enabling 'system' controller, class: System
galaxy.web.framework.base DEBUG 2019-01-28 10:00:33,634 Enabling 'users' controller, class: Users
galaxy.webapps.util DEBUG 2019-01-28 10:00:33,635 Enabling 'paste.httpexceptions' middleware
galaxy.webapps.util DEBUG 2019-01-28 10:00:33,638 Enabling 'RecursiveMiddleware' middleware
galaxy.webapps.util DEBUG 2019-01-28 10:00:33,651 Enabling 'ErrorMiddleware' middleware
galaxy.webapps.util DEBUG 2019-01-28 10:00:33,651 Enabling 'TransLogger' middleware
galaxy.webapps.util DEBUG 2019-01-28 10:00:33,655 Enabling 'XForwardedHostMiddleware' middleware
Starting server in PID 243.
serving on http://127.0.0.1:9001
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/reports.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/slurmctld.log <==
[2019-01-28T10:00:57.950] No trigger state file (/tmp/slurm/trigger_state.old) to recover
[2019-01-28T10:00:57.950] error: Incomplete trigger data checkpoint file
[2019-01-28T10:00:57.950] read_slurm_conf: backup_controller not specified.
[2019-01-28T10:00:57.950] Reinitializing job accounting state
[2019-01-28T10:00:57.950] cons_res: select_p_reconfigure
[2019-01-28T10:00:57.950] cons_res: select_p_node_init
[2019-01-28T10:00:57.950] cons_res: preparing for 1 partitions
[2019-01-28T10:00:57.950] Running as primary controller
[2019-01-28T10:00:57.955] error: slurm_receive_msg: Zero Bytes were transmitted or received
[2019-01-28T10:00:57.965] error: slurm_receive_msg: Zero Bytes were transmitted or received
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/slurmctld.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/slurmd.log <==
[2019-01-28T10:00:57.932] Node configuration differs from hardware: CPUs=2:2(hw) Boards=1:1(hw) SocketsPerBoard=2:1(hw) CoresPerSocket=1:2(hw) ThreadsPerCore=1:1(hw)
[2019-01-28T10:00:57.933] CPU frequency setting not configured for this node
[2019-01-28T10:00:57.944] slurmd version 2.6.5 started
[2019-01-28T10:00:57.946] slurmd started on Mon, 28 Jan 2019 10:00:57 +0000
[2019-01-28T10:00:57.947] CPUs=2 Boards=1 Sockets=2 Cores=1 Threads=1 Memory=1980 TmpDisk=990 Uptime=2265
[2019-01-28T10:00:57.954] error: Munge encode failed: Failed to access "/var/run/munge/munge.socket.2": No such file or directory (retrying ...)
[2019-01-28T10:00:57.954] error: Munge encode failed: Failed to access "/var/run/munge/munge.socket.2": No such file or directory (retrying ...)
[2019-01-28T10:00:57.954] error: Munge encode failed: Failed to access "/var/run/munge/munge.socket.2": No such file or directory
[2019-01-28T10:00:57.954] error: authentication: Socket communication error
[2019-01-28T10:00:57.954] error: Unable to register: Protocol authentication error
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/slurmd.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/uwsgi.log <==
galaxy.web.stack INFO 2019-01-28 10:00:41,349 [p:287,w:2,m:0] [MainThread] Galaxy server instance 'main.web.2' is running
galaxy.web.stack DEBUG 2019-01-28 10:00:41,349 [p:284,w:1,m:0] [MainThread] Calling postfork function: <bound method JobManager.start of <galaxy.jobs.manager.JobManager object at 0x7fa38d725590>>
galaxy.web.stack DEBUG 2019-01-28 10:00:41,349 [p:284,w:1,m:0] [MainThread] Calling postfork function: <bound method UWSGIApplicationStack.start of <galaxy.web.stack.UWSGIApplicationStack object at 0x7fa3a8e3dc50>>
galaxy.web.stack DEBUG 2019-01-28 10:00:41,349 [p:284,w:1,m:0] [MainThread] Calling postfork function: <function register at 0x7fa3ae082ed8>
galaxy.web.stack DEBUG 2019-01-28 10:00:41,349 [p:284,w:1,m:0] [MainThread] Calling postfork function: <function postfork_setup at 0x7fa394bb58c0>
galaxy.queue_worker INFO 2019-01-28 10:00:41,349 [p:284,w:1,m:0] [MainThread] Binding and starting galaxy control worker for main.web.1
galaxy.web.stack INFO 2019-01-28 10:00:41,356 [p:284,w:1,m:0] [MainThread] Galaxy server instance 'main.web.1' is running
Starting server in PID 190.
serving on http://127.0.0.1:8080
serving on uwsgi://127.0.0.1:4001
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/uwsgi.log’. please report this to bug-coreutils@gnu.org. reverting to polling
bgruening commented 5 years ago

Mh, I don't have access to any Windows machine. Is there a chance you can test it under Linux?
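
If you do want to keep poking at the Windows setup in the meantime, one speculative thing to check is whether FUSE is available inside the container at all, since the CVMFS mount is a FUSE filesystem (I'm not 100% sure the cvmfs_config client tool is on the PATH in this image, so treat the second command as a guess):

docker exec -it <container_id> ls -l /dev/fuse
docker exec -it <container_id> cvmfs_config probe data.galaxyproject.org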

markdunning commented 5 years ago

It seems to work under Linux, although I still get some error messages:

Enable Galaxy reports authentification 
Enable Galaxy Interactive Environments.
Starting postgres
postgresql: started
Checking if database is up and running
Database connected
Starting cron
cron: started
Starting ProFTP
proftpd: started
Starting Galaxy reports webapp
reports: started
Starting nodejs
galaxy:galaxy_nodejs_proxy: started
Starting condor
condor: started
Starting slurmctld
Starting slurmd
docker: ERROR (spawn error)
Creating admin user admin@galaxy.org with key admin and password admin if not existing
Traceback (most recent call last):
  File "/usr/local/bin/create_galaxy_user.py", line 60, in <module>
    add_user(sa_session, security_agent, options.user, options.password, key=options.key, username=options.username)
  File "/usr/local/bin/create_galaxy_user.py", line 36, in add_user
    sa_session.flush()
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/scoping.py", line 153, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2254, in flush
    self._flush(objects)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2380, in _flush
    transaction.rollback(_capture_exception=True)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/session.py", line 2344, in _flush
    flush_context.execute()
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 391, in execute
    rec.execute(self)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/unitofwork.py", line 556, in execute
    uow
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 181, in save_obj
    mapper, table, insert)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/orm/persistence.py", line 866, in _emit_insert_statements
    execute(statement, params)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
    return meth(self, multiparams, params)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
    context)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
    exc_info
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
    context)
  File "/galaxy_venv/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) duplicate key value violates unique constraint "ix_api_keys_key"
DETAIL:  Key (key)=(admin) already exists.
 [SQL: 'INSERT INTO api_keys (create_time, user_id, key) VALUES (%(create_time)s, %(user_id)s, %(key)s) RETURNING api_keys.id'] [parameters: {'create_time': datetime.datetime(2019, 1, 28, 13, 47, 2, 421479), 'user_id': 2, 'key': 'admin'}] (Background on this error at: http://sqlalche.me/e/gkpj)
==> /home/galaxy/logs/handler0.log <==
galaxy.jobs.handler DEBUG 2019-01-28 13:46:58,067 Loaded job runners plugins: local:slurm
galaxy.jobs.handler INFO 2019-01-28 13:46:58,068 job handler stop queue started
galaxy.jobs.handler DEBUG 2019-01-28 13:46:58,068 Handler queue starting for jobs assigned to handler: handler0
galaxy.web.stack.message DEBUG 2019-01-28 13:46:58,084 Bound default message handler 'JobHandlerMessage.default_handler' to <bound method JobHandlerQueue.default_handler of <galaxy.jobs.handler.JobHandlerQueue object at 0x7fee58b7ad10>>
galaxy.jobs.handler INFO 2019-01-28 13:46:58,084 job handler queue started
galaxy.jobs.handler INFO 2019-01-28 13:46:58,099 job handler stop queue started
galaxy.workflow.scheduling_manager DEBUG 2019-01-28 13:46:58,100 Starting workflow schedulers
galaxy.app INFO 2019-01-28 13:46:58,105 Galaxy app startup finished (30716.767 ms)
galaxy.queue_worker INFO 2019-01-28 13:46:58,106 Binding and starting galaxy control worker for handler0
galaxy.web.stack INFO 2019-01-28 13:46:58,111 Galaxy server instance 'handler0' is running
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/handler0.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/handler1.log <==
galaxy.jobs.handler DEBUG 2019-01-28 13:46:58,111 Loaded job runners plugins: local:slurm
galaxy.jobs.handler INFO 2019-01-28 13:46:58,112 job handler stop queue started
galaxy.jobs.handler DEBUG 2019-01-28 13:46:58,112 Handler queue starting for jobs assigned to handler: handler1
galaxy.web.stack.message DEBUG 2019-01-28 13:46:58,127 Bound default message handler 'JobHandlerMessage.default_handler' to <bound method JobHandlerQueue.default_handler of <galaxy.jobs.handler.JobHandlerQueue object at 0x7f553b6e8890>>
galaxy.jobs.handler INFO 2019-01-28 13:46:58,127 job handler queue started
galaxy.jobs.handler INFO 2019-01-28 13:46:58,130 job handler stop queue started
galaxy.workflow.scheduling_manager DEBUG 2019-01-28 13:46:58,136 Starting workflow schedulers
galaxy.app INFO 2019-01-28 13:46:58,150 Galaxy app startup finished (30727.241 ms)
galaxy.queue_worker INFO 2019-01-28 13:46:58,150 Binding and starting galaxy control worker for handler1
galaxy.web.stack INFO 2019-01-28 13:46:58,157 Galaxy server instance 'handler1' is running
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/handler1.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/reports.log <==
galaxy.web.framework.base DEBUG 2019-01-28 13:46:33,535 Enabling 'users' controller, class: Users
galaxy.web.framework.base DEBUG 2019-01-28 13:46:33,535 Enabling 'jobs' controller, class: Jobs
galaxy.web.framework.base DEBUG 2019-01-28 13:46:33,537 Enabling 'tools' controller, class: Tools
galaxy.webapps.util DEBUG 2019-01-28 13:46:33,538 Enabling 'paste.httpexceptions' middleware
galaxy.webapps.util DEBUG 2019-01-28 13:46:33,539 Enabling 'RecursiveMiddleware' middleware
galaxy.webapps.util DEBUG 2019-01-28 13:46:33,541 Enabling 'ErrorMiddleware' middleware
galaxy.webapps.util DEBUG 2019-01-28 13:46:33,541 Enabling 'TransLogger' middleware
galaxy.webapps.util DEBUG 2019-01-28 13:46:33,542 Enabling 'XForwardedHostMiddleware' middleware
Starting server in PID 301.
serving on http://127.0.0.1:9001
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/reports.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/slurmctld.log <==
[2019-01-28T13:46:59.637] No trigger state file (/tmp/slurm/trigger_state.old) to recover
[2019-01-28T13:46:59.637] error: Incomplete trigger data checkpoint file
[2019-01-28T13:46:59.637] read_slurm_conf: backup_controller not specified.
[2019-01-28T13:46:59.637] Reinitializing job accounting state
[2019-01-28T13:46:59.637] cons_res: select_p_reconfigure
[2019-01-28T13:46:59.637] cons_res: select_p_node_init
[2019-01-28T13:46:59.637] cons_res: preparing for 1 partitions
[2019-01-28T13:46:59.637] Running as primary controller
[2019-01-28T13:46:59.640] error: slurm_receive_msg: Zero Bytes were transmitted or received
[2019-01-28T13:46:59.650] error: slurm_receive_msg: Zero Bytes were transmitted or received
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/slurmctld.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/slurmd.log <==
[2019-01-28T13:46:59.638] Node configuration differs from hardware: CPUs=4:4(hw) Boards=1:1(hw) SocketsPerBoard=4:1(hw) CoresPerSocket=1:4(hw) ThreadsPerCore=1:1(hw)
[2019-01-28T13:46:59.638] Gathering cpu frequency information for 4 cpus
[2019-01-28T13:46:59.639] slurmd version 2.6.5 started
[2019-01-28T13:46:59.639] slurmd started on Mon, 28 Jan 2019 13:46:59 +0000
[2019-01-28T13:46:59.639] CPUs=4 Boards=1 Sockets=4 Cores=1 Threads=1 Memory=7858 TmpDisk=3929 Uptime=1077
[2019-01-28T13:46:59.640] error: Munge encode failed: Failed to access "/var/run/munge/munge.socket.2": No such file or directory (retrying ...)
[2019-01-28T13:46:59.640] error: Munge encode failed: Failed to access "/var/run/munge/munge.socket.2": No such file or directory (retrying ...)
[2019-01-28T13:46:59.640] error: Munge encode failed: Failed to access "/var/run/munge/munge.socket.2": No such file or directory
[2019-01-28T13:46:59.640] error: authentication: Socket communication error
[2019-01-28T13:46:59.640] error: Unable to register: Protocol authentication error
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/slurmd.log’. please report this to bug-coreutils@gnu.org. reverting to polling

==> /home/galaxy/logs/uwsgi.log <==
serving on http://127.0.0.1:8080
serving on uwsgi://127.0.0.1:4001
galaxy.web.stack DEBUG 2019-01-28 13:46:58,503 [p:440,w:2,m:0] [MainThread] Calling postfork function: <bound method Thread.start of <Thread(ToolConfWatcher.thread, initial daemon)>>
galaxy.web.stack DEBUG 2019-01-28 13:46:58,503 [p:440,w:2,m:0] [MainThread] Calling postfork function: <bound method Thread.start of <Thread(ToolConfWatcher.thread, initial daemon)>>
galaxy.web.stack DEBUG 2019-01-28 13:46:58,504 [p:440,w:2,m:0] [MainThread] Calling postfork function: <bound method JobManager.start of <galaxy.jobs.manager.JobManager object at 0x7fc051e57850>>
galaxy.web.stack DEBUG 2019-01-28 13:46:58,504 [p:440,w:2,m:0] [MainThread] Calling postfork function: <bound method UWSGIApplicationStack.start of <galaxy.web.stack.UWSGIApplicationStack object at 0x7fc07288cc50>>
galaxy.web.stack DEBUG 2019-01-28 13:46:58,504 [p:440,w:2,m:0] [MainThread] Calling postfork function: <function register at 0x7fc077ad1ed8>
galaxy.web.stack DEBUG 2019-01-28 13:46:58,504 [p:440,w:2,m:0] [MainThread] Calling postfork function: <function postfork_setup at 0x7fc05967f7d0>
galaxy.queue_worker INFO 2019-01-28 13:46:58,504 [p:440,w:2,m:0] [MainThread] Binding and starting galaxy control worker for main.web.2
galaxy.web.stack INFO 2019-01-28 13:46:58,505 [p:440,w:2,m:0] [MainThread] Galaxy server instance 'main.web.2' is running
tail: unrecognized file system type 0x794c7630 for ‘/home/galaxy/logs/uwsgi.log’. please report this to bug-coreutils@gnu.org. reverting to polling
bgruening commented 5 years ago

That looks ok. So it's working now? Please let me know how the teaching goes - this is super interesting for me :)

Thanks @markdunning!

markdunning commented 5 years ago

Yes, it seems to be. Thanks for the fix. The course is a few months away, but I will keep you posted.

However, in a few weeks' time I'll be doing an RNA-seq course, hopefully using the galaxy-rna-workbench, and that image seems to have the same issue of not having reference genomes. I'll open an issue in that repo...
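
For that course I'd be starting the workbench the same way as here, i.e. roughly this (image name taken from the repository name, so double-check it on Docker Hub):

docker run -d -p 8080:80 --privileged bgruening/galaxy-rna-workbench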

bgruening commented 5 years ago

> However, in a few weeks' time I'll be doing an RNA-seq course, hopefully using the galaxy-rna-workbench, and that image seems to have the same issue of not having reference genomes. I'll open an issue in that repo...

Which version are you using? I'm currently working on it. Please also have a look at http://rna.usegalaxy.eu and https://galaxyproject.eu/tiaas.

markdunning commented 5 years ago

Thanks! I'll definitely check out the links. It sounds like a fantastic resource.