frohro closed this issue 5 years ago.
It doesn't seem to be specific to one course; I see the same thing in all courses.
Here is the whole log since restarting the server a few minutes ago (log_lnks_not_working.log), which did no good.
I tried `make update` and `make reindex-courses`, and the latter failed with these errors:
frohro@fweb:~/openedx-docker/deploy/local$ make stop
docker-compose rm --stop --force
Stopping local_lms_worker_1 ... done
Stopping local_nginx_1 ... done
Stopping local_cms_worker_1 ... done
Stopping local_lms_1 ... done
Stopping local_cms_1 ... done
Stopping local_forum_1 ... done
Stopping local_xqueue_consumer_1 ... done
Stopping local_xqueue_1 ... done
Stopping local_notes_1 ... done
Stopping local_memcached_1 ... done
Stopping local_elasticsearch_1 ... done
Stopping local_mongodb_1 ... done
Stopping local_mysql_1 ... done
Stopping local_smtp_1 ... done
Stopping local_rabbitmq_1 ... done
Going to remove local_lms_worker_1, local_nginx_1, local_cms_worker_1, local_lms_1, local_cms_1, local_forum_1, local_xqueue_consumer_1, local_xqueue_1, local_notes_1, local_memcached_1, local_openedx-assets_1, local_elasticsearch_1, local_mongodb_1, local_mysql_1, local_smtp_1, local_rabbitmq_1
Removing local_lms_worker_1 ... done
Removing local_nginx_1 ... done
Removing local_cms_worker_1 ... done
Removing local_lms_1 ... done
Removing local_cms_1 ... done
Removing local_forum_1 ... done
Removing local_xqueue_consumer_1 ... done
Removing local_xqueue_1 ... done
Removing local_notes_1 ... done
Removing local_memcached_1 ... done
Removing local_openedx-assets_1 ... done
Removing local_elasticsearch_1 ... done
Removing local_mongodb_1 ... done
Removing local_mysql_1 ... done
Removing local_smtp_1 ... done
Removing local_rabbitmq_1 ... done
frohro@fweb:~/openedx-docker/deploy/local$ make update
docker-compose pull
Pulling memcached ... done
Pulling mongodb ... done
Pulling mysql ... done
Pulling elasticsearch ... done
Pulling openedx-assets ... done
Pulling rabbitmq ... done
Pulling smtp ... done
Pulling forum ... done
Pulling lms ... done
Pulling cms ... done
Pulling lms_worker ... done
Pulling cms_worker ... done
Pulling notes ... done
Pulling nginx ... done
Pulling xqueue ... done
Pulling xqueue_consumer ... done
frohro@fweb:~/openedx-docker/deploy/local$ make reindex-courses
docker-compose run --rm cms ./manage.py cms reindex_course --all --setup
Creating local_smtp_1 ... done
Creating local_rabbitmq_1 ... done
Creating local_memcached_1 ... done
Creating local_mongodb_1 ... done
Creating local_mysql_1 ... done
2019-02-04 06:28:51,968 WARNING 1 [elasticsearch] base.py:82 - HEAD http://elasticsearch:9200/courseware_index [status:N/A request:0.015s]
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 78, in perform_request
response = self.pool.urlopen(method, url, body, retries=False, headers=self.headers, **kw)
File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 638, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/local/lib/python2.7/dist-packages/urllib3/util/retry.py", line 343, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 600, in urlopen
chunked=chunked)
File "/usr/local/lib/python2.7/dist-packages/urllib3/connectionpool.py", line 354, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/usr/lib/python2.7/httplib.py", line 1057, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.7/httplib.py", line 1097, in _send_request
self.endheaders(body)
File "/usr/lib/python2.7/httplib.py", line 1053, in endheaders
self._send_output(message_body)
File "/usr/lib/python2.7/httplib.py", line 897, in _send_output
self.send(msg)
File "/usr/lib/python2.7/httplib.py", line 859, in send
self.connect()
File "/usr/local/lib/python2.7/dist-packages/urllib3/connection.py", line 196, in connect
conn = self._new_conn()
File "/usr/local/lib/python2.7/dist-packages/urllib3/connection.py", line 180, in _new_conn
self, "Failed to establish a new connection: %s" % e)
NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f513abdc5d0>: Failed to establish a new connection: [Errno -2] Name or service not known
[... the same HEAD warning and NewConnectionError traceback repeat three more times as the client retries ...]
2019-02-04 06:28:51,981 ERROR 1 [root] reindex_course.py:78 - Search Engine error - ConnectionError(<urllib3.connection.HTTPConnection object at 0x7f513abdc5d0>: Failed to establish a new connection: [Errno -2] Name or service not known) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7f513abdc5d0>: Failed to establish a new connection: [Errno -2] Name or service not known)
Traceback (most recent call last):
File "/openedx/edx-platform/cms/djangoapps/contentstore/management/commands/reindex_course.py", line 76, in handle
searcher = SearchEngine.get_search_engine(index_name)
File "/usr/local/lib/python2.7/dist-packages/search/search_engine_base.py", line 50, in get_search_engine
return search_engine_class(index=index) if search_engine_class else None
File "/usr/local/lib/python2.7/dist-packages/search/elastic.py", line 276, in __init__
if not self._es.indices.exists(index=self.index_name):
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 69, in _wrapped
return func(*args, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/indices.py", line 224, in exists
params=params)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 307, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_urllib3.py", line 89, in perform_request
raise ConnectionError('N/A', str(e), e)
ConnectionError: ConnectionError(<urllib3.connection.HTTPConnection object at 0x7f513abdc5d0>: Failed to establish a new connection: [Errno -2] Name or service not known) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x7f513abdc5d0>: Failed to establish a new connection: [Errno -2] Name or service not known)
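For what it's worth, `[Errno -2] Name or service not known` means the cms container could not resolve the hostname `elasticsearch` at all (a DNS failure inside the compose network), not that Elasticsearch refused a connection. A minimal standard-library sketch of that distinction (the hostname used below is illustrative; nothing here is Open edX-specific):

```python
import socket

def can_resolve(hostname):
    """Return True if `hostname` resolves in this network namespace.

    A resolution failure surfaces as socket.gaierror, which urllib3
    reports as "[Errno -2] Name or service not known". A *refused*
    connection to a resolvable host would instead fail later, at
    connect time, with ECONNREFUSED.
    """
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False
```

Run inside the cms container, `can_resolve("elasticsearch")` would come back False here simply because the elasticsearch container was never started for the one-off `docker-compose run`.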
frohro@fweb:~/openedx-docker/deploy/local$ make daemonize
docker-compose up -d
local_rabbitmq_1 is up-to-date
Creating local_elasticsearch_1 ...
local_smtp_1 is up-to-date
Creating local_openedx-assets_1 ...
local_mysql_1 is up-to-date
Creating local_elasticsearch_1 ... done
Creating local_openedx-assets_1 ... done
Creating local_xqueue_1 ... done
Creating local_notes_1 ... done
Creating local_xqueue_consumer_1 ... done
Creating local_cms_1 ... done
Creating local_forum_1 ... done
Creating local_cms_worker_1 ... done
Creating local_lms_1 ... done
Creating local_nginx_1 ... done
Creating local_lms_worker_1 ... done
Daemon is up and running
frohro@fweb:~/openedx-docker/deploy/local$
Though `make daemonize` seems to work, the problem persists. Rob
I should also mention that `make assets` did not help.
Hmmmm, it seems to me that Elasticsearch is overwhelmed by the grade generation task. Can you please:
make stop
disable the ENABLE_GRADE_DOWNLOADS feature in env/openedx/config/lms.env.json
make daemon
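For reference, a hypothetical sketch of what disabling that flag in env/openedx/config/lms.env.json might look like (the key placement under FEATURES is assumed; your file's surrounding keys will differ):

```json
{
  "FEATURES": {
    "ENABLE_GRADE_DOWNLOADS": false
  }
}
```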
I did that with no effect last night.
Thanks!
Rob
From: Régis Behmo (notifications@github.com), Sent: Sunday, February 3, 2019 11:17:33 PM, Subject: Re: [regisb/tutor] Links within LMS not responding (#148)
Ok, let's go with the nuclear option.
make stop
sudo rm -rf data/rabbitmq/mnesia
make daemon
I stopped, moved the mnesia directory to a backup location, and ran `make daemonize`, but with no effect.
Thanks for looking at this for me. I did upload my course material to edunext.co, but not the data. At least the students have something to look at, and can even answer the questions, saving them with screenshots for the time when I get the regular site up and going again. :-)
What is the output of `docker-compose logs elasticsearch`?
$ docker-compose logs -f elasticsearch
Attaching to local_elasticsearch_1
elasticsearch_1 | [2019-02-04 15:04:18,671][INFO ][node ] [Evilhawk] version[1.5.2], pid[1], build[62ff986/2015-04-27T09:21:06Z]
elasticsearch_1 | [2019-02-04 15:04:18,672][INFO ][node ] [Evilhawk] initializing ...
elasticsearch_1 | [2019-02-04 15:04:18,739][INFO ][plugins ] [Evilhawk] loaded [], sites []
elasticsearch_1 | [2019-02-04 15:04:25,347][INFO ][node ] [Evilhawk] initialized
elasticsearch_1 | [2019-02-04 15:04:25,348][INFO ][node ] [Evilhawk] starting ...
elasticsearch_1 | [2019-02-04 15:04:25,542][INFO ][transport ] [Evilhawk] bound_address {inet[/0.0.0.0:9300]}, publish_address {inet[/172.18.0.5:9300]}
elasticsearch_1 | [2019-02-04 15:04:25,632][INFO ][discovery ] [Evilhawk] elasticsearch/4ew6fuADSz6fUxbPVWbfgg
elasticsearch_1 | [2019-02-04 15:04:29,533][INFO ][cluster.service ] [Evilhawk] new_master [Evilhawk][4ew6fuADSz6fUxbPVWbfgg][3d4a3c7ec534][inet[/172.18.0.5:9300]], reason: zen-disco-join (elected_as_master)
elasticsearch_1 | [2019-02-04 15:04:29,680][INFO ][http ] [Evilhawk] bound_address {inet[/0.0.0.0:9200]}, publish_address {inet[/172.18.0.5:9200]}
elasticsearch_1 | [2019-02-04 15:04:29,680][INFO ][node ] [Evilhawk] started
elasticsearch_1 | [2019-02-04 15:04:31,166][INFO ][gateway ] [Evilhawk] recovered [13] indices into cluster_state
The CMS service was not configured to depend on elasticsearch, which causes the elasticsearch service not to be started on `reindex-courses`. Please run:
make stop
git pull
make env
make reindex-courses
make daemon
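For context, the fix pulled in by that `git pull` is presumably a `depends_on` entry in the compose file: one-off `docker-compose run` commands start the listed dependencies first (unless `--no-deps` is given), which is why the reindex would stop failing with the DNS error. A hypothetical sketch, with service names taken from the logs above and everything else elided:

```yaml
services:
  cms:
    image: regis/openedx:hawthorn
    depends_on:
      - elasticsearch
      - memcached
      - mongodb
      - mysql
      - rabbitmq
  elasticsearch:
    image: elasticsearch:1.5.2
```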
You can experience the issue at the LMS at https://edx.fweb.wallawalla.edu/ if that helps. :-) You will probably have to sign up to see it.
No change. Still unresponsive to links, after the `make stop`, `git pull`, `make env`, `make reindex-courses`, and `make daemon`.
The `make reindex-courses` worked this time, though!
The root problem is that some js files cannot be properly loaded. For instance, https://edx.fweb.wallawalla.edu/static/bundles/commons.d3a94a87e2e57a61d594.7ce261227eb4.js results in a 404 error. IMHO, this has nothing to do with the grades download feature.
Please update the docker images with:
make stop
make update
make daemon
I did that before, and just now again, but no help. Links still don't respond.
What is the result of `ls -lh /path/to/tutor/data/openedx/staticfiles/bundles/commons*`? (Replace `/path/to/tutor` with the right value.)
```
frohro@fweb:~/openedx-docker$ ls -lah data/openedx/staticfiles/bundles/commons*
-rw-r--r-- 1 root root 1.6M Feb 4 08:36 data/openedx/staticfiles/bundles/commons.d60dcd98c024881d011e.c835e91d09f6.js
-rw-r--r-- 1 root root 1.6M Feb 4 08:36 data/openedx/staticfiles/bundles/commons.d60dcd98c024881d011e.js
-rw-r--r-- 1 root root 1.6M Feb 4 08:36 data/openedx/staticfiles/bundles/commons.f59dc10149c0.js
-rw-r--r-- 1 root root 1.6M Feb 4 08:36 data/openedx/staticfiles/bundles/commons.js
frohro@fweb:~/openedx-docker$
```
I noticed that the mnesia directory and this one are owned by root. I suppose that is normal, but I don't know.
Thanks for the assistance!
Rob
frohro@fweb:~/openedx-docker$ ls -lah data
total 64K
drwxrwxr-x 15 frohro frohro 4.0K Jan 4 15:38 .
drwxrwxr-x 10 frohro frohro 4.0K Feb 3 14:33 ..
drwxr-xr-x 2 root root 4.0K Nov 7 09:22 android
drwxr-xr-x 6 frohro root 4.0K Oct 2 09:57 cms
drwxr-xr-x 3 uuidd netdev 4.0K Oct 2 09:13 elasticsearch
-rw-rw-r-- 1 frohro frohro 125 Jan 4 13:48 .gitignore
drwxr-xr-x 9 root root 4.0K Nov 11 10:35 letsencrypt
drwxr-xr-x 7 frohro root 4.0K Nov 27 14:43 lms
drwxr-xr-x 3 999 root 4.0K Feb 4 09:18 mongodb
drwxr-xr-x 7 999 docker 4.0K Feb 4 08:36 mysql
drwxr-xr-x 2 root root 4.0K Oct 3 08:40 notes
drwxr-xr-x 3 root root 4.0K Feb 4 08:36 openedx
drwxr-xr-x 5 root root 4.0K Dec 12 20:54 portainer
drwxr-xr-x 3 999 root 4.0K Feb 4 07:04 rabbitmq
drwxr-xr-x 2 root root 4.0K Dec 12 20:43 themes
drwxr-xr-x 2 root root 4.0K Feb 3 20:32 xqueue
frohro@fweb:~/openedx-docker$
Can you also paste here your apache configuration?
apache2.conf.log (I added the .log to the filename so I could upload it here; the real name is apache2.conf.) Is this what you wanted?
edx.fweb.wallawalla.edu.conf.log This might also be useful.
Some things from /var/log/apache2/error.log
[Mon Feb 04 07:02:39.130073 2019] [proxy:error] [pid 4259] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed
[Mon Feb 04 07:02:39.130121 2019] [proxy_http:error] [pid 4259] [client 174.127.180.10:60360] AH01114: HTTP: failed to make connection to backend: localhost, referer: https://edx.fweb.wallawalla.edu/courses/course-v1:wallawalla+ENGR356+2018/instructor
[Mon Feb 04 07:02:47.138723 2019] [proxy:error] [pid 24277] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed
[Mon Feb 04 07:02:47.138757 2019] [proxy_http:error] [pid 24277] [client 174.127.180.10:60362] AH01114: HTTP: failed to make connection to backend: localhost, referer: https://edx.fweb.wallawalla.edu/courses/course-v1:wallawalla+ENGR356+2018/instructor
[Mon Feb 04 07:02:47.145681 2019] [proxy:error] [pid 9839] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed
[Mon Feb 04 07:02:47.145717 2019] [proxy_http:error] [pid 9839] [client 174.127.180.10:60364] AH01114: HTTP: failed to make connection to backend: localhost, referer: https://edx.fweb.wallawalla.edu/courses/course-v1:wallawalla+ENGR356+2018/instructor
[Mon Feb 04 07:02:50.139328 2019] [proxy:error] [pid 26081] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed
[Mon Feb 04 07:02:50.139372 2019] [proxy_http:error] [pid 26081] [client 174.127.180.10:60366] AH01114: HTTP: failed to make connection to backend: localhost, referer: https://edx.fweb.wallawalla.edu/courses/course-v1:wallawalla+ENGR356+2018/instructor
Some more:
[Mon Feb 04 07:02:59.143380 2019] [proxy:error] [pid 4259] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed
[Mon Feb 04 07:02:59.143423 2019] [proxy_http:error] [pid 4259] [client 174.127.180.10:60368] AH01114: HTTP: failed to make connection to backend: localhost, referer: https://edx.fweb.wallawalla.edu/courses/course-v1:wallawalla+ENGR356+2018/instructor
[Mon Feb 04 07:03:07.136300 2019] [proxy:error] [pid 24277] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed
[Mon Feb 04 07:03:07.136347 2019] [proxy_http:error] [pid 24277] [client 174.127.180.10:60370] AH01114: HTTP: failed to make connection to backend: localhost, referer: https://edx.fweb.wallawalla.edu/courses/course-v1:wallawalla+ENGR356+2018/instructor
[Mon Feb 04 07:03:07.143255 2019] [proxy:error] [pid 25988] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed
[Mon Feb 04 07:03:07.143301 2019] [proxy_http:error] [pid 25988] [client 174.127.180.10:60372] AH01114: HTTP: failed to make connection to backend: localhost, referer: https://edx.fweb.wallawalla.edu/courses/course-v1:wallawalla+ENGR356+2018/instructor
and
[Mon Feb 04 07:03:30.123861 2019] [proxy:error] [pid 6137] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed
[Mon Feb 04 07:03:30.123903 2019] [proxy_http:error] [pid 6137] [client 174.127.180.10:60382] AH01114: HTTP: failed to make connection to backend: localhost, referer: https://edx.fweb.wallawalla.edu/courses/course-v1:wallawalla+ENGR356+2018/instructor
[Mon Feb 04 07:03:39.142755 2019] [proxy:error] [pid 26081] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed
[Mon Feb 04 07:03:39.142796 2019] [proxy_http:error] [pid 26081] [client 174.127.180.10:60388] AH01114: HTTP: failed to make connection to backend: localhost, referer: https://edx.fweb.wallawalla.edu/courses/course-v1:wallawalla+ENGR356+2018/instructor
and
[Mon Feb 04 07:03:47.138615 2019] [proxy:error] [pid 25988] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed [Mon Feb 04 07:03:47.138651 2019] [proxy_http:error] [pid 25988] [client 174.127.180.10:60402] AH01114: HTTP: failed to make connection to backend: localhost, referer: https://edx.fweb.wallawalla.edu/courses/course-v1:wallawalla+ENGR356+2018/instructor
and
[Mon Feb 04 07:03:47.415506 2019] [proxy:error] [pid 4259] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed
[Mon Feb 04 07:03:47.415540 2019] [proxy_http:error] [pid 4259] [client 174.127.180.10:60404] AH01114: HTTP: failed to make connection to backend: localhost, referer: https://edx.fweb.wallawalla.edu/courses/course-v1:wallawalla+ENGR356+2018/instructor
[Mon Feb 04 07:03:50.129779 2019] [proxy:error] [pid 6137] (111)Connection refused: AH00957: HTTP: attempt to connect to 127.0.0.1:8080 (localhost) failed
[Mon Feb 04 07:03:50.129827 2019] [proxy_http:error] [pid 6137] [client 174.127.180.10:60412] AH01114: HTTP: failed to make connection to backend: localhost, referer: https://edx.fweb.wallawalla.edu/courses/course-v1:wallawalla+ENGR356+2018/instructor
and more of the same....
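As an aside, `(111)Connection refused` means Apache reached the address fine but nothing was listening on 127.0.0.1:8080, the host port that the local nginx container publishes. A hypothetical minimal vhost of the kind Apache is presumably running here (the real edx.fweb.wallawalla.edu.conf will have more in it):

```apache
# Hypothetical sketch: Apache terminates TLS and proxies everything to the
# nginx container that docker-compose publishes on 127.0.0.1:8080.
<VirtualHost *:443>
    ServerName edx.fweb.wallawalla.edu
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```

If the nginx container is down, or a stale container interferes with it, every request through such a proxy fails with exactly the AH00957/AH01114 pairs seen above.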
I looked, and these errors started when the issue started (as close as I can tell); I don't have them previously. I may have done a `sudo apt upgrade` before they happened, but I can't be absolutely certain at this point when that was. The first time this issue happened, I did a `make stop` and `git pull`, because I thought it was issue #136: after changing the lms.env.json file, `make all` hung as described in #136, so I hit Ctrl-C, ran `make daemonize`, and the site started and was working properly. I really think the apt upgrade was sometime before all this, but I can't be absolutely certain. Later in the day I noticed the problem had returned, so I filed this issue and started working on it in earnest. :-)
Something is incorrectly requiring the commons.d3a94a87e2e57a61d594.7ce261227eb4.js file, and I don't know what or why.
What is the output of `grep d3a94a87 /path/to/tutor/data/openedx/staticfiles/webpack-stats.json`?
Nothing found.
Do you want the webpack-stats.json? Looks like it's about bedtime for you over on that side of the pond. :-) I really appreciate your help, but I don't want to keep you up.
Indeed, it was bed time :)
Can you please paste the output of the following two commands?
docker-compose run --rm lms grep commons.d3a94a87e2e57a61d594 /openedx/staticfiles/webpack-stats.json
docker-compose run --rm lms grep commons.d3a94a87e2e57a61d594 /openedx/staticfiles/studio/webpack-stats.json
frohro@fweb:~/openedx-docker/deploy/local$ docker-compose run --rm lms grep commons.d3a94a87e2e57a61d594 /openedx/staticfiles/webpack-stats.json
Starting local_elasticsearch_1 ... done
Starting local_smtp_1 ... done
Starting local_mongodb_1 ... done
Starting local_mysql_1 ... done
Starting local_rabbitmq_1 ... done
Starting local_forum_1 ... done
frohro@fweb:~/openedx-docker/deploy/local$
and the other one:
frohro@fweb:~/openedx-docker/deploy/local$ docker-compose run --rm lms grep commons.d3a94a87e2e57a61d594 /openedx/staticfiles/webpack-stats.json
Starting local_rabbitmq_1 ... done
Starting local_mysql_1 ... done
Starting local_elasticsearch_1 ... done
Starting local_mongodb_1 ... done
Starting local_smtp_1 ... done
Starting local_forum_1 ... done
Starting local_elasticsearch_1 ... done
frohro@fweb:~/openedx-docker/deploy/local$ docker-compose run --rm lms grep commons.d3a94a87e2e57a61d594 /openedx/staticfiles/studio/webpack-stats.json
Starting local_mysql_1 ... done
Starting local_smtp_1 ... done
Starting local_rabbitmq_1 ... done
Starting local_mongodb_1 ... done
Starting local_forum_1 ... done
frohro@fweb:~/openedx-docker/deploy/local$
There is no change in the behavior.
It's past my bedtime now. I'll be up in about six hours. :-) Thanks, Rob
Breakfast time!
What is the output of these two commands:
docker ps
docker inspect local_memcached_1
frohro@fweb:~/openedx-docker/deploy/local$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
23fe91a4ff57 regis/openedx:hawthorn "docker-entrypoint.s…" 21 hours ago Up 21 hours 8000/tcp local_lms_worker_1
f256ab7e244a nginx:1.13 "nginx -g 'daemon of…" 21 hours ago Up 21 hours 0.0.0.0:8080->80/tcp, 0.0.0.0:8443->443/tcp local_nginx_1
4c60050b04ce regis/openedx:hawthorn "docker-entrypoint.s…" 21 hours ago Up 21 hours 8000/tcp local_cms_worker_1
078dfd56bba8 regis/openedx:hawthorn "docker-entrypoint.s…" 21 hours ago Up 21 hours 8000/tcp local_lms_1
1b20308a6a27 regis/openedx-forum:hawthorn "/bin/sh -c './bin/u…" 21 hours ago Up 21 hours 4567/tcp local_forum_1
c5ae80ab7b18 regis/openedx:hawthorn "docker-entrypoint.s…" 21 hours ago Up 21 hours 8000/tcp local_cms_1
7d9dc7e62d1a regis/openedx-xqueue:hawthorn "/bin/sh -c 'gunicor…" 21 hours ago Up 21 hours 8040/tcp local_xqueue_1
8efbb56a0c62 regis/openedx-xqueue:hawthorn "./manage.py run_con…" 21 hours ago Restarting (0) 29 seconds ago local_xqueue_consumer_1
2f185fc04400 regis/openedx-notes:hawthorn "/bin/sh -c 'gunicor…" 21 hours ago Up 21 hours 8000/tcp local_notes_1
13b48cdea664 mongo:3.2.16 "docker-entrypoint.s…" 21 hours ago Up 21 hours 27017/tcp local_mongodb_1
ae27be730a28 mysql:5.6.36 "docker-entrypoint.s…" 21 hours ago Up 21 hours 3306/tcp local_mysql_1
4ea9b20f65bc memcached:1.4.38 "docker-entrypoint.s…" 21 hours ago Up 21 hours 11211/tcp local_memcached_1
b938ffc8858e namshi/smtp "/bin/entrypoint.sh …" 21 hours ago Up 21 hours 25/tcp local_smtp_1
693425debc82 rabbitmq:3.6.10 "docker-entrypoint.s…" 21 hours ago Up 21 hours 4369/tcp, 5671-5672/tcp, 25672/tcp local_rabbitmq_1
c45396906d26 elasticsearch:1.5.2 "/docker-entrypoint.…" 21 hours ago Up 21 hours 9200/tcp, 9300/tcp local_elasticsearch_1
1718a04d1c71 ff9d0d82e40d "docker-entrypoint.s…" 7 weeks ago Restarting (1) 31 seconds ago openedx-docker_cms_worker_1
acd2b8bf529c ff9d0d82e40d "docker-entrypoint.s…" 7 weeks ago Restarting (1) 30 seconds ago openedx-docker_lms_worker_1
c0ab6fe6d334 ff9d0d82e40d "docker-entrypoint.s…" 7 weeks ago Restarting (3) 52 seconds ago openedx-docker_cms_1
6d3a1c030db4 ff9d0d82e40d "docker-entrypoint.s…" 7 weeks ago Restarting (3) 51 seconds ago openedx-docker_lms_1
4bbd3a19c5cd 2ceaf452d923 "./manage.py run_con…" 7 weeks ago Restarting (2) 4 seconds ago openedx-docker_xqueue_consumer_1
f3334f3fc5e0 e796675eda33 "/bin/sh -c './bin/u…" 7 weeks ago Up 27 seconds 4567/tcp openedx-docker_forum_1
frohro@fweb:~/openedx-docker/deploy/local$
frohro@fweb:~/openedx-docker/deploy/local$ docker inspect local_memcached_1
[
{
"Id": "4ea9b20f65bc68a910c7e5ff1ad9896b956d8399854add0180ed635023d33644",
"Created": "2019-02-04T16:36:30.453997679Z",
"Path": "docker-entrypoint.sh",
"Args": [
"memcached"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 31404,
"ExitCode": 0,
"Error": "",
"StartedAt": "2019-02-04T16:36:35.313305994Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:add80e4f412033ab9689848b423df58c62a7e5c99b59c93ac47610875265365b",
"ResolvConfPath": "/var/lib/docker/containers/4ea9b20f65bc68a910c7e5ff1ad9896b956d8399854add0180ed635023d33644/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/4ea9b20f65bc68a910c7e5ff1ad9896b956d8399854add0180ed635023d33644/hostname",
"HostsPath": "/var/lib/docker/containers/4ea9b20f65bc68a910c7e5ff1ad9896b956d8399854add0180ed635023d33644/hosts",
"LogPath": "/var/lib/docker/containers/4ea9b20f65bc68a910c7e5ff1ad9896b956d8399854add0180ed635023d33644/4ea9b20f65bc68a910c7e5ff1ad9896b956d8399854add0180ed635023d33644-json.log",
"Name": "/local_memcached_1",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": [],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "local_default",
"PortBindings": {},
"RestartPolicy": {
"Name": "unless-stopped",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/74804fc166c8a226cf61ee8d2cbc03ed83916658a27287f785a097276526d222-init/diff:/var/lib/docker/overlay2/cb00bd34f607fdf4fab053c3321ea5a8034a0f6c5ae6bc1e0b9a49f5539fc2c3/diff:/var/lib/docker/overlay2/dec190ce1ccfa03d0f0290ac9f5f2f365b553e0a583d2862e39739048d8f9a09/diff:/var/lib/docker/overlay2/fa1199fff04ad902711321bc72a0d6957631f65a57ec3644d8922f7b0553f1c3/diff:/var/lib/docker/overlay2/ee5aece8423257bf6a2665c7a510c112868d587fd976f32337516ef88c223616/diff:/var/lib/docker/overlay2/e90b244d0e6e70bfb579ce376a58a2d70dce5721903e8bb30142e37aa7e5ef23/diff",
"MergedDir": "/var/lib/docker/overlay2/74804fc166c8a226cf61ee8d2cbc03ed83916658a27287f785a097276526d222/merged",
"UpperDir": "/var/lib/docker/overlay2/74804fc166c8a226cf61ee8d2cbc03ed83916658a27287f785a097276526d222/diff",
"WorkDir": "/var/lib/docker/overlay2/74804fc166c8a226cf61ee8d2cbc03ed83916658a27287f785a097276526d222/work"
},
"Name": "overlay2"
},
"Mounts": [],
"Config": {
"Hostname": "4ea9b20f65bc",
"Domainname": "",
"User": "memcache",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"11211/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"MEMCACHED_VERSION=1.4.38",
"MEMCACHED_SHA1=68f8df44f2a215d9f9767e76bf8ef03d9134ffb4"
],
"Cmd": [
"memcached"
],
"ArgsEscaped": true,
"Image": "memcached:1.4.38",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "f2f0f9a8634f363b4e3ad3f101bed2a573d819b7f3e4c7bcddc34e407d422179",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "local",
"com.docker.compose.service": "memcached",
"com.docker.compose.version": "1.21.2"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "df460549f83fe42c805fc2fdb9b71e6b7aa480587ef941d58a2588804fe1023b",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"11211/tcp": null
},
"SandboxKey": "/var/run/docker/netns/df460549f83f",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"local_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"memcached",
"4ea9b20f65bc"
],
"NetworkID": "0bf936ad79e2ac61d568acabeb0a7bfc932d57d396d810fae4471ab7ebe17810",
"EndpointID": "ecc2c11583f360bbac1060053fdde4ba65dd5ace739f42e8be2cffec2192337c",
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.5",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:12:00:05",
"DriverOpts": null
}
}
}
}
]
frohro@fweb:~/openedx-docker/deploy/local$`
You have containers running from 7 weeks ago: is that normal?
`frohro@fweb:~/openedx-docker/deploy/local$ docker ps
...
1718a04d1c71 ff9d0d82e40d "docker-entrypoint.s…" 7 weeks ago Restarting (1) 31 seconds ago openedx-docker_cms_worker_1
acd2b8bf529c ff9d0d82e40d "docker-entrypoint.s…" 7 weeks ago Restarting (1) 30 seconds ago openedx-docker_lms_worker_1
c0ab6fe6d334 ff9d0d82e40d "docker-entrypoint.s…" 7 weeks ago Restarting (3) 52 seconds ago openedx-docker_cms_1
6d3a1c030db4 ff9d0d82e40d "docker-entrypoint.s…" 7 weeks ago Restarting (3) 51 seconds ago openedx-docker_lms_1
4bbd3a19c5cd 2ceaf452d923 "./manage.py run_con…" 7 weeks ago Restarting (2) 4 seconds ago openedx-docker_xqueue_consumer_1
f3334f3fc5e0 e796675eda33 "/bin/sh -c './bin/u…" 7 weeks ago Up 27 seconds 4567/tcp openedx-docker_forum_1`
I don't understand containers that well, but this computer was completely shut down on Sunday. I am not running anything else in Docker that I'm aware of.
`frohro@fweb:~$ uptime
07:50:25 up 1 day, 12:03, 2 users, load average: 0.91, 0.78, 0.73
frohro@fweb:~$
`
Please stop all running containers with `docker stop ID`, where ID is 1718a04d1c71, acd2b8bf529c, etc. Then run `make stop` and check that all containers are down with `docker ps`. Then start up again with `make daemon`.
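A minimal sketch of that sequence, assuming the container IDs from the `docker ps` listing above (they will differ on any other host):

```shell
command -v docker >/dev/null || exit 0  # skip on machines without docker

# Stop the stuck containers by ID (IDs from the `docker ps` listing above)
docker stop 1718a04d1c71 acd2b8bf529c c0ab6fe6d334 6d3a1c030db4 4bbd3a19c5cd

# Bring the rest of the deployment down cleanly
make stop

# Confirm that nothing is left running, then restart in the background
docker ps
make daemon
```

The same IDs can also be collected automatically with `docker ps -q --filter status=restarting`.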
I can't seem to stop these containers.
`frohro@fweb:~/openedx-docker/deploy/local$ docker stop 1718a04d1c71
1718a04d1c71
frohro@fweb:~/openedx-docker/deploy/local$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1718a04d1c71 ff9d0d82e40d "docker-entrypoint.s…" 7 weeks ago Restarting (1) 3 minutes ago openedx-docker_cms_worker_1
acd2b8bf529c ff9d0d82e40d "docker-entrypoint.s…" 7 weeks ago Restarting (1) 3 minutes ago openedx-docker_lms_worker_1
c0ab6fe6d334 ff9d0d82e40d "docker-entrypoint.s…" 7 weeks ago Restarting (3) 3 minutes ago openedx-docker_cms_1
6d3a1c030db4 ff9d0d82e40d "docker-entrypoint.s…" 7 weeks ago Restarting (3) 2 minutes ago openedx-docker_lms_1
4bbd3a19c5cd 2ceaf452d923 "./manage.py run_con…" 7 weeks ago Restarting (2) 2 minutes ago openedx-docker_xqueue_consumer_1
frohro@fweb:~/openedx-docker/deploy/local$ `
I did a `docker rm` for these containers, and they are gone. After `make daemon`, though, the problem persists.
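For reference, here is a sketch of removing containers that refuse to stay stopped, using the IDs from the listing above. Compose attaches a restart policy to these containers, so clearing it with `docker update --restart=no` before a forced removal may help keep the daemon from relaunching them (this is an assumption about why the `docker stop` appeared not to stick):

```shell
command -v docker >/dev/null || exit 0  # skip on machines without docker

for id in 1718a04d1c71 acd2b8bf529c c0ab6fe6d334 6d3a1c030db4 4bbd3a19c5cd; do
    docker update --restart=no "$id" || true  # clear any restart policy first
    docker rm --force "$id" || true           # then stop and remove in one step
done
```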
When I look at the apache2 error log this morning, I no longer see all the same errors I was getting before. Here is what I get:
`frohro@fweb:/var/log/apache2$ grep edx error.log
[Tue Feb 05 00:05:15.198688 2019] [ssl:warn] [pid 1738] AH01909: notes.edx.fweb.wallawalla.edu:443:0 server certificate does NOT include an ID which matches the server name
frohro@fweb:/var/log/apache2$
`
Clicking on the links no longer seems to produce any errors containing "edx" in the Apache error log.
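One way to keep confirming this is to filter the error log while clicking through the course (same path as in the session above; `|| true` keeps a no-match result from being treated as a failure):

```shell
# Show any edx-related entries in the Apache error log
grep edx /var/log/apache2/error.log || true

# To watch for new entries live while clicking links, uncomment:
# tail -f /var/log/apache2/error.log | grep --line-buffered edx
```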
Hi Regis, I'm trying to decide what to do about my problem, so the students are not impaired too much by it. I have screenshots of the student grades, and I have the class content exported. I can go a few more days trying to figure this out, but by Friday at the latest I need to have something back up and going. I note elsewhere (issue #107) that you are thinking about backups, which is really great. I'm hoping you can give me advice. If and when it gets to the point where you think I should start over, please let me know. I also want to help you with the testing as much as I can; if you like, I can make a copy of my openedx-docker directory and supply you with credentials to get in, or anything else that would be helpful. Thanks so much for all your great help! Rob
I'm just trying to document this a bit better, so I made a log file that reflects the state of things after removing the Docker images and applying the other fixes: logs_2_5_2019.log
I decided to try the make commands that might get things back up and going again. I had a problem with `make update`, as seen below:
`frohro@fweb:~/openedx-docker/deploy/local$ make stop
docker-compose rm --stop --force
Stopping local_nginx_1 ... done
Stopping local_lms_worker_1 ... done
Stopping local_cms_worker_1 ... done
Stopping local_lms_1 ... done
Stopping local_cms_1 ... done
Stopping local_xqueue_1 ... done
Stopping local_xqueue_consumer_1 ... done
Stopping local_notes_1 ... done
Stopping local_forum_1 ... done
Stopping local_mongodb_1 ... done
Stopping local_smtp_1 ... done
Stopping local_mysql_1 ... done
Stopping local_memcached_1 ... done
Stopping local_elasticsearch_1 ... done
Stopping local_rabbitmq_1 ... done
Going to remove local_nginx_1, local_lms_worker_1, local_cms_worker_1, local_lms_1, local_cms_1, local_xqueue_1, local_xqueue_consumer_1, local_notes_1, local_forum_1, local_mongodb_1, local_smtp_1, local_openedx-assets_1, local_mysql_1, local_memcached_1, local_elasticsearch_1, local_rabbitmq_1
Removing local_nginx_1 ... done
Removing local_lms_worker_1 ... done
Removing local_cms_worker_1 ... done
Removing local_lms_1 ... done
Removing local_cms_1 ... done
Removing local_xqueue_1 ... done
Removing local_xqueue_consumer_1 ... done
Removing local_notes_1 ... done
Removing local_forum_1 ... done
Removing local_mongodb_1 ... done
Removing local_smtp_1 ... done
Removing local_openedx-assets_1 ... done
Removing local_mysql_1 ... done
Removing local_memcached_1 ... done
Removing local_elasticsearch_1 ... done
Removing local_rabbitmq_1 ... done
frohro@fweb:~/openedx-docker/deploy/local$ git pull
remote: Enumerating objects: 40, done.
remote: Counting objects: 100% (40/40), done.
remote: Compressing objects: 100% (13/13), done.
remote: Total 40 (delta 24), reused 40 (delta 24), pack-reused 0
Unpacking objects: 100% (40/40), done.
From https://github.com/regisb/openedx-docker
+ b0e822a...4122e14 release/ironwood.alpha -> origin/release/ironwood.alpha (forced update)
Already up to date.
frohro@fweb:~/openedx-docker/deploy/local$ make env
frohro@fweb:~/openedx-docker/deploy/local$ make rindex-courses
make: *** No rule to make target 'rindex-courses'. Stop.
frohro@fweb:~/openedx-docker/deploy/local$ make reindex-courses
docker-compose run --rm cms ./manage.py cms reindex_course --all --setup
Creating local_rabbitmq_1 ... done
Creating local_smtp_1 ... done
Creating local_memcached_1 ... done
Creating local_mongodb_1 ... done
Creating local_mysql_1 ... done
Creating local_elasticsearch_1 ... done
2019-02-05 21:32:35,534 INFO 1 [elasticsearch] base.py:63 - HEAD http://elasticsearch:9200/courseware_index [status:200 request:0.063s]
No handlers could be found for logger "elasticsearch.trace"
2019-02-05 21:32:35,536 INFO 1 [elasticsearch] base.py:63 - HEAD http://elasticsearch:9200/courseware_index [status:200 request:0.002s]
2019-02-05 21:32:35,538 INFO 1 [elasticsearch] base.py:63 - HEAD http://elasticsearch:9200/courseware_index/courseware_content [status:200 request:0.002s]
2019-02-05 21:32:35,553 INFO 1 [elasticsearch] base.py:63 - GET http://elasticsearch:9200/courseware_index/_mapping/courseware_content [status:200 request:0.015s]
frohro@fweb:~/openedx-docker/deploy/local$ make update
docker-compose pull
Pulling memcached ... done
Pulling mongodb ... done
Pulling mysql ... done
Pulling elasticsearch ... done
Pulling openedx-assets ... done
Pulling rabbitmq ... done
Pulling smtp ... done
Pulling forum ... done
Pulling lms ... done
Pulling cms ... done
Pulling lms_worker ... done
Pulling cms_worker ... done
Pulling notes ... done
Pulling nginx ... done
Pulling xqueue ... done
Pulling xqueue_consumer ...
ERROR: for xqueue_consumer Cannot overwrite digest sha256:7cbe268aa00d1c0467352e25107965d224c11c60f455767dfbef7e7d1e852f19
ERROR: Cannot overwrite digest sha256:7cbe268aa00d1c0467352e25107965d224c11c60f455767dfbef7e7d1e852f19
Makefile:44: recipe for target 'update' failed
make: *** [update] Error 1
frohro@fweb:~/openedx-docker/deploy/local$ `
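The "Cannot overwrite digest" error usually goes away once the conflicting locally cached image is removed and pulled again. A sketch, where the digest is the one from the error above and `IMAGE` is a placeholder for whatever repository name the first command prints:

```shell
command -v docker >/dev/null || exit 0  # skip on machines without docker

# Find which locally cached image carries the conflicting digest
docker images --digests | grep 7cbe268aa00d || true

# Remove it by digest (replace IMAGE with the repository name printed above):
# docker rmi IMAGE@sha256:7cbe268aa00d1c0467352e25107965d224c11c60f455767dfbef7e7d1e852f19

# Then pull everything again
make update
```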
I ran `make update` again, and this time it did not produce the error.
`frohro@fweb:~/openedx-docker/deploy/local$ make update
docker-compose pull
Pulling memcached ... done
Pulling mongodb ... done
Pulling mysql ... done
Pulling elasticsearch ... done
Pulling openedx-assets ... done
Pulling rabbitmq ... done
Pulling smtp ... done
Pulling forum ... done
Pulling lms ... done
Pulling cms ... done
Pulling lms_worker ... done
Pulling cms_worker ... done
Pulling notes ... done
Pulling nginx ... done
Pulling xqueue ... done
Pulling xqueue_consumer ... done
frohro@fweb:~/openedx-docker/deploy/local$ `
The problem persists though. :-(
After restarting Tutor with the above make commands and `make daemon`, I tried the links, and thought the logo file might be interesting. The thing I note is that the error should have occurred when I clicked the links in the course outline, or perhaps when it was starting up, but I don't find any errors in the attached log: logs_2_5a_2019.log
@frohro I understand the situation you're in. It's quite possible that this bug is due to a misconfiguration or an Apache configuration error, but if it's due to Tutor I need to get to the bottom of it. You are an early adopter of Tutor, so it's important to me that we solve this issue. Let's try to fix it before Friday. Accessing the server directly by ssh would go a long way. You can add one of my public ssh keys to the server: https://github.com/regisb.keys Feel free to send me the server details by email: user name and tutor install folder.
I'm not sure what is going on at all. Earlier in the day, one of my students reported that they could not get any of the links within the content pages to respond. I thought it was an issue like #136, so I did a git pull and make all, and everything was back running just fine. Thinking it was fixed, I proceeded to test the grade download as detailed in #143, moved the newly edited lms.env.json file into the three locations, and stopped and started the daemon again with make stop and make daemon. Later today I noticed the same problem the student reported earlier: I can't load anything past the outline in the LMS. It works fine in the CMS, where I was adding new content. I don't see anything myself in the logs. Here is what the LMS log does when I try to load something that doesn't work.
Any suggestions? I really need to get this rolling again for the students, or my name is mud.
Thanks, Rob (maybe Mud)