ifrh opened this issue 2 years ago
Since the error message is identical, perhaps the cause is identical, too: see https://github.com/GrahamDumpleton/mod_wsgi/issues/765
Oh, indeed ... the problem depends on the contents of the Apache macro `praktomat`:

The problem seems to be gone if, on our server (where that Apache macro was copied into the file `sites-enabled/default-ssl.conf`), I change that line to

```apache
WSGIScriptAlias /$id $path/Praktomat/wsgi/praktomat.wsgi process-group=local_$id application-group=%{GLOBAL}
```

and also add, outside of all `VirtualHost` configurations:

```apache
WSGIRestrictEmbedded On
```
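For context, here is a minimal sketch of what the surrounding daemon-mode configuration might look like. The `WSGIDaemonProcess` line and its parameter values are assumptions for illustration, not taken from our actual macro; the relevant point is that `process-group=local_$id` only takes effect if a matching `WSGIDaemonProcess local_$id` has been declared:

```apache
# Hypothetical sketch -- parameter values are illustrative assumptions.
# Outside all VirtualHost blocks: forbid embedded-mode WSGI, so every
# application must run in an explicitly declared daemon process group.
WSGIRestrictEmbedded On

<VirtualHost *:443>
    # Declare the daemon process group that the alias below refers to.
    WSGIDaemonProcess local_$id processes=2 threads=15 display-name=%{GROUP}

    # Route the app into that daemon group; application-group=%{GLOBAL}
    # runs it in the main Python interpreter rather than a sub-interpreter,
    # which several C extensions (and multiprocessing) require.
    WSGIScriptAlias /$id $path/Praktomat/wsgi/praktomat.wsgi process-group=local_$id application-group=%{GLOBAL}
</VirtualHost>
```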
With these changes I could set the variable `NUMBER_OF_TASKS_TO_BE_CHECKED_IN_PARALLEL` in `Praktomat/src/settings/local.py` to a value other than 1.
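For reference, the setting in question is just a module-level constant in `local.py`; the value 4 below is an arbitrary example, not a recommendation:

```python
# Praktomat/src/settings/local.py (excerpt)
# Number of solutions checked concurrently; 4 is an arbitrary example value.
NUMBER_OF_TASKS_TO_BE_CHECKED_IN_PARALLEL = 4
```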
The following observed problem does not occur if I start Praktomat via

```
manage-local.py runserver
```

from the command line. But sadly, running via WSGI on Apache is affected. After I had set the variable `NUMBER_OF_TASKS_TO_BE_CHECKED_IN_PARALLEL` in `Praktomat/src/settings/local.py` to a value other than 1, I saw a browser error message when using some TaskAdmin actions on a task which has a model solution and exactly two student solutions. The problematic TaskAdmin methods were `run_all_checkers` and `run_all_uploadtime_checkers_on_all`; both methods call `check_multiple` from `checker.basemodels`:

https://github.com/KITPraktomatTeam/Praktomat/blob/92d1cf50157426e7aa3cd20e665bfb31ffe2f25a/src/checker/basemodels.py#L349-L358

The browser timeout I would ignore if I had tested some hundred task solutions, but with fewer than 5 solutions there shouldn't be any time problem. The browser timeout is a known situation, e.g. when a large number of solutions are checked: despite the message, the work is carried out in the background. But this time no solutions were checked in the background.
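For illustration, the pattern in `check_multiple` boils down to something like the following sketch. The names `check_it`, `solutions`, and the chunksize of 1 follow the linked source; everything else is a simplified stand-in, not Praktomat's actual code. Forking a process pool like this from inside an Apache child is precisely the kind of operation that mod_wsgi's documentation warns can misbehave outside of daemon mode:

```python
from multiprocessing import Pool

def check_it(solution):
    # Stand-in for running all checkers on one solution.
    return f"checked {solution}"

def check_multiple(solutions, num_workers):
    """Simplified sketch of the pattern in checker/basemodels.py:
    distribute solutions over a process pool with chunksize 1."""
    with Pool(num_workers) as pool:
        # This is the call that never returned under embedded-mode WSGI.
        return pool.map(check_it, solutions, 1)
```

Run from the command line (as with `manage-local.py runserver`) this completes immediately; the hang described below only appeared when the same call was made from within an Apache/mod_wsgi process.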
By adding some print statements around `pool.map(check_it, solutions, 1)` in `check_multiple`, whose output was written via WSGI into `apache2/error.log`, I found that this pool command did not finish. It behaves like an infinite loop. In Apache's error log I saw the output of my print statement before the call to `pool.map`, but instead of the output of the print after the call to `pool.map`, I saw multiple gigabytes of entries like: