blarghmatey closed this 3 years ago
The relevant stack trace for this error condition is:
2021-02-03 20:19:00,628 ERROR 2490256 [celery.app.trace] [user None] [ip None] trace.py:255 - Task lms.djangoapps.instructor_task.tasks.calculate_problem_responses_csv.v2[6d4897e6-e11e-4be2-98e1-956f7c4ae05b] raised unexpected: TypeError('sequence item 1: expected str instance, NoneType found')
Traceback (most recent call last):
  File "/edx/app/edxapp/venvs/edxapp/lib/python3.8/site-packages/celery/app/trace.py", line 412, in trace_task
    R = retval = fun(*args, **kwargs)
  File "/edx/app/edxapp/venvs/edxapp/lib/python3.8/site-packages/newrelic/hooks/application_celery.py", line 99, in wrapper
    return wrapped(*args, **kwargs)
  File "/edx/app/edxapp/venvs/edxapp/lib/python3.8/site-packages/celery/app/trace.py", line 704, in __protected_call__
    return self.run(*args, **kwargs)
  File "/edx/app/edxapp/edx-platform/lms/djangoapps/instructor_task/tasks.py", line 175, in calculate_problem_responses_csv
    return run_main_task(entry_id, task_fn, action_name)
  File "/edx/app/edxapp/edx-platform/lms/djangoapps/instructor_task/tasks_helper/runner.py", line 120, in run_main_task
    task_progress = task_fcn(entry_id, course_id, task_input, action_name)
  File "/edx/app/edxapp/edx-platform/lms/djangoapps/instructor_task/tasks_helper/grades.py", line 990, in generate
    student_data, student_data_keys = cls._build_student_data(
  File "/edx/app/edxapp/edx-platform/lms/djangoapps/instructor_task/tasks_helper/grades.py", line 925, in _build_student_data
    for username, state in block.generate_report_data(user_state_iterator, max_count):
  File "/edx/app/edxapp/edx-platform/common/lib/xmodule/xmodule/capa_module.py", line 481, in generate_report_data
    answer_text = lcp.find_answer_text(answer_id, current_answer=orig_answers)
  File "/edx/app/edxapp/edx-platform/common/lib/capa/capa/capa_problem.py", line 659, in find_answer_text
    answer_text = ", ".join(
TypeError: sequence item 1: expected str instance, NoneType found
We're not sure how to reproduce this issue, although it seems like it might be something like:
This PR needs tests
Seems like this block needs some error handling to avoid find_answer_text returning None:
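A simplified, self-contained stand-in for that block, modeled on the traceback above (hypothetical code, not the upstream capa_problem.py source), shows how a missing choice text turns into the TypeError:

def find_answer_text(answer_texts, current_answer):
    """Simplified stand-in for find_answer_text in capa_problem.py.

    answer_texts maps a choice id to its display text; a choice that was
    authored without any text maps to None.
    """
    if isinstance(current_answer, list):
        # Checkbox-style problems resolve each selected choice to its text,
        # then join the results for the report. Any None entry breaks the join.
        return ", ".join(find_answer_text(answer_texts, answer) for answer in current_answer)
    return answer_texts.get(current_answer)


answer_texts = {"choice_0": "A labeled option", "choice_1": None}  # hypothetical data
find_answer_text(answer_texts, ["choice_0", "choice_1"])
# TypeError: sequence item 1: expected str instance, NoneType found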
So, in looking into the most recent case of this, I'm finding that the problems that are failing aren't multiple choice, which seems at odds with the docstrings in the code. I think we need to figure out the steps to reproduce here.
@blarghmatey I am supposed to add tests to your PR. Would you like to take a look at @pdpinch's comments on your PR here?
@asadiqbal08 can you take this over from Tobias? I'm not sure we're fixing the right problem here, and he doesn't have time to investigate any further.
Had a discussion with @HamzaIbnFarooq. He will pick this up here as a priority.
According to my research, the issue arises in the following situations:
1) The question has multi-selectable choices (for example, in the case of checkbox problems).
2) One or more of the selected answers do not have any text in them (e.g. you put [ ] in the question markup but forgot to place any text next to it).
Finding: It's not exactly an error; it's more of a course-authoring fault. Ideally, when a course author inputs something wrong in the question markup, this error should be pointed out, but I believe that requires some validation in edX's markup rendering module (which I haven't explored yet and might be a third-party library).
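To illustrate the mechanism behind situation 2: in the parsed problem XML, a choice element that was authored with no display text has None as its text, and that None is what eventually reaches the report's string join. A standalone demonstration of the XML behavior (not the capa parsing code itself):

import xml.etree.ElementTree as ET

# A choice the author left without any display text vs. a labeled one.
empty_choice = ET.fromstring('<choice correct="false"></choice>')
labeled_choice = ET.fromstring('<choice correct="true">A labeled option</choice>')

print(empty_choice.text)    # None -- there is no answer text to report
print(labeled_choice.text)  # A labeled option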
So the question is "Are we willing to take this issue further?" @pdpinch @asadiqbal08
Steps to reproduce:
1) Create a checkbox problem in Studio.
2) Don't put any text against one of the options.
3) In the LMS, select the option with no text.
4) Go to Data Download under the Instructor tab.
5) Under the Reports section, select the problem and press "Create a report of problem responses".
The error will be generated.
Thanks @HamzaIbnFarooq for the steps to reproduce. I'm going to consult with the MITx team on next steps.
I was hoping to get some input from the MITx team about what seems the best approach here. I agree that the root cause is an authoring problem, but there isn't any validation that prevents the error and once the course is in the error state, there doesn't seem to be a way to correct it. I'm going to discuss some options with the team and make a decision about how to proceed. If you have any thoughts or suggestions, please let me know.
@pdpinch I found that the markdown gets rendered by edX's own code, so while debugging I devised a solution and created a PR (https://github.com/mitodl/edx-platform/pull/240). Kindly let me know your thoughts on this approach.
@HamzaIbnFarooq do you think we should close this PR in favor of your #240?
Yes, I believe #240 has a lot of potential for solving this issue.
Edit: #240 is contaminated with some unwanted commits; the capa issue fix is in the following commit: https://github.com/mitodl/edx-platform/pull/240/commits/e2b08660e42321cf9e5e201f8c5040bb304e87bb
When an author creates a multi-choice problem (like checkboxes) through Studio without putting any text value on one of the answer checkboxes, the system crashes while generating the problem responses report. (A detailed comment is here.)
Possible approaches:
1) While generating the problem responses report, set a default value instead of raising an error (as proposed by Tobias here).
2) Add validation to the multi-choice problem module to notify the author if they forget to put any text value on a choice (as proposed here); a rough sketch follows this list.
3) Implement both 1 and 2, to fix the old data and to ensure validation of newly created problems.
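A rough sketch of what the validation in approach 2 could look like. This is hypothetical code, not an existing edx-platform API; validate_choice_texts and the warning message are made-up names for illustration:

import xml.etree.ElementTree as ET


def validate_choice_texts(problem_xml: str) -> list:
    """Hypothetical authoring-time check: warn about choices with no display text.

    Returns a list of warning strings, one per empty <choice> element found.
    """
    root = ET.fromstring(problem_xml)
    warnings = []
    for index, choice in enumerate(root.iter("choice")):
        text = (choice.text or "").strip()
        if not text:
            warnings.append(
                f"Choice {index + 1} has no text; learners will see an empty option "
                "and reports cannot show what they selected."
            )
    return warnings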
Two questions: 1) Which of the above approaches should be used? 2) Is this fix targeted at an upstream PR or a specific branch?
cc: @pdpinch
Thanks for the analysis.
I'd like to pursue #1 for koa (mitx/koa), unless it takes more than a month, in which case we'll need it for lilac (mitx/lilac). I'm hoping this will help us fix #262 as well.
I'd like to pursue #2 as well, but that could be upstream for edx-platform/master.
Does this still need to be merged, or was it replaced by #264?
It was replaced by #264
In its current state, if the task is unable to find the answer text for a given response (maybe because it was skipped?), the report fails to generate. Adding a conditional with placeholder text allows the report to be generated rather than completely failing to provide this data to the course instructor.
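A minimal sketch of that conditional, using a simplified stand-in for find_answer_text (the placeholder wording and exact placement are assumptions, not the actual PR diff):

MISSING_ANSWER_TEXT = "[no answer text found]"  # hypothetical placeholder string


def find_answer_text(answer_texts, current_answer):
    """Simplified stand-in, guarded so a missing choice text can't break the join."""
    if isinstance(current_answer, list):
        return ", ".join(
            find_answer_text(answer_texts, answer) or MISSING_ANSWER_TEXT
            for answer in current_answer
        )
    return answer_texts.get(current_answer)


answer_texts = {"choice_0": "A labeled option", "choice_1": None}  # hypothetical data
print(find_answer_text(answer_texts, ["choice_0", "choice_1"]))
# A labeled option, [no answer text found]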