e-valuation / EvaP

a university course evaluation system written in Python using Django

evaluation_detail view annotates evaluation results twice #1694

Open richardebeling opened 2 years ago

richardebeling commented 2 years ago

An additional weirdness of the evaluation_detail view is that first the correct value is annotated here (it is correct because the list of evaluations contains all evaluations of the course), but then overwritten with an incorrect annotation here.
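A minimal sketch of the problem in plain Python (the helper `annotate_weights` and the `weight_percentage` key are hypothetical stand-ins, not EvaP's actual code): an annotation whose value depends on the list it is computed over gives a different, wrong result when it is re-applied to a single evaluation instead of all evaluations of the course.

```python
# Hypothetical stand-in for the real annotation code: each evaluation's
# relative weight is computed against the list passed in, so the result
# depends on which evaluations are included.
def annotate_weights(evaluations):
    total = sum(e["weight"] for e in evaluations)
    for e in evaluations:
        e["weight_percentage"] = 100 * e["weight"] / total
    return evaluations

course_evaluations = [
    {"name": "lecture", "weight": 3},
    {"name": "exam", "weight": 1},
]

# First annotation over all evaluations of the course: correct.
annotate_weights(course_evaluations)
assert course_evaluations[0]["weight_percentage"] == 75.0

# Second annotation over a single evaluation: the correct value is
# silently overwritten with one computed against the wrong total.
annotate_weights([course_evaluations[0]])
assert course_evaluations[0]["weight_percentage"] == 100.0
```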

Originally posted by @karyon in https://github.com/e-valuation/EvaP/issues/1691#issuecomment-1003582633

richardebeling commented 2 years ago

I'd argue that we should consider refactoring a bit of the results code. For example, the method name get_evaluations_of_course doesn't make clear what exactly it does (consider access rights? prefetch stuff? annotate results?) or what it should do, which makes maintenance harder. Maybe type annotations with type checking can help after #1688.
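One hedged way to make such responsibilities explicit would be to split the helper into narrowly named, typed steps. All names and types below are illustrative, not EvaP's actual API:

```python
# Sketch: instead of one ambiguous get_evaluations_of_course, use small
# functions whose names and signatures state their single responsibility.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Evaluation:
    name: str
    visible: bool = True
    results: Optional[dict] = None


def filter_visible(evaluations: list[Evaluation]) -> list[Evaluation]:
    """Access rights: keep only evaluations the viewer may see."""
    return [e for e in evaluations if e.visible]


def annotate_results(evaluations: list[Evaluation]) -> list[Evaluation]:
    """Results: attach a (stubbed) result dict to each evaluation."""
    for e in evaluations:
        e.results = {"answers": []}
    return evaluations


evaluations = annotate_results(
    filter_visible([Evaluation("lecture"), Evaluation("secret", visible=False)])
)
assert [e.name for e in evaluations] == ["lecture"]
assert evaluations[0].results is not None
```

With type checking enabled, a caller that forgets one of the steps would then fail at the type level rather than at runtime.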

fidoriel commented 2 years ago

I am not sure but my guess is that these lines are doing nothing: https://github.com/e-valuation/EvaP/blob/1427e47d4eddec45d443bc441d61d4298219dbd8/evap/results/views.py#L208-L210

All mandatory evaluation data is fetched in line 195. get_evaluations_of_course calls get_evaluations_with_course_result_attributes. get_evaluations_with_prefetched_data is called when rendering the template so that the cache is used instead of generating the results again. But get_evaluations_with_course_result_attributes is called nevertheless.

richardebeling commented 2 years ago

(Writing this because I did the blame-game earlier): The conditional reassignment seems to originate from 2c9a21d9d13bfe079a94443e2d86af1199ed9c9f, which was part of #1300:

    if evaluation.state != 'published':
        evaluation = get_evaluations_with_prefetched_data([evaluation])[0]

When removing these lines, the evaluation's grade distribution bar on the preview page is broken, so I guess it was necessary back then.

The call to get_evaluations_with_course_result_attributes seems to originate from #1440, which was supposed to be a fix for #1439. In the PR, we also had this observation, which might be relevant:

> I noticed that in production code, the return value of get_evaluations_with_prefetched_data is always passed to get_evaluations_with_course_result_attributes. So I could imagine that the former could always call the latter, since afaik there is no case where we render a single evaluation without knowledge of the extra course attributes.
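That observation could be sketched roughly like this (stub bodies, not the actual EvaP helpers): let get_evaluations_with_prefetched_data chain the second helper itself, so call sites cannot forget it.

```python
# Sketch of the suggested refactoring. The helper bodies are stubs for
# illustration only; the real functions do database prefetching and
# result annotation.
def get_evaluations_with_course_result_attributes(evaluations):
    for e in evaluations:
        e["has_course_attributes"] = True  # stub for the real annotation
    return evaluations


def get_evaluations_with_prefetched_data(evaluations):
    for e in evaluations:
        e["prefetched"] = True  # stub for the real prefetching
    # Proposed change: chain the second helper here instead of at
    # every call site.
    return get_evaluations_with_course_result_attributes(evaluations)


result = get_evaluations_with_prefetched_data([{"name": "lecture"}])
assert result[0]["prefetched"] and result[0]["has_course_attributes"]
```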

The test from that PR passes even when removing the three lines mentioned above -- and it also passes at 77d4e0fa88b542c19743987c7fbe490043030852, the commit introduced in #1440, and on main. However, at 77d4e0fa, the distribution bar is then no longer shown (I guess the test could be better? :D), so it seems the three lines were still necessary back then.

In between these two, I did some refactoring in 6f64470, but it doesn't look like that had any impact regarding these lines.

richardebeling commented 2 years ago

> I am not sure but my guess is that these lines are doing nothing: https://github.com/e-valuation/EvaP/blob/1427e47d4eddec45d443bc441d61d4298219dbd8/evap/results/views.py#L208-L210

On current main, the "Subject-specific English" evaluation has a distribution bar that shows actual results. If you comment out the three lines, this distribution bar no longer shows results.

richardebeling commented 2 years ago

Closely related: https://github.com/e-valuation/EvaP/issues/1588
Possibly related: https://github.com/e-valuation/EvaP/issues/1311