The Level model has many fields, among which are the following:
- `disable_algorithm_score`: set to `True` if we want the level not to have an algorithm score (and therefore a maximum score of 10).
- `model_solution`: badly named, this field is a string representing a list of integers. Each integer is a number of blocks that a solution can use to earn 10/10 on the algorithm score. For example, if `model_solution` is `[5, 6, 8]`, the player gets 10/10 on the algorithm score for a solution containing either 5, 6, or 8 blocks. Most levels have only one integer in this list.
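To illustrate the format, here is a minimal sketch of how the field could be interpreted when scoring. The field name matches the description above, but the helper function is hypothetical, not the project's actual scoring code:

```python
import ast

def earns_full_algorithm_score(model_solution: str, blocks_used: int) -> bool:
    """Return True if a solution with `blocks_used` blocks earns 10/10.

    `model_solution` is a string such as "[5, 6, 8]" representing the
    block counts that qualify for a full algorithm score.
    """
    allowed_counts = ast.literal_eval(model_solution)  # "[5, 6, 8]" -> [5, 6, 8]
    return blocks_used in allowed_counts

print(earns_full_algorithm_score("[5, 6, 8]", 6))  # True
print(earns_full_algorithm_score("[5, 6, 8]", 7))  # False
```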
There are a few considerations here:
- The `model_solution` field applies only to levels which use Blockly. Any Python-only level should have a blank `model_solution` field (currently represented by an empty-list string).
- The point of the `model_solution` field is to allow the computation of the algorithm score. As such, levels for which `disable_algorithm_score` is `True` should also have a blank `model_solution`.
The problem is that these constraints aren't enforced anywhere; as a result, we had two main issues:
- Some Blockly levels did not have a populated `model_solution` field: these levels only had a score out of 10 when in fact they should have been out of 20. This manifested as the "full score" star not appearing on the scoreboard for these levels, because the scoreboard logic expects a score out of 20.
- Some levels which had no `model_solution` (most of them Python-only levels) still had the algorithm score enabled. This is nonsensical, since the algorithm score computation requires a) Blockly and b) a `model_solution`, and Python-only levels have neither. (This is a symptom of the fact that we still need to implement an algorithm score for Python levels.) It manifested as scores showing 20/10 in the game.
Main fixes:
- Added a migration (0090) which populates the `model_solution` field for the levels that were missing it (these were only the levels in Episode 10).
- Added a migration (0091) which disables the algorithm score for all levels without a `model_solution`.
- Commented out the "hack" in `game.js` which doubled the total score in Python levels to force it to 20 even though the level itself is out of 10.
- Removed the additional check in `pathFinder.js` for the presence of `model_solution` before computing the algorithm score; this is now redundant, as the code can simply check whether the level has the algorithm score enabled.
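The steps above correspond to a populate migration and a data migration. As a rough sketch of what the second one (0091) might look like, the app label, migration name, and exact field lookups below are assumptions, not the actual migration file:

```python
from django.db import migrations

def disable_algorithm_score_without_model_solution(apps, schema_editor):
    # Use the historical model, as Django data migrations require.
    Level = apps.get_model("game", "Level")  # app label assumed
    # Treat both empty strings and empty-list strings as "blank".
    Level.objects.filter(model_solution__in=["", "[]"]).update(
        disable_algorithm_score=True
    )

class Migration(migrations.Migration):
    dependencies = [
        ("game", "0090_populate_model_solution"),  # name assumed
    ]
    operations = [
        migrations.RunPython(
            disable_algorithm_score_without_model_solution,
            migrations.RunPython.noop,  # nothing to undo on reverse
        ),
    ]
```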