pointOfive / STA130_F23

Python/JupyterHub implementation of this UofT classic

HW1 Markus Tests Broken #3

Closed: Bortoise closed this issue 1 year ago

Bortoise commented 1 year ago

The MarkUs tests appear to be slightly broken. The Q1 test fails with the following error message:

```
Test cell was not executed because an earlier cell raised an error:

Run this cell and all the next ones, too

-> coffee_ratings = pd.read_csv("coffee_ratings.csv")

FileNotFoundError: [Errno 2] No such file or directory: 'coffee_ratings.csv'
```

Q3 seems to fail for a related reason. It seems that coffee_ratings.csv needs to be accessible in some way (for instance, by being stored in the testing directory?).
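As a local sanity check (outside MarkUs), a guarded loader like the following makes the missing-file case explicit up front instead of letting a `FileNotFoundError` cascade into later cells. This is just a sketch: the `load_csv` helper and the fallback `Data` directory are hypothetical, not part of the assignment.

```python
import os
import pandas as pd

def load_csv(filename, search_dirs=(".", "Data")):
    """Look for `filename` in a few candidate directories before giving up."""
    for d in search_dirs:
        path = os.path.join(d, filename)
        if os.path.exists(path):
            return pd.read_csv(path)
    raise FileNotFoundError(f"{filename} not found in any of {search_dirs}")
```

With something like this, a notebook fails immediately with a clear message naming the directories it searched, which makes "the CSV isn't in the testing directory" easy to diagnose.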

Lastly, the required answer for Q10 seems inconsistent with running pd.read_csv("avatar.csv").shape (from which I got 13385 rows and 11 columns).

I found the avatar.csv file at the following URL: https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2020/2020-08-11/avatar.csv
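For reference, `DataFrame.shape` reports `(rows, columns)`, with the header row excluded from the row count. A minimal sketch with a toy CSV standing in for avatar.csv:

```python
import io
import pandas as pd

# Toy CSV standing in for avatar.csv; .shape reports (rows, columns)
csv_text = "id,character,line\n1,Aang,Hello\n2,Katara,Hi\n"
df = pd.read_csv(io.StringIO(csv_text))
print(df.shape)  # (2, 3) -- two data rows, three columns; the header is not counted
```

So two copies of avatar.csv reporting different shapes would indicate genuinely different files, not a counting quirk.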

mistryrohan commented 1 year ago

Yes, the tests seem broken for these two questions. I also changed the assignment due date to Friday, because MarkUs does not allow automated testing past the due date.

pointOfive commented 1 year ago

I have asked the MarkUs folks how to make these data sets available during MarkUs testing run time. I'm not immediately sure how to do that!

lucieyang1 commented 1 year ago

I also got errors for Q1 and Q3, but the test for Q10 passed for me; I used the avatar.csv dataset from https://github.com/pointOfive/STA130_F23/tree/main/Data

pointOfive commented 1 year ago

Okay, I figured out (with the MarkUs folks' help) why this was happening... ANY runtime error in a cell above a test cell means the test cell fails. Then, only the first error encountered before the test cell (not the subsequent ones) is reported.

So, to fix this for now, I wrapped all the intentionally erroring code in try blocks:

```python
# Assuming `one_plus_one_is_two` has the value from the earlier cells,
# the above test will pass, but the following test will fail
import traceback
try:
    assert 1 + one_plus_one_is_two == 2, False
except AssertionError as error:
    print(traceback.format_exc())

# What did running the cell above do?
# It deleted the variable coffee_ratings
try:
    coffee_ratings
except NameError as error:
    print(traceback.format_exc())

# Why doesn't this cell work?
# We do not have anything defined as 'np'
try:
    np.read_csv("coffee_ratings.csv")
except NameError as error:
    print(traceback.format_exc())
```

So these are now just print statements rather than errors, and they don't cause the MarkUs tests to fail. I did this for the notebook in the "1test" account on MarkUs if you want to have a look at it working now.

The MarkUs folks are going to update MarkUs to allow notebook creators to mark cells that have intentional errors for demonstration purposes; so, we likely won't need to use try blocks to catch and print errors in the end. This is good, as the printouts aren't nicely color-formatted. Anyway, I'll close this once I get an emoji reaction from each of you confirming you've understood the situation here.

pointOfive commented 1 year ago

Oh -- also, data files can be uploaded alongside the tester files and are then available on MarkUs. That was the other part of the issue: how to make flat CSV files available for MarkUs autotesting.

pointOfive commented 1 year ago

@mistryrohan @lucieyang1 @Bortoise @MatthewYu06 When in a Jupyter notebook, to tell MarkUs not to run a cell (e.g., if the cell intentionally throws an error that would cause problems for MarkUs), you can specify `"markus": {"skip": true}` in the cell metadata as follows:
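For example, the raw JSON for a skipped cell might look something like this. This is a sketch: only the `"markus": {"skip": true}` key comes from the discussion above; the surrounding fields are standard nbformat structure, and the cell source is a made-up demonstration error.

```json
{
  "cell_type": "code",
  "metadata": {
    "markus": {"skip": true}
  },
  "source": [
    "raise RuntimeError('intentional demonstration error')"
  ],
  "outputs": [],
  "execution_count": null
}
```

In Jupyter, cell metadata can be edited via the property inspector (or View > Cell Toolbar > Edit Metadata in classic Notebook) rather than by editing the .ipynb file by hand.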