nbautoeval
nbautoeval is a very lightweight Python framework for creating auto-evaluated exercises inside a Jupyter (Python) notebook.

Two flavours of exercises are supported at this point: code exercises and quizzes.
At this point, due to a lack of knowledge/documentation about Open edX (read: the version running at FUN), there is no code available for exporting the results as grades or anything similar (hence the autoeval name).
There are however provisions in the code to accumulate statistics on all attempted corrections, as a means of providing feedback to teachers.
mybinder

Click the badge below to see a few sample demos on mybinder.org - it's all in the demo-notebooks subdir.

NOTE: the demo notebooks ship in .py format and require jupytext to be installed before you can open them in Jupyter.
This was initially embedded in a MOOC on Python 2 that ran for the first time on the French FUN platform in Fall 2014. It was then duplicated into a MOOC on bioinformatics in Spring 2016, where it was named nbautoeval for the first time, but still embedded in a larger git repository. A separate git repo was created from that basis in June 2016, with the intention of using it as a git subtree from these 2 repos (because at the time, adding Python libraries to customize the notebook runtime on the remote Jupyter platform was a pain).
Now this tool ships as a standalone Python library hosted on pypi.org, so it can easily be added to any docker image:

```
pip install nbautoeval
```
Currently supports the following types of exercises:

- ExerciseFunction: the student is asked to write a function
- ExerciseRegexp: the student is asked to write a regular expression
- ExerciseGenerator: the student is asked to write a generator function
- ExerciseClass: tests will happen on a class implementation

A teacher who wishes to implement an exercise needs to write 2 parts:
- one Python file that defines an instance of an exercise class; in a nutshell, this typically involves a reference implementation together with a dataset of inputs, each defined through the dedicated Args class;
- one notebook that imports this exercise object, and can then take advantage of it to write Jupyter cells that typically call example() on the exercise object to show examples of the expected output, and correction() on the exercise object to display the outcome.
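To make this workflow concrete, here is a minimal self-contained sketch of the idea behind a function exercise; note that the class and method names below only mimic the workflow described above and are not nbautoeval's actual API:

```python
# Illustrative sketch only -- NOT nbautoeval's real API.
# An exercise bundles a reference implementation and a list of input
# datasets; correction() compares the student's function against the
# reference on each dataset.

class Args:
    """One input dataset: the positional arguments for a single test call."""
    def __init__(self, *args):
        self.args = args

class FunctionExercise:
    def __init__(self, solution, inputs):
        self.solution = solution      # the teacher's reference implementation
        self.inputs = inputs          # a list of Args instances

    def correction(self, student_function):
        """Return one (inputs, expected, obtained, ok) row per dataset."""
        rows = []
        for dataset in self.inputs:
            expected = self.solution(*dataset.args)
            obtained = student_function(*dataset.args)
            rows.append((dataset.args, expected, obtained, expected == obtained))
        return rows

# teacher side: reference implementation + input datasets
def percentage(part, total):
    return 100 * part / total

exo = FunctionExercise(percentage, [Args(1, 4), Args(3, 5)])

# student side: an attempted solution, checked against the reference
def my_percentage(part, total):
    return part / total * 100

results = exo.correction(my_percentage)
print(all(ok for *_, ok in results))   # -> True
```

In the real library, the notebook would only call example() and correction() on the imported exercise object; the comparison and the rendering of the result table are handled for you.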
The recommended way is to define quizzes in YAML format:

- quizzes are written in YAML files, typically gathered in a yaml/ subdir (markdown can be used in the various texts, thanks to myst_parser);
- one then invokes run_yaml_quiz() from a notebook to display the quiz; passing it debug=True helps pinpoint errors in the source.

Regardless of their type, all tests have an exoname
to pinpoint errors in the sourceRegardless of their type all tests have an exoname
that is used to store information
about that specific test; for quizzes it is recommended to use a different name than
the quiz name used in run_yaml_quiz()
so that students cant guess it too easily.
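As an illustration, a quiz source in the yaml/ subdir might look roughly like the sketch below; the schema is not documented here, so treat every field name as a guess and refer to the demo notebooks for the real format:

```yaml
# yaml/sample-quiz.yaml -- illustrative sketch only, field names are guesses
sample-quiz:
  exoname: sample-quiz-storage-key    # distinct from the quiz name, harder to guess
  questions:
    - question: "Which builtin returns the number of items in a list?"
      options:
        - text: "`len()`"             # markdown in texts, rendered via myst_parser
          correct: true
        - text: "`count()`"
```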
Stuff is stored in 2 separate locations:

- ~/.nbautoeval.trace contains one JSON line per attempt (correction or submit);
- ~/.nbautoeval.storage, for quizzes only, preserves previous choices and the number of attempts.
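Given the JSON-lines trace format described above, a teacher could aggregate attempts with a few lines of standard Python; the record fields used below (exoname, success) are guesses for illustration, so inspect an actual ~/.nbautoeval.trace to see the real keys:

```python
import json
from collections import Counter

# Two made-up lines standing in for the contents of ~/.nbautoeval.trace;
# the field names are illustrative guesses, not the documented schema.
sample_lines = [
    '{"exoname": "exo-percentage", "success": true}',
    '{"exoname": "exo-percentage", "success": false}',
]

# tally (exercise, outcome) pairs, one JSON record per line
attempts = Counter()
for line in sample_lines:
    record = json.loads(line)
    attempts[record["exoname"], record["success"]] += 1

print(attempts[("exo-percentage", True)])   # -> 1
```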