carpentries-incubator / deep-learning-intro

Learn Deep Learning with Python
https://carpentries-incubator.github.io/deep-learning-intro/

Check that all exercises are designed with diagnostic power #287

Closed svenvanderburg closed 1 year ago

svenvanderburg commented 1 year ago

Check that all exercises are designed with diagnostic power

From https://carpentries.github.io/instructor-training/reference.html: Diagnostic Power: The degree to which a wrong answer to a question or exercise tells the instructor what misconceptions a particular learner has.

From the Carpentries Curriculum Development Guide: Multiple choice questions (MCQs) can be a useful tool for formative assessment if they are designed such that each incorrect answer helps the Instructor to identify learners' misconceptions. Each incorrect answer should be a plausible distractor with diagnostic power. "Plausible" means that an answer looks like it could be right, and "diagnostic power" means that each of the distractors helps the instructor figure out what concepts learners are having difficulty with.

svenvanderburg commented 1 year ago

Episode 1

Calculate the output for one neuron

✅ We could turn this into an MC question, where each answer is a plausible distractor with diagnostic power. I would say the open-question answers also hold diagnostic power: you can deduce from them what people likely did wrong.
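For reference, the computation the exercise asks for can be sketched in a few lines. This is a hedged illustration: the specific inputs, weights, and bias below are made up, not the values from the lesson, and the lesson may use a different activation.

```python
# Illustrative single-neuron forward pass (values are hypothetical,
# not taken from the lesson exercise).
inputs = [0.5, -1.0, 2.0]
weights = [0.1, 0.2, 0.3]
bias = 0.4

# Weighted sum of inputs plus the bias term
weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias

# ReLU activation: max(0, x)
output = max(0.0, weighted_sum)
print(output)  # approximately 0.85
```

Plausible distractors could then target specific slips: forgetting the bias, skipping the activation, or summing inputs without multiplying by the weights.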

Deep Learning Problems Exercise

Deep Learning workflow exercise

✅ This is more to trigger a discussion

Episode 2

Penguin Dataset

Pairplot

One-hot encoding vs ordinal encoding

✅ Could be an MC question, but the open answers also give you good diagnostics.
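The contrast the exercise is getting at can be shown compactly. A hedged sketch follows: the species names come from the Palmer penguins dataset the lesson uses, but the encoding code itself is illustrative, not the lesson's.

```python
# Illustrative contrast between ordinal and one-hot encoding of a
# categorical column (not the lesson's own code).
species = ["Adelie", "Gentoo", "Chinstrap", "Adelie"]
categories = sorted(set(species))  # ['Adelie', 'Chinstrap', 'Gentoo']

# Ordinal encoding: one integer per category -- implies an ordering
# and a distance between categories that does not really exist.
ordinal = [categories.index(s) for s in species]
print(ordinal)  # [0, 2, 1, 0]

# One-hot encoding: one binary column per category -- no implied order.
one_hot = [[int(s == c) for c in categories] for s in species]
print(one_hot[0])  # [1, 0, 0]
```

A wrong open answer here (e.g. "ordinal is fine because it's smaller") already tells the instructor which misconception to address.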

Training and Test sets

Create the neural network

The Training Curve

Confusion Matrix

Monitor the training process

Exercise: Explore the dataset

Exercise: Architecture of the network

Exercise: Reflecting on our results

Serves more to trigger reflection than as formative assessment ✅

Exercise: Baseline

Serves more to trigger reflection than as formative assessment ✅

Exercise: plot the training progress.

✅ Great exercise, but I improved the wording a little bit: https://github.com/carpentries-incubator/deep-learning-intro/pull/335

tobyhodges commented 1 year ago

Try to reduce the degree of overfitting by lowering the number of parameters

Assuming that the exercise is supposed to test learners' ability to specify the number of nodes in each layer, and evaluate the extent of overfitting in the resulting model, this challenge seems good. However, the solution needs some work - see #339.

Simplify the model and add data

This is a large exercise, but I like it.

Open question: What could be next steps to further improve the model?

Advanced layer types

Explore the data

Number of parameters

I am not familiar enough with the subject matter to be sure, but this one feels like it could be good in multiple-choice format. Are there common mistakes people make when estimating the number of parameters? If so, you might be able to provide a few different answer options, with each of the incorrect answers serving to diagnose one of these common misconceptions.
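One common slip that could make a good distractor: counting the weights of a dense layer but forgetting the bias terms. A hedged sketch of the usual derivation, with illustrative layer sizes (not the lesson's network):

```python
# Illustrative parameter count for fully connected layers: each layer has
# n_inputs * n_units weights plus n_units biases. Forgetting the biases
# is a typical mistake when estimating this by hand.
def dense_params(n_inputs, n_units):
    """Weights plus biases for one dense layer."""
    return n_inputs * n_units + n_units

# Toy network: 4 input features -> 10 hidden units -> 3 output units
layers = [(4, 10), (10, 3)]
total = sum(dense_params(n_in, n_out) for n_in, n_out in layers)
print(total)  # (4*10 + 10) + (10*3 + 3) = 50 + 33 = 83
```

Distractor options could then include the no-bias count (70) and the count for only one layer, each diagnosing a different misconception.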

Border pixels

✅ as an aside: you might like to link to this callout in the Data Carpentry Image Processing curriculum, which talks in some detail about convolution at the outer limits of an image.
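The effect at the image border is easy to demonstrate numerically. A hedged sketch, assuming stride 1 and the standard 'valid'/'same' padding conventions (the sizes are illustrative, not from the lesson):

```python
# Illustrative output sizes for a 1D/2D convolution dimension at stride 1.
# With 'valid' padding the kernel never crosses the image edge, so the
# output shrinks; 'same' padding zero-pads the borders to keep the size.
def conv_output_size(image_size, kernel_size, padding):
    if padding == "valid":        # no padding: kernel stays inside image
        return image_size - kernel_size + 1
    elif padding == "same":       # borders padded so output size matches
        return image_size
    raise ValueError(f"unknown padding: {padding}")

print(conv_output_size(28, 3, "valid"))  # 26
print(conv_output_size(28, 3, "same"))   # 28
```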

Number of model parameters

✅ I don't think this one needs to be multiple-choice, because it is formulated as a direct follow-up to the previous parameters exercise

Convolutional Neural Network

✅ but I am not sure about the meaning of the term 'expressiveness' in this context - consider a different wording, or describing it more thoroughly. (I searched and the word is only used twice in the lesson, in the solutions to this exercise and the previous one.)

Network depth

Why and when to use convolutional neural networks

✅ This seems like an excellent reflective discussion exercise.

Vary dropout rate

General comment

I find the concept of diagnostic power a little harder to define for an "intermediate" kind of lesson like this: the skills and concepts you are teaching are not really about programming with Python. Instead they are much more about deep learning methods, the concepts associated with dataset exploration, training and test sets, etc., and the skills needed to evaluate the results/performance obtained from a model. I am not sure how easy it is to design exercises that could elegantly diagnose specific misconceptions in this context (though I am admittedly no expert in deep learning, so I am only guessing). With this in mind, the kind of open discussion and exploration challenges you are using throughout the lesson seem very appropriate. One risk I can identify, however, is that these kinds of exercises will need to be given plenty of time, to allow for sufficient exploration/discussion so that learners and instructors have a good chance of identifying and correcting misconceptions.

svenvanderburg commented 1 year ago

@tobyhodges I think you're on point with your comment!

Regarding the number of parameters exercise, I agree and raised a new issue: #342. Shall we close this ticket?