svenvanderburg closed this issue 1 year ago
✅ We could turn this into an MC question, where each answer is a plausible distractor with diagnostic power. I would say the open-question answers also hold diagnostic power: you can deduce from them what people likely did wrong.
✅
✅ This is more to trigger a discussion
✅
✅
✅ Could be an MC question, but the open answers also give you good diagnostics.
✅
✅
✅
✅
✅
✅
This one triggers reflection more than it serves as formative assessment ✅
This one triggers reflection more than it serves as formative assessment ✅
✅ Great exercise, but I improved the wording a little bit: https://github.com/carpentries-incubator/deep-learning-intro/pull/335
Assuming that the exercise is supposed to test learners' ability to specify the number of nodes in each layer, and evaluate the extent of overfitting in the resulting model, this challenge seems good. However, the solution needs some work - see #339.
This is a large exercise, but I like it.
✅
✅
I am not familiar enough with the subject matter to be sure, but this one feels like it could be good in multiple-choice format. Are there common mistakes people make when estimating the number of parameters? If so, you might be able to provide a few different answer options, with each of the incorrect answers serving to diagnose one of these common misconceptions.
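To make that concrete, here is a minimal hypothetical sketch (the layer sizes are mine, not taken from the exercise) of the kind of parameter arithmetic such distractors could probe, assuming the Keras setup used elsewhere in the lesson:

```python
# Hypothetical illustration (not from the lesson): counting parameters by hand
# versus letting Keras report them, to show the kind of mistake (e.g. forgetting
# the bias terms) that an incorrect MC answer could diagnose.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(4,)),                      # 4 input features
    keras.layers.Dense(10, activation="relu"),    # 4*10 weights + 10 biases = 50
    keras.layers.Dense(3, activation="softmax"),  # 10*3 weights + 3 biases = 33
])

# 50 + 33 = 83; an answer of 70 would suggest the learner counted weights only.
print(model.count_params())  # 83
```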
✅ as an aside: you might like to link to this callout in the Data Carpentry Image Processing curriculum, which talks in some detail about convolution at the outer limits of an image.
✅ I don't think this one needs to be multiple-choice, because it is formulated as a direct follow-up to the previous parameters exercise
✅ But I am not sure about the meaning of the term 'expressiveness' in this context; consider a different wording, or describe it more thoroughly. (I searched, and the word is only used twice in the lesson, in the solutions to this exercise and the previous one.)
✅
✅ This seems like an excellent reflective discussion exercise.
✅
I find the concept of diagnostic power a little harder to define for an "intermediate" kind of lesson like this: the skills and concepts you are teaching are not really about programming with Python. Instead they are much more about deep learning methods, the concepts associated with dataset exploration, training and test sets, etc., and the skills needed to evaluate the results and performance obtained from a model. I am not sure how easy it is to design exercises that elegantly diagnose specific misconceptions in this context (though I am admittedly no expert in deep learning, so I am only guessing). With this in mind, the kind of open discussion and exploration challenges you are using throughout the lesson seem very appropriate. One risk I can identify, however, is that these kinds of exercises will need to be given plenty of time, to allow for sufficient exploration and discussion so that learners and instructors have a good chance of identifying and correcting misconceptions.
@tobyhodges I think you're on point with your comment!
Regarding the number of parameters exercise, I agree and raised a new issue: #342. Shall we close this ticket?
Check that all exercises are designed with diagnostic power
From https://carpentries.github.io/instructor-training/reference.html: Diagnostic Power: The degree to which a wrong answer to a question or exercise tells the instructor what misconceptions a particular learner has.
From the Carpentries Curriculum Development Guide: Multiple choice questions (MCQs) can be a useful tool for formative assessment if they are designed such that each incorrect answer helps the Instructor to identify learners' misconceptions. Each incorrect answer should be a plausible distractor with diagnostic power. "Plausible" means that an answer looks like it could be right, and "diagnostic power" means that each of the distractors helps the instructor figure out what concepts learners are having difficulty with.