ossu / computer-science

🎓 Path to a free self-taught education in Computer Science!

Request for Comment: Framework for Evaluating Courses #609

Closed: waciumawanjohi closed this issue 4 years ago

waciumawanjohi commented 5 years ago

Problem: OSSU has no structure for gathering student judgement of courses.

Duration: Nov 15, 2019

Current State: Awaiting feedback

Proposer: Waciuma Wanjohi

Details: Open Source Society should make it possible for all contributors and students to give meaningful feedback on course quality. Several sites exist that gather and display 5-star ratings and reviews, but none of these offer comprehensive course evaluations. Gathering such data will allow OSSU to make informed decisions about course recommendations.

Proposal: Employ a two stage strategy for course evaluation.

1) Ask students to use a modified version of California State University, Chico's Rubric for Online Instruction to evaluate the quality of teaching in a course.

2) Ask students to identify the Computer Science Curricula 2013 (CS2013) Body of Knowledge coverage that a course achieves.

On 1: CSU Chico's rubric was first written in 2003. It is an evaluation tool used by over 100 institutions of higher education (source). It evaluates a number of important areas of course development, including learner support, instructional design, and student assessment, and measures how well courses live up to 25 goals.

On 2: The Computer Science Curricula 2013 is our primary curricular guideline. But OSSU has not carried out a comprehensive evaluation of the recommended courses to determine if the curriculum meets expectations. By asking students to evaluate completed courses, OSSU can generate the data necessary to undertake this important task.
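
To make the two-stage proposal more concrete, below is a minimal sketch of what a single collected evaluation record might look like. This is purely illustrative and not part of the RFC: the class, field names, rubric category labels, and the 1-3 scale are all assumptions.

```python
# Hypothetical sketch, not part of the RFC: one way a single student
# evaluation record could be stored if OSSU collected both kinds of data.
# The class name, field names, rubric category labels, and the assumed
# 1-3 rubric scale are illustrative choices, not anything the proposal specifies.
from dataclasses import dataclass, field

@dataclass
class CourseEvaluation:
    course: str                              # course being evaluated
    evaluator: str                           # OSSU username of the student
    # Stage 1: scores against the (adapted) Rubric for Online Instruction.
    # Keys are rubric goals/categories; values use an assumed 1-3 scale.
    rubric_scores: dict[str, int] = field(default_factory=dict)
    # Stage 2: CS2013 Body of Knowledge units the student judges the course
    # to cover, written as "KnowledgeArea/Unit".
    bok_units_covered: list[str] = field(default_factory=list)
    comments: str = ""

# Example record with made-up data:
evaluation = CourseEvaluation(
    course="Example intro programming MOOC",
    evaluator="student123",
    rubric_scores={"Learner Support & Resources": 3, "Instructional Design & Delivery": 2},
    bok_units_covered=["SDF/Fundamental Programming Concepts", "SDF/Algorithms and Design"],
    comments="Strong problem sets; little instructor interaction.",
)
print(evaluation.bok_units_covered)
```

Records like this could then be aggregated per course to compare rubric scores and Body of Knowledge coverage across the curriculum.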

Alternatives:

References:

Bractofishy commented 5 years ago

Thanks, man, for all the contributions you are making and for keeping this guide up to date; it is very helpful for children like me. 😊

joshmhanson commented 4 years ago

I am not an educator in any sense, but having an intense interest in computer science and the education thereof, I want to add my two cents.

I have read that some teaching methodologies, like active learning and, more broadly, those associated with Constructivism, can often produce better student outcomes than traditional lecture-style courses, but may in some cases lead to poorer instructor evaluations, for reasons such as students feeling they didn't have as much instructor interaction. (I have also seen evidence to the contrary, as you will find if you look at the Wikipedia articles.)

I would tentatively generalize these results by pointing out that any kind of genuine/effective deliberate practice doesn't feel good, at least not at first. It feels like pain, much as exercise feels like pain. In contrast, just watching a video doesn't invite nearly as much pain and can trick the viewer into thinking they've learned something that they haven't truly internalized. Some of this was discussed in that Coursera course Learning How to Learn. As a result, students who are learning material more strongly might mistakenly think they are not learning (since they experience this constant mental consternation), while those who are learning more weakly will think they are learning the most since they never feel pained by the material.

By analogy, I despise vegetables and my eater evaluation of any plate that includes mainly vegetables will be much lower than that of, say, a plate of meat or pizza. Yet, being healthy requires eating disgusting amounts of vegetables every day. Part of the value of an educational institution is that it creates an environment where the unpleasant but necessary work of authentic learning is rewarded more highly than shallow learning.

I say this not to oppose student evaluations, but rather to caution against treating them as the gospel truth of course quality, especially at the novice/early stages of the curriculum where students haven't yet become accustomed to the daily grind of really learning computer science. As this article points out (I reference this only because the original article is paywalled), students are not well-positioned to evaluate the content of a course, as they won't know how the material fits into the big picture until they've learned enough to understand the big picture.

Conversely, students are better positioned than anyone else to evaluate the practical realities of taking the course. So as far as traditional course evaluations go, I think we should try to limit them to assessing basic quality of life issues that aren't easily captured by objective measurement instruments.

In response to the specific suggestions given in the RFC:

  1. I like some parts of Chico's Rubric for Online Instruction, and it seems to fall more within the "quality of life" aspect that I mentioned above. The issue I see is that it seems to be designed for a single, specific course (i.e., one edX/Coursera course), while the direction I see OSSU moving in¹ is towards subjects being the organizing unit of study, with a selection of supporting resources. I also don't like how section 5 of the rubric seems to suggest that any online course needs to incorporate some sort of technological "innovation" for the course to score well. I care about high-quality learning, not fancy tech.² Maybe we could adapt this rubric to our needs.
  2. CS2013 is a 500-page document, and somewhat overwhelming to navigate. A small academic steering committee within OSSU might be willing to pore over it, but I don't think the general student body of OSSU is going to want to do that. What if we instead asked students to summarize and explain in their own words the most important ideas they learned? This could serve multiple purposes: (1) documenting the progress of students through the curriculum; (2) asking students for active recall of the material; and (3) helping OSSU evaluate, in an open-ended format, how intended learning outcomes contrast with actual outcomes.

¹ The reason I see it this way is in part because of the reduced reliability of freely accessible material that we've been seeing on edX and other MOOC sites, and in part because I often feel the very best material just happens to be in written/textbook form or in the form of pure YouTube videos, as these have much lower production costs compared to real MOOCs.

² Over the past months I've been developing my own internal hierarchy of importance for what makes great educational material for self-study. At the top of this hierarchy sit clear and precise definitions arranged into a logical progression, which many, many courses fail to provide; such definitions can sometimes be found online, but they may be deeply misleading or confusing. (Example: an instructor might define circles in a certain way, such as the set of all points equidistant from the center. The student might not be at the point of understanding that this definition only happens to be true under certain limited circumstances, and that it is not generalizable to all circles: e.g., over the rationals, a circle might not have any points, or it might have holes where the equation doesn't have a solution.) Sitting just underneath this are high-quality problem sets, which are hard to find online and sometimes quite hard to design on your own until you really know what you're doing. These two things are of primary importance in my hierarchy because the professor/author is basically the only person able to provide them.
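
As a concrete illustration of the circle example in footnote 2 (my own addition; the specific circles are chosen only for illustration), here is a short standalone LaTeX note showing both phenomena:

```latex
% Worked illustration of the "circle over the rationals" claim in footnote 2.
% The particular circles are illustrative choices.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

\textbf{No rational points.} The circle $x^2 + y^2 = 3$ has no rational points.
If it did, clearing denominators would give integers $a, b, c$ with
$\gcd(a,b,c) = 1$ and $a^2 + b^2 = 3c^2$. Squares are $0$ or $1 \pmod 3$,
so $a^2 + b^2 \equiv 0 \pmod 3$ forces $3 \mid a$ and $3 \mid b$; then
$9 \mid 3c^2$, hence $3 \mid c$, contradicting $\gcd(a,b,c) = 1$.

\textbf{Holes.} The unit circle $x^2 + y^2 = 1$ has infinitely many rational
points, yet at $x = \tfrac{1}{3}$ the equation gives $y^2 = \tfrac{8}{9}$,
which has no rational solution, so that vertical line misses every rational
point of the circle.

\end{document}
```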

Of secondary importance, to me, is the availability of student-authority interaction — by which I mean student-instructor interaction, but the "instructor" can be anyone who is an authority on the material, and sometimes StackOverflow is good enough. This is more important to me than student-student interaction since other students might give incorrect or misleading help.

Finally, I consider all the other typical criteria for course quality to be of tertiary importance: student-student interaction (OSSU can provide this itself, if needed), aesthetic design, variety of learning/teaching styles for different kinds of learners, etc. I consider these tertiary because they are all very nice to have, but lacking them, high-quality learning can still take place as long as the factors of higher importance are present.