
Work out how to improve course content as a whole #59

Open PeterJCLaw opened 5 years ago

PeterJCLaw commented 5 years ago

The feedback from the 2018 summer school was a little lower than the 2017 feedback. While it's still good overall, we should look into what we can do to improve it.

Extracting some comparative figures from the "Feedback Analysis C&M 2017" and "Feedback Analysis CE&R 2018" files in the drive, there's some possibly interesting data there.

We do need to bear in mind that the number of respondents is small relative to the level of precision implied by the summary figures.

Looking at what we can extract from the sheets, this is the data we can draw comparative conclusions from:

| Content | 2017 | 2018 |
| --- | --- | --- |
| How To Build Your Robot | 72% | 68% |
| Workshop: Mechanics | 72% | 71% |
| Robot Hacking | 87% | n/a |
| Workshop: Python | 70% | n/a |
| Workshop: Electronics | 74% | 71% |
| Robotics Talk | 68% | n/a |
| Competition | 90% | 83% |
| What is your overall opinion of the course? | 91% | 86% |

Given that the data set has about 40 respondents in each case, the only one of these which is usefully discernible from noise is the "competition" one. There are some hints that our presentations and workshops may have dropped in quality, but the changes are pretty small. There might be something in the "overall opinion" one, though as noted that's likely to have been influenced by strong swings in other factors.
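
For a sense of scale on the noise, here's a rough sketch of a pooled two-proportion z-test per row. The figures are copied from the table above; the assumption that each percentage behaves like a binomial proportion from ~40 respondents is mine rather than anything stated in the sheets (the real answers are Likert-style ratings), so treat the output as order-of-magnitude only. With groups this small, the standard error on a year-on-year difference works out at roughly 7–10 percentage points.

```python
import math

# Rough check of whether the 2017 -> 2018 changes exceed sampling noise.
# Assumption (not from the sheets): each percentage behaves like a binomial
# proportion estimated from ~40 respondents per year.
N_2017 = 40
N_2018 = 40

rows = {
    "How To Build Your Robot": (0.72, 0.68),
    "Workshop: Mechanics": (0.72, 0.71),
    "Workshop: Electronics": (0.74, 0.71),
    "Competition": (0.90, 0.83),
    "Overall opinion of the course": (0.91, 0.86),
}

for name, (p_2017, p_2018) in rows.items():
    # Pooled two-proportion z-test for the difference between the two years.
    pooled = (p_2017 * N_2017 + p_2018 * N_2018) / (N_2017 + N_2018)
    se = math.sqrt(pooled * (1 - pooled) * (1 / N_2017 + 1 / N_2018))
    z = (p_2018 - p_2017) / se
    print(f"{name:32s} change {100 * (p_2018 - p_2017):+5.1f} pp   z = {z:+.2f}")
```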

PeterJCLaw commented 5 years ago

Ok, so now to try to actually understand those numbers.

I think it's important to note upfront that the above only compares SourceBots' contributions to themselves. If we instead compare our contributions to other parts of the course, the data (which I'm deliberately not publishing here) suggests that our contributions are amongst the lower rated parts of the course. While that's not universally the case (the competition line generally compares well), it does suggest we've got plenty of room to improve.

Next I think it's worth thinking about what the participant experience is and what they're going to be thinking about when rating the course. This is likely to be different by category, though I'd expect all would include:

- whether things ran smoothly and to plan (the logistics)
- how clear and enjoyable each session was
- whether they came away with a sense of achievement

Even if it's not perfect, this gives us a framework to work with.

I don't recall being particularly involved with participant-facing things in 2017, though I did see some in 2018. From what I recall we had a few occasions where things didn't go to plan: there was a lecture which was missing its lecturer, and a couple of times in the labs we didn't seem to know when or where we were supposed to be. These are things we ought to be able to fix easily.

On the clarity/enjoyment side, we need to ensure that our segments are well prepared and enjoyable. They need to cover useful topics in a manner which is understandable.

For presentations, the presenter needs to really know the topic they're talking about, and we need to know that the material will fit its timeslot well. My recollection of our presentations is that we rush their preparation and often get the timings wrong.

For workshops, we need to have checked them ourselves to ensure that we're introducing things at a good pace, with the necessary tools and equipment readily available. I've not been in the workshops, so I can't comment on how well we do this.

We also need to ensure that mentors are available without being either distant or overbearing. I was recently talking to a colleague at work who had been to a Django Girls day and commented that the mentors there had mostly stayed at their own table and had thus felt absent and unapproachable. I was surprised by this, both because I'd assumed Django Girls would be good at this and because sitting at our own table sounds like it describes what I've seen (and been part of!) at Tech Days. If we're doing the same in workshops, then we could easily stand to be much more active in going to help the participants. Obviously we need mentors who understand each workshop well in order to do that, though this can be aided (albeit not solved) by having mentors run through the workshops ahead of time themselves.

On the achievement side, I would suggest we need to have various incremental things which participants can feel they've completed on the way to building their robot. From what I've seen of our workshops (building individual sensors before combining the whole), I think we're actually good at this.

trickeydan commented 5 years ago

Actions that I can recall immediately:

- Workshop: Python needs major work. It was very ad hoc last year. I suggest letting somebody who has experience teaching programming run it this year.