carpentries-incubator / machine-learning-trees-python

Introduction to tree models with Python
https://carpentries-incubator.github.io/machine-learning-trees-python

Post DUSC Instructor thoughts #24

Open DimmestP opened 1 year ago

DimmestP commented 1 year ago

Thoughts after DUSC workshop 15/11/23:

tompollard commented 1 year ago

This is great, thanks for the helpful feedback @DimmestP. I'll try to find some time to think about how the points can be addressed (and would welcome pull requests in the meantime!).

Some really quick thoughts:

Also using a non-medical dataset would expand the usability

I have mixed feelings about switching to a non-medical dataset (though I admit that's partly because of my own bias towards health data!). Wouldn't any dataset we choose have some kind of topic? I dislike "toy datasets" like Iris, so I'd be happy to switch, but preferably to something interesting.

Generally needs more programming tasks.

Agreed, definitely more work needed here. I intentionally tried to reduce time spent on data pre-processing because it is covered in an earlier workshop, but I agree that evaluation, tasks, etc would be good topics.
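One possible shape for an evaluation exercise, sketched here on a synthetic stand-in dataset (the workshop's actual data and column names are not assumed): fit a tree, then have learners compute and interpret accuracy and AUC on a held-out test set.

```python
# Hedged sketch of an evaluation task; make_classification stands in
# for the workshop dataset, and all parameter choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = make_classification(n_samples=500, n_features=5, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Fit a depth-limited tree on the training split only.
tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X_train, y_train)

# Evaluate on the held-out test split.
y_pred = tree.predict(X_test)
y_prob = tree.predict_proba(X_test)[:, 1]
print(f"Accuracy: {accuracy_score(y_test, y_pred):.2f}")
print(f"AUC: {roc_auc_score(y_test, y_prob):.2f}")
```

A task like this could ask learners to vary `max_depth` and observe how the test metrics change, which also motivates the later ensemble material.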

The course really could do with highlighting the benefits of random forests and gradient boosting. This can only be done by adding more features sooner.

For me this is a tough one. I have found the visualization aspect of the workshop to be important, and it's not ideal that the ability to visualize models diminishes as the number of features increases.

Ideally I'd like it if we could (1) keep visualization and (2) work out how to incorporate more features when needed (e.g. to demonstrate improved performance).

Perhaps ignore gradient boosting entirely. It is skimmed over so fast it doesn't convey any of the benefits or differences over random forests.

I agree the gradient boosting section needs work. I'd like to keep it if possible, and add more detail.

At this point in the workshop, I usually take people to PubMed and point out some of the papers that have been published on this dataset using XGBoost. Not that they are exciting papers, but that prior to the workshop I think many people would believe those papers were doing something special.

Ideally the code should not keep renaming the `mdl` variable, but should create a new variable for each model, to help comparison.

Definitely, there are a bunch of things like this that need cleaning up!
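The suggested style might look like the sketch below (synthetic data, illustrative parameters): one clearly named variable per model rather than repeatedly rebinding `mdl`, so all fitted models remain available for side-by-side comparison.

```python
# Hedged sketch of distinctly named model variables for comparison,
# instead of reusing a single `mdl` name for each model in turn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

X, y = make_classification(n_samples=400, n_features=6, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Each model gets its own variable and stays fitted for later comparison.
tree_model = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_train, y_train)
forest_model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_train, y_train)
boosted_model = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)

for name, model in [("tree", tree_model),
                    ("random forest", forest_model),
                    ("gradient boosting", boosted_model)]:
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.2f}")
```

This also makes the random forest vs gradient boosting comparison discussed above a one-line exercise rather than a re-run of the whole notebook.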