Closed: Edward-RSE closed this issue 4 months ago
The sentence "Use better theoretical methods that require less computation..." is a bit too vague, I think.
Prior to the last point, it may also be worth mentioning the use of a "good enough" approximation, as I think that is usually the more common approach (e.g. first-order approximations/lower-order methods, or simplified physics).
We don't specifically refer to the sequential and parallel diagrams in the text.
The painters analogy is good! I like how you brought concurrent vs parallel into it. I think we should keep this analogy throughout the lesson.
I'm not sure I like the order of this section. I think it would flow better to first introduce shared vs distributed memory and processes vs. threads before talking about the factors we have to take into account with parallel programming. For example, there's a bullet about having to communicate data between CPU cores, but we haven't introduced why that would be the case. The same goes for the mention of race conditions.
The sentence "either these cores share the same RAM (shared memory) or each core has its own dedicated RAM (private memory)" is not worded correctly. Each core of a CPU will share the same physical RAM. I think what you mean is that each *process* has its own private/dedicated memory.
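To illustrate the distinction I mean, here is a minimal sketch (not from the lesson, and the names are my own) showing that threads within one process share the parent's memory, while each child process works on its own private copy. The lock also hints at the race-condition point raised above.

```python
import threading
import multiprocessing

data = {"count": 0}      # state owned by the parent process
lock = threading.Lock()  # guards the shared update against a race condition


def bump():
    with lock:
        data["count"] += 1


def run_demo():
    data["count"] = 0  # reset so the demo is repeatable

    # Four threads all update the same shared `data` dictionary.
    threads = [threading.Thread(target=bump) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    after_threads = data["count"]  # 4: every thread saw the same memory

    # Four processes each bump their own private copy of `data`;
    # the parent's dictionary is untouched by the children.
    procs = [multiprocessing.Process(target=bump) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    after_procs = data["count"]  # still 4 in the parent process

    return after_threads, after_procs


if __name__ == "__main__":
    print(run_demo())
```

Something along these lines might even make a good callout in the episode, since it shows the "private memory belongs to the process, not the core" point directly.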
I think we should refer to the diagrams explicitly in the text in the processes and threads callout boxes.
I like the painter's arm analogy. It explains the concept well.
Might be worth mentioning other multi-processing/threading APIs from other languages, such as multiprocessing in Python or the parallel package in R, to give more context.
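For context, the Python case could be as small as this sketch (my own illustrative names, using only the standard-library `multiprocessing` module): a pool of worker processes splits the input among themselves, which is the same process-based model the lesson builds towards with MPI.

```python
from multiprocessing import Pool


def square(x):
    # Trivial stand-in for a real per-item computation.
    return x * x


def parallel_squares(values, workers=4):
    # Each worker process receives a chunk of `values`;
    # results come back in the original order.
    with Pool(processes=workers) as pool:
        return pool.map(square, values)


if __name__ == "__main__":
    print(parallel_squares(range(8)))
```

An equivalent one-liner exists in R with `parallel::mclapply`, so a short side-by-side mention would cover both languages.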
Should we also include a few sentences about hybrid parallelism, if we introduce both types of memory model?
I would include an exercise to get them thinking about what they've just read. It could be as simple as asking learners to think about tasks in their life/research they could parallelise and if they'd use processes/threads. Something like that.
We should think about moving the sections (or parts of those sections) "Serial and Parallel Execution", "Parallel Paradigms" and "Algorithm Design" from episode 3 into this episode. I think this content would fit better earlier on, as it is more broad and not specific to MPI (like this episode).
Maybe we could add a bit more detail to the answer, like:
This issue is to track comments and changes for this episode.