In 3.2 Neural Networks, we feel we would have benefited from more detail about how a perceptron combines its weights, biases, and activation function to produce an output.
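To illustrate the kind of concrete example that would have helped us here, a minimal sketch of a perceptron's forward computation (the inputs, weights, and bias values below are made up for illustration, not taken from the chapter):

```python
import numpy as np

def perceptron(x, w, b):
    """Weighted sum of inputs plus bias, passed through a step activation."""
    z = np.dot(w, x) + b          # weighted sum of inputs, shifted by the bias
    return 1 if z > 0 else 0      # step activation turns z into the output

# hypothetical inputs and parameters, chosen only for illustration
x = np.array([1.0, 0.5])
w = np.array([0.6, -0.4])
b = -0.1
print(perceptron(x, w, b))        # z = 0.6 - 0.2 - 0.1 = 0.3 > 0, so output is 1
```

Even a short snippet like this, walking through weights, bias, and activation in one place, would have made 3.2 much clearer for us.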
In the 3.2.2 Multilayer Perceptron section, we didn't fully understand the concept of layers until we watched the included videos. If students are meant to grasp these ideas from the text alone, perhaps go into more detail there?
After reading the Backpropagation section, we still found ourselves confused about how the algorithm benefits deep neural networks.
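To show what we mean, a worked example like the following might help readers see the key benefit: backpropagation reuses gradients computed at later layers to get gradients at earlier layers via the chain rule, so training deep stacks of layers stays tractable. The two-layer network and its parameter values below are our own hypothetical example, not from the chapter:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny two-layer network: y = w2 * sigmoid(w1 * x), with squared-error loss.
x, target = 1.5, 0.0
w1, w2 = 0.8, -0.5

# Forward pass, caching intermediate values for reuse in the backward pass.
h = sigmoid(w1 * x)
y = w2 * h
loss = 0.5 * (y - target) ** 2

# Backward pass: gradients flow from the output back toward the input.
dy = y - target                 # dL/dy at the output layer
dw2 = dy * h                    # dL/dw2 uses the cached hidden activation
dh = dy * w2                    # gradient handed back to the earlier layer
dw1 = dh * h * (1 - h) * x      # dL/dw1 reuses dh via the chain rule
```

The point the section could spell out is that `dh` is computed once and reused; without that reuse, each layer's gradient would have to be derived from scratch, which does not scale to deep networks.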
We wish there were more detail in the CNN section. Specifically, how do CNNs differ from multilayer perceptrons?
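For instance, a side-by-side sketch contrasting the two layer types would have answered our question directly: an MLP layer gives every input-output pair its own weight, while a convolutional layer slides one small shared kernel across the input. The sizes and kernel values below are our own illustrative choices, not from the chapter:

```python
import numpy as np

x = np.arange(8, dtype=float)          # 8 inputs

# Dense (MLP) layer: an 8x8 weight matrix, i.e. 64 independent parameters,
# and every output depends on every input.
W = np.random.randn(8, 8)
dense_out = W @ x

# 1-D convolutional layer: one shared kernel of size 3, i.e. 3 parameters,
# applied at every position, so each output depends only on a local window.
k = np.array([0.25, 0.5, 0.25])
conv_out = np.convolve(x, k, mode="valid")
```

Spelling out this contrast (shared local kernels versus dense all-to-all weights) is exactly the kind of detail we were missing in the CNN section.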
In 3.2.5 Choosing Traditional ML vs DL, we were under the impression that deep learning is itself a kind of machine learning, so framing the two as alternatives confused us. Could you clarify this relationship in the text?
Chapter Three - DL Primer
Machine Learning Systems - 3 DL Primer.pdf
_Originally posted by @sgiannuzzi39 in https://github.com/harvard-edge/cs249r_book/discussions/256#discussioncomment-9729854_