milesfrain opened this issue 4 years ago
We shouldn't even mention bind or monad here.
Maybe... but if I'm reading through this book and I have already come across these terms, do you think I might find it a bit weird that the book would choose not to mention them here? It's not as if they're unheard of outside of the pure FP niche. I think the book should at least mention that do notation is not just for arrays (or whatever type we first use it with), as otherwise readers might come away with the impression that it is just for arrays.
This early exposure to a more complex and mysterious example of do notation sets readers up for confusion in later chapters.
Could you explain this one in a little bit more detail please?
array comprehensions are not foundational material
I'm not sure about this; I think being able to work with arrays is pretty much necessary, and concatMap in particular is likely to be used in lots and lots of places. If this section is made optional, will readers who have skipped it be able to work with arrays? In the past, I've seen beginners confused because they wanted to use functions which are provided by Array's functor/applicative/monad instances and didn't realise that the functions they were looking for are provided by those instances.
some coverage of simpler do notation examples (with Maybe, Effect, etc.) is needed to fully understand this section
I do agree that Maybe may be a better starting point for do notation, but I'm not sure full understanding should really be a goal of any part of this book. I don't think full understanding of do notation is possible until you're well into the "intermediate" category.
And what's going on with this comparison between map and map?
In this sentence, the first occurrence of map refers to this bit, earlier on in the chapter:
The type of map is actually more general than we need in this chapter. For our purposes, we can treat map as if it had the following less general type:
forall a b. (a -> b) -> Array a -> Array b
whereas the second refers to Prelude's map with the more general type.
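For reference, a small sketch of the relationship between the two: Prelude's map is the general one, and the chapter's version is the same function restricted to Array (mapArray is a hypothetical name used only for illustration):

```purescript
module MapSketch where

import Prelude

-- Prelude's map has the general type
--   map :: forall f a b. Functor f => (a -> b) -> f a -> f b
-- The chapter's "less general" map is that same function restricted to Array:
mapArray :: forall a b. (a -> b) -> Array a -> Array b
mapArray = map
```

So the two maps are the same function for arrays; only the advertised type differs.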
I conflated the "do notation for arrays" section (which I'm proposing deferring) with the earlier "array comprehension" section, which we should keep.
To elaborate on the confusion with introducing do notation with an Array example: I think it gives readers the wrong impression that it always means "choose", and it requires some digging to figure out what's going on. At this point in the book it will be entirely mysterious.
https://github.com/purescript/purescript-prelude/blob/v4.1.1/src/Control/Bind.js
https://github.com/purescript/purescript-prelude/blob/v4.1.1/src/Control/Apply.js
The inner workings of do/bind for Maybe are much more transparent. So I think this should accompany the reader's first introduction to do notation, and then they won't also be trying to figure out how to incorporate this "choose" behavior into their mental model.
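For comparison, here is a rough sketch of what the two Bind instances amount to (standalone functions with hypothetical names, not the libraries' actual instance declarations):

```purescript
module BindSketch where

import Data.Array (concatMap)
import Data.Maybe (Maybe(..))

-- What Maybe's bind amounts to: unwrap and continue, or short-circuit on Nothing.
bindMaybe :: forall a b. Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe (Just x) k = k x
bindMaybe Nothing  _ = Nothing

-- What Array's bind amounts to: run the continuation for every element
-- and concatenate the results (this is where the "choose" flavour comes from).
bindArray :: forall a b. Array a -> (a -> Array b) -> Array b
bindArray xs k = concatMap k xs
```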
It's tricky to decide on the right sequence for material, since so much content is related. For example, should we cover type classes before Maybe
and do notation?
I don't think type classes should be covered before. This is how you get to "all theory before practice" type of book which in my opinion doesn't keep you engaged very well since you get to do practice way down the line.
I think beginners often wonder "How do I access the contents of a wrapped type?", e.g. given Just 5, how do I get to the 5, and what happens with Nothing?! do can simply be introduced here as "unwrap" in the appropriate context type.
I think the Haskell wikibook introduces it like this via Maybe; I can't remember exactly which chapter.
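A minimal sketch of that "unwrap in the appropriate context" framing (doubleContents is a hypothetical name, not taken from the book or the wikibook):

```purescript
module UnwrapSketch where

import Prelude
import Data.Maybe (Maybe(..))

-- n is 5 when the argument is Just 5; if the argument is Nothing,
-- the whole block is Nothing and there is nothing to unwrap.
doubleContents :: Maybe Int -> Maybe Int
doubleContents wrapped = do
  n <- wrapped
  pure (n * 2)

-- doubleContents (Just 5) == Just 10
-- doubleContents Nothing  == Nothing
```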
Also, the example with Array is confusing for beginners (speaking from experience) because it isn't clear why guard finishes the operations before pure, so maybe some of that needs to be restated or rethought a bit. You need to know the monad laws (bind etc.) in order to get why guard stops the computation.
Thinking of deferring the "do notation" (for arrays) section to Chapter 7.
The Applicative Functors for Parallelism section discusses how there's flexibility in ordering of effects (parallel vs sequential), which is a nice parallel to zippy vs default Applicative instances for Array and List.
I'd like to introduce do notation with Maybe, but that's getting a bit off topic from the "Recursion, Maps, and Folds" theme of chapter 4, so perhaps there's a better place to cover this. Chapter 7 makes a deep dive into Maybe and ado notation, and then Chapter 8 focuses more on do notation - either location seems like a good place to introduce do notation via Maybe.
Probably also need to move the "guards" section, since that builds on "do notation", but I don't expect this to be too disruptive to the chapter exercises.
The Applicative Functors for Parallelism section discusses how there's flexibility in ordering of effects (parallel vs sequential), which is a nice parallel to zippy vs default Applicative instances for Array and List.
Pedantic nitpick: I think you can have a zippy Apply instance for Array, but I don't think you can implement a lawful Applicative instance.
My 2 cents: I think chapter 4 is a disaster. It made me question whether PureScript is a usable language at all, and whether I was wasting my time trying to learn it. Magic is bad; any language that has a lot of magic is bad. The do notation in chapter 4 is magic: it works, but you have no clue why.
I do wonder how many before me dropped PureScript at chapter 4. (I honestly, and I am not exaggerating, decided to take a break and come back later when I had a clearer mind, but like an itch you could not scratch I came back a day later and then decided to skip this section; it was hopeless to try to understand. Even after a few people tried to explain it on Slack, without full knowledge of monads this remains magic: you can use it, but you have no clue how or why it works.)
Yes, please do thoroughly explain monads, why we need them, and why we can't do without them in PureScript, before you explain the simplified syntax or monad comprehension.
Why explain monad comprehension before you explain monads?
Please also note that one can't just google for the missing information. I tried googling to make sense of chapter 4; it doesn't work.
I do wonder how many before me dropped PureScript at chapter 4
I often wonder the same and suspect we don't hear from those folks, so I appreciate your persistence and willingness to take the time to provide feedback. Chapters 1, 3, and 4 all need more attention.
You need to know the monad laws (bind etc.) in order to get why guard stops the computation.
You don't need to know the monad laws to know why guard stops the computation; you just need to know that <- in an Array do block becomes concatMap, and that guard returns an empty array if the condition is false. This isn't really to do with the monad laws, but rather with the specifics of the Monad Array instance.
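A hedged sketch of that desugaring, using an example similar to (but not identical with) the chapter's factors function; note that guard is imported from Control.Alternative here, while older package sets export it from Control.MonadZero:

```purescript
module GuardSketch where

import Prelude
import Control.Alternative (guard)
import Data.Array ((..), concatMap)

-- The do-notation version.
pairs :: Int -> Array (Array Int)
pairs n = do
  i <- 1 .. n
  j <- i .. n
  guard (i * j == n)   -- [] when false, a one-element array when true
  pure [ i, j ]

-- What each <- desugars to: a concatMap.
pairsDesugared :: Int -> Array (Array Int)
pairsDesugared n =
  concatMap
    (\i ->
      concatMap
        (\j -> concatMap (\_ -> [ [ i, j ] ]) (guard (i * j == n)))
        (i .. n))
    (1 .. n)

-- A false guard yields [], the innermost concatMap has nothing to map over,
-- and that branch contributes nothing to the result -- no monad laws needed.
```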
I think it gives readers the wrong impression that it always means "choose", and it requires some digging to figure out what's going on [...] I'd like to introduce do notation with Maybe
I think that if you introduce do notation with just one example, there's always going to be a risk of people thinking it's more specific than it actually is. However, I think trying to explain the full generality of do notation upfront is much more risky than that; it's really quite a big hill to get over, as if the reader hasn't already come across a pure FP language, they probably won't have (properly) encountered an abstraction that's as powerful as Monad is. So I agree that starting with just one example at first is best, and I think Maybe is a good candidate because it is indeed simpler, but I disagree that the reason for doing so is that the Array example gives readers the wrong impression that it always means "choose"; I think the reason should just be that Array is too complex an example to start with.
Pedantic nitpick: I think you can have a zippy Apply instance for Array, but I don't think you can implement a lawful Applicative instance.
This is correct: you can only implement a zippy Applicative for lazy lists, because the implementation of pure has to be repeat if the laws are to be satisfied, and you can't implement repeat for strict lists or for Arrays.
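A minimal sketch of that point, assuming a hypothetical ZipArray newtype (not a library type):

```purescript
module ZipArraySketch where

import Prelude
import Data.Array (zipWith)

newtype ZipArray a = ZipArray (Array a)

instance functorZipArray :: Functor ZipArray where
  map f (ZipArray xs) = ZipArray (map f xs)

-- A zippy Apply is fine: combine the two arrays pointwise.
instance applyZipArray :: Apply ZipArray where
  apply (ZipArray fs) (ZipArray xs) = ZipArray (zipWith ($) fs xs)

-- There is no lawful Applicative to go with it: the identity law
-- (pure identity <*> v == v) forces pure x to behave like an infinite
-- repetition of x, which a strict Array cannot represent.
```

(For Haskell's lazy ZipList, pure is indeed repeat, which is exactly the point being made above.)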
I was looking for a way to give feedback on the "Array comprehensions/Do notation/Guards" trilogy in chapter 4, and this issue seems to be the best place for that. This feedback might give you some insight on why someone could be discouraged from learning PureScript altogether after reaching this chapter.
Note that I started the book already familiar with some basic concepts of functional programming (recursion, currying, immutability...), while still unaware of or unable to wrap my head around deeper concepts, especially monads. I understand that monads are used for side-effects but I still haven't managed to form an intuition of how they fit in the functional paradigm.
Without further ado:
- The do notation replaces the explicit concatMap, but as a beginner I personally prefer things to be explicit until I acquire a more intuitive understanding. This seems too soon.
- Using the pure function in the first example for the do notation seems like a bad idea. The function itself is simple, but introducing any new function just adds friction when trying to understand the example.
- The type Boolean -> Array Unit is shown, but I don't know what a Unit is, so this means nothing to me (see the Unit sketch below).
- The [[i,j]] array seems to be concatenated if the condition is met, and an empty array otherwise, but how this seemingly happens outside of the linear function parameter -> result flow that we've been used to from the beginning looks like magic to me.

As a general note, it seems like this chapter comes too early in the book. Algebraic types and pattern matching are introduced one chapter later, and they are to me both easier to understand and more fundamental to functional programming than do notation, or even recursion for that matter. Teaching recursion without being able to pattern match linked lists or trees in your examples feels a bit masochistic IMO, but maybe that argument has been discussed in another issue, in which case feel free to redirect me.
In any case, I would argue that rather than deferring the Array do notation section, deferring the whole chapter altogether should be considered.
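On the Unit point above, here is a small sketch of what guard produces when specialised to Array (keepGoing/stopHere are hypothetical names; guard may come from Control.MonadZero in older package sets):

```purescript
module GuardUnitSketch where

import Prelude
import Control.Alternative (guard)

-- Specialised to Array, guard has the type Boolean -> Array Unit.
-- Unit has a single value, unit, so the only information in the result is
-- whether the array is empty.
keepGoing :: Array Unit
keepGoing = guard true    -- [unit]: the rest of the do block runs once

stopHere :: Array Unit
stopHere = guard false    -- []: nothing downstream runs
```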
I'm not against including the do and guard comprehension example. It's a nice technique, and this shows how PureScript offers what other languages do with special syntax or special functions. It was the one part of chapter 4 that was confusing to me, though, and I think there are small changes that could help. I found everything before that point in the book to be very clear. (I'm familiar with Clojure's for list comprehension macro, and with Haskell comprehension syntax, which is of course designed to evoke related syntax in mathematics. I have a shallow understanding of the use of do in IO in Haskell and Idris.)
The previous discussion is long, and I have skimmed it. I don't think I am repeating anything above, but I might have missed something.
- The do returns [[i, j]] repeatedly (see the concatMap sketch after this list). But the original functional comprehension example operated on an array of pairs. So how do repeated arrays of single pairs substitute for an array of many pairs? That makes no sense intuitively. I think that a sentence or two explaining what is happening behind the scenes would help. This is related to one of @julien-deoux's comments.
- The comparison with an if or case construct puzzled me: the guard is performing a filtering function here, which doesn't feel like it involves branching. Since "array comprehension" seems to refer to the previous section: I don't think of map (or concatMap) as involving branching, either. (The guard reminded me a little of :while in Clojure's for macro, but the effect that's achieved in this code is actually like the for macro's :when--as illustrated in some of the examples on this page.)
- I have seen do in Haskell and Idris--but it's very limited experience--and I was confused by i <- 1 .. n and the similar expression for j: I think of <- as do's version of let. (I know that behind the scenes it's something else, but never mind that.) So to me, i <- 1 .. n looks like it assigns [1, 2, 3, 4, 5] to i all at once (if we let n = 5), instead of assigning 1 to i, then 2 to i, etc. The book is intended for people who don't know Haskell, so maybe this confusion shouldn't matter, but I wonder whether it would be worth adding a small note for people who don't know Haskell well but have seen one of the usual introductory IO do examples that assign what's on the right side of <- to the left side variable all at once.
- Even if the do example appears too early in the book, as I indicated above I think it's OK to include it here. But maybe it would help to say something like "You can think of this as a piece of special, magical syntax. It's not--as you'll learn--but for now, that's OK." I had never seen this use of do (as suggested above, my experience with Haskellian do is very limited), but this use does remind me in a kind of abstract way of Clojure's for macro. It looks like for but with a different syntax. From that point of view, the example is quite understandable, apart from a few points. (Hmm, then again, some people learning Clojure are confused by for at first.) There are many cases in these early chapters where the text indicates that there's more depth to know about, but we're taking it slow, so just pretend it's like this.
- Maybe the same could be done in the do section. The point above about what pure returns is confusing because it's not sufficiently connected to the array comprehension section.
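On the first point above, here is a small sketch (a hypothetical evenPairs example, not from the book) of how many single-pair arrays become one array of pairs:

```purescript
module ConcatSketch where

import Prelude
import Data.Array (concatMap)

-- Every element yields its own small array; concatMap concatenates them.
evenPairs :: Array (Array Int)
evenPairs = concatMap pick [ 1, 2, 3, 4 ]
  where
  pick i
    | i `mod` 2 == 0 = [ [ i, i ] ]   -- a single-pair array
    | otherwise = []                  -- contributes nothing

-- evenPairs == [ [ 2, 2 ], [ 4, 4 ] ]
```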
Some opinions and proposals for the Do Notation section (feel free to disagree):
- We shouldn't even mention bind or monad here.

Some other confusing bits of this section are discussed in https://github.com/purescript-contrib/purescript-book/pull/63
And what's going on with this comparison between map and map? Aren't those the same for arrays? Was this different in the past?