Closed: masaeedu closed this issue 5 years ago
In smart pipelines, it would be expressed thus:
const size = x => x |> Iter.map(Fn.const(1))(#) |> Iter.fold(Int.plus)(0)(#)
You need to include the lexical topic token (currently #) to make it explicit. It's intended to avoid footguns.
@mAAdhaTTah What kind of footguns does this avoid? When you're programming in a pointfree style as shown above, you're basically going to be writing # every time, often in an awkward place at the end of a multiline function application. Off the top of my head I can think of map, filter, reduce, scan, flatMap, concat, etc., all of which take more than one argument, and must have all but the last applied away in order to sensibly use them in a pipeline.
There needs to be some way of making reverse application play well with forward function application, otherwise it will be very painful to use.
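For concreteness, the pointfree pipeline in the size example above can be simulated in today's JS with a reduce-based pipe helper. The Iter, Fn, and Int helpers here are assumed stand-ins specialized to arrays, not the actual library from this thread:

```javascript
// A `pipe` helper simulating F#-style `x |> f` as `f(x)`, plus
// hypothetical curried stand-ins for the Iter/Fn/Int helpers above.
const pipe = (x, ...fns) => fns.reduce((acc, f) => f(acc), x);

const Fn = { const: c => _ => c };
const Int = { plus: x => y => x + y };
const Iter = {
  map: f => xs => xs.map(f),
  fold: f => z => xs => xs.reduce((acc, x) => f(acc)(x), z),
};

// `size` counts elements by mapping each to 1 and summing:
const size = xs => pipe(xs, Iter.map(Fn.const(1)), Iter.fold(Int.plus)(0));

size(["a", "b", "c"]); // → 3
```

Note that each pipeline step is a unary function produced by applying away all arguments but the last, which is exactly the pattern under discussion.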
@masaeedu Smart Pipelines aren't optimized for pointfree style, except insofar as x |> double is valid. It's also not that common in JS-land in general, so there's an argument that optimizing to avoid footguns in the common case is more important.
From the Smart Pipelines README:
The writer must clarify which of these reasonable interpretations is correct:
input |> object.method(#, x, y);
input |> object.method(x, y, #);
input |> object.method(x, y)(#);
There's more details there. I'll let @js-choi expand on the argument for this though, as this is his proposal.
Neither of the first two seems like a reasonable interpretation for an operator a |> b === b(a), for b === object.method(x, y), but I won't argue that point. If this is not a supported use case for this operator, I'd request that no attempt be made to "subsume" function composition using it. A separate function composition operator can be provided that just does simple function composition and facilitates programming in the style above.
If there is still any time to tweak the proposal, you might consider just interpreting all a |> b where b does not explicitly contain # as a |> (b)(#) === (b)(a).
This means input |> await object.method(x, y) is interpreted as (await object.method(x, y))(input). If this is not what the user wanted, they can explicitly specify await (object.method(x, y)(#)) or whatever it is they actually meant. Mentally desugaring any code involving |> would be quite straightforward: if b in a |> b doesn't contain #, desugar to (b)(a), irrespective of what else the b expression holds.
Edit: Oh, looks like there’s been some updates since I last loaded the page. I’ll reply to the other messages when I can.
Thanks for the question. The answer is that, yes, smart pipelines can accommodate curried functions. But if you create the curried functions inline, then you need to use topic style.
This phrase:
Only identifiers and .s; no ( ), [ ], or other ops
…applies only to smart pipelines’ bare style, which is a special convenience syntax for simple, unambiguous unary functions. All other expressions, including function calls on function calls, are supposed to use topic style.
const size = x => x |> Iter.map(Fn.const(1))(#) |> Iter.fold(Int.plus)(0)(#)
One benefit of this is semantic clarity. To summarize that link, in general, input |> f(otherArg) is ambiguous between three reasonable interpretations: f(otherArg, input), f(input, otherArg), and f(otherArg)(input).
With smart pipelines, input |> f(otherArg) is an early error, which forces the writer to clarify which interpretation they mean, for the human reader's benefit: input |> f(otherArg, #), input |> f(#, otherArg), or input |> f(otherArg)(#).
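To make the ambiguity concrete, here is a sketch in which the three readings produce three different results; the function f is made up purely for illustration:

```javascript
// A contrived function that behaves differently when called with one
// argument (autocurried) versus two (n-ary), so all three readings of
// `input |> f(otherArg)` can be distinguished.
const f = (a, b) =>
  b === undefined
    ? c => `curried(${a}, ${c})` // one-arg call returns a function
    : `nary(${a}, ${b})`;        // two-arg call returns a string

const input = "in", otherArg = "arg";

f(otherArg, input); // → "nary(arg, in)"    — input inserted as last argument
f(input, otherArg); // → "nary(in, arg)"    — input inserted as first argument
f(otherArg)(input); // → "curried(arg, in)" — autocurried application
```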
As for your second code block, I'm not sure what FSFX and Flt are; they don't resemble Ramda's or Lodash/FP's APIs. You'd have to give definitions of your helper functions for me to translate it to smart pipelines. The best I can guess right now is:
const downloadAndVerify = input => input
|> checkExists
|> checkExists(#) ? # : download(#)
|> do {
const hash = computeHash(#);
if (checksum === hash) #;
else throw new Error(`Checksum of downloaded file was ${hash}: expected ${checksum}`);
};
Smart pipelines can accommodate curried functions. But if you create the curried functions inline, then you need to use topic style, not bare style.
@js-choi The FSFX.chain function is a binary function:
chain :: (a -> FileSystem -> Fluture b) -> (FileSystem -> Fluture a) -> (FileSystem -> Fluture b)
equivalent to monadic bind. checkExists, doNothing, download etc. are all instances of the monad, i.e. functions FileSystem -> Fluture x.
You can't just unwrap things as you've done, because the chain is essential; this is a monadic computation. Similarly, if I have a functor of things, I can't just get rid of my map and unpack the computations to the top level.
Here's a simpler example with observables:
inputObservable
|> Obs.map(x => x * 2)
|> Obs.flatMap(x => Obs.delay(1000, x))
|> Obs.filter(x => x < 50)
|> Obs.scan(x => y => x + y)
Regarding your three reasonable interpretations, I don't think any of those is reasonable except for the last one. a |> b === b(a) by simple lexical substitution implies that foo |> x.y() is x.y()(foo). I guess what seems intuitive is subjective, so perhaps it really is a significant problem for users that a |> b === b(a) suggests foo |> x.y() desugars to x.y(foo). I'm just one data point.
Neither of the first two seem like a reasonable interpretation for an operator
Elixir, for example, does in fact slot in the pipeline value as the first parameter of the function, so that is certainly a feasible interpretation when coming from another language. I don't think the second is as likely, as JS doesn't have built-in currying, but it's not entirely unreasonable either. In any case, if it's ambiguous at all, forcing the developer to be explicit seems entirely reasonable.
That said, it sounds like you'd prefer F# Pipelines overall, as that mirrors more closely your expectations for how the operator will function. None of the current proposals have been "adopted", so there's plenty of time to adjust things as you like.
One thing I'll mention is that if we go with Smart Pipelines, there is less of a need to use curried functions. You could just have a normal n-ary functions with a placeholder. So it wouldn't require the parens and would look thus:
const size = x => x |> Iter.map(Fn.const(1), #) |> Iter.fold(Int.plus, 0, #)
@mAAdhaTTah That doesn't help you with x |> Iter.map(Int.add(1)); Int.add still needs to be curried. There are many other examples of functional programming patterns that cannot be accommodated, and even if they could be, they would be restricted to the body of the pipeline operator. Curried functions are a perfectly adequate solution to the problem of needing curried functions.
@mAAdhaTTah is correct that Elixir’s pipe operator tacitly inserts its input into first parameters. Clojure supports both tacit first- and last-parameter insertion. The R language’s magrittr library also uses tacit first-parameter insertion.
More importantly, there are many existing and idiomatic JavaScript APIs whose functions' "primary inputs" are first parameters, such as DOM fetch, ES new Uint8Array, DOM new WebSocket, Node fs.readSync, and pr's fs.read. It goes either way in real APIs; both ways are reasonable interpretations, as is autocurrying-style unary-function creation.
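The two tacit-insertion conventions being contrasted can be sketched as plain helpers; pipeFirst and pipeLast below are hypothetical illustrations, not part of any proposal:

```javascript
// Elixir/magrittr-style first-parameter insertion vs. the
// last-parameter insertion some other languages and libraries use.
const pipeFirst = (x, [f, ...args]) => f(x, ...args);
const pipeLast = (x, [f, ...args]) => f(...args, x);

// A non-commutative function makes the difference visible:
const sub = (a, b) => a - b;

pipeFirst(10, [sub, 3]); // → 7   (sub(10, 3))
pipeLast(10, [sub, 3]);  // → -7  (sub(3, 10))
```

Both conventions are self-consistent, which is why a pipeline body like f(otherArg) alone can't tell the reader which one is intended.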
When you're programming in a pointfree style as shown above, you're basically going to be writing # every time, often in an awkward place at the end of a multiline function application. Off the top of my head I can think of map, filter, reduce, scan, flatMap, concat, etc., all of which take more than one argument, and must have all but the last applied away in order to sensibly use them in a pipeline.
With regard to "you're basically going to be writing # every time", this is not true for applying to unary functions. o.m(input) can simply be written as input |> o.m. (It is true for n-ary functions, but, as discussed above, pipelining into n-ary function calls is ambiguous anyway.)
If there is still any time to tweak the proposal, you might consider just interpreting all a |> b where b does not explicitly contain # as a |> (b)(#) === (b)(a).
This possibility was considered. It was passed over in favor of the current simple covering rule ("you must include #, unless it's a simple function call"). Syntactic locality is another goal of the proposal. Such context sensitivity would require frequent large lookahead; this is a footgun for human readers.
Consider v |> foo(blah, bar, partial(foo, x, y + z), blah). Is this a tacit function call, or is it a topic-style expression? For a human to determine this crucial difference, they would have to carefully read the entire pipeline-body expression. They could easily miss the presence of a #, as with v |> foo(blah, bar, partial(foo, x, # + z), blah), which is a topic-style expression.
To mitigate this uncertainty, v |> foo(blah, bar, partial(foo, x, y + z), blah) is an early error. This guarantees to the reader that, if the program compiles, it's in topic mode, and there's a # somewhere in the body expression.
The FSFX.chain function is a binary function:
chain :: (a -> FileSystem -> Fluture b) -> (FileSystem -> Fluture a) -> (FileSystem -> Fluture b)
Ah, so this is using Fluture. I'd have to study its API more to give a better translation of the original code block https://github.com/tc39/proposal-pipeline-operator/issues/116#issue-311683977. But if nothing else, that could be accommodated by adding (#) or , #) to the end of each pipeline step, making it clear that it is a function call rather than just an expression; and if the writer forgets any (#), they will know with an immediate early error. This is indeed a tradeoff: a disadvantage for a particular API style in return for advantages in several other API styles.
Eventually the smart pipe syntax might be extended with built-in higher-order forms of application like functor mapping, monadic chaining, and Kleisli composition; @tabatkins, @isiahmeadows, and I have discussed this before (http://logs.libuv.org/tc39/2018-03-13, https://github.com/tc39/proposal-pipeline-operator/issues/116#issue-311683977, https://github.com/js-choi/proposal-smart-pipelines/issues/24). That idea is out of scope for now, though; I have my hands full enough with the core proposal's Babel plugin.
The same goes for the code block in https://github.com/tc39/proposal-pipeline-operator/issues/116#issuecomment-379027332:
inputObservable
|> Obs.map(x => x * 2)(#)
|> Obs.flatMap(x => Obs.delay(1000, x))(#)
|> Obs.filter(x => x < 50)(#)
|> Obs.scan(x => y => x + y)(#)
Here again, this is a tradeoff: a disadvantage for a particular API style in return for advantages in several other API styles. (At the very least, an important footgun is avoided: if the writer forgets any ending (#), they will know with an immediate early error.)
That doesn't help you with x |> Iter.map(Int.add(1)); Int.add still needs to be curried.
As far as I can tell, x |> Iter.map(Int.add(1), #) would still work.
Regarding your three reasonable interpretations, I don't think any of those is reasonable except for the last one. a |> b === b(a) by simple lexical substitution implies that foo |> x.y() is x.y()(foo). I guess what seems intuitive is subjective, so perhaps it really is a significant problem for users that a |> b === b(a) suggests foo |> x.y() desugars to x.y(foo). I'm just one data point.
One thing I'll mention is that if we go with Smart Pipelines, there is less of a need to use curried functions. You could just have normal n-ary functions with a placeholder.
There are many other examples of functional programming patterns that cannot be accommodated, and even if they could be, they would be restricted to the body of the pipeline operator. Curried functions are a perfectly adequate solution to the problem of needing curried functions.
First-parameter function calls, last-parameter function calls, autocurrying-function calls, and non-function-call expressions are all used in JavaScript. All of these styles, not just the autocurrying style, exist and are idiomatic JavaScript. (To repeat examples above of JavaScript APIs whose functions' "primary inputs" are first parameters: DOM fetch, ES new Uint8Array, DOM new WebSocket, Node fs.readSync, and pr's fs.read.) Smart pipelines attempt to accommodate all of these common styles, while ensuring distinguishability between them.
Thanks for your patience, @masaeedu.
More importantly, there are many existing and idiomatic JavaScript APIs whose functions’ “primary inputs” are first parameters, such as ...
That's fine, but this is a different contention from saying someone would get confused into thinking x |> y()() is equivalent to y(x)() or y()(x), especially if the behavior of the bare style is clearly documented as a |> b === a |> b(#). We're not debating whether to desugar x |> y.foo() to some other, more popular idiom in JS; we're debating whether it should be illegal syntax, for fear of someone getting confused over what it means. Disallowing x |> y.foo() isn't doing anything to make usage of fetch/Uint8Array etc. any sweeter.
this is not true for applying to unary functions
Yes, it's not true that you're going to append it in strictly 100% of expressions, but most combinators of interest accept a parameter that informs their behavior, so as I said, you're basically going to be writing it every time. You can of course make a temporary variable with const myMap = map(add(1)) to turn these combinators into unary functions, and then do foo |> myMap, but this defeats the purpose of the operator.
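The temporary-variable workaround just described can be sketched with assumed curried map and add helpers (hypothetical names, not from any library):

```javascript
// Curried helpers, as a pointfree codebase would define them:
const add = x => y => x + y;
const map = f => xs => xs.map(f);

// Bind away all but the last argument to get a unary function...
const myMap = map(add(1));

// ...so that a bare-style pipeline step could apply it: foo |> myMap.
myMap([1, 2, 3]); // → [2, 3, 4]
```

The objection above is that naming every such intermediate function reintroduces exactly the ceremony the pipeline operator was meant to remove.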
Syntactic locality
Implicitly appending a # to the expression doesn't result in any decrease in syntactic locality, because you always have to read the entire expression and find the # to understand the meaning of the expression. It's impossible to reason about what the legal smart pipeline expression v |> foo(blah, bar, partial(foo, x, # + z), blah) means unless you read the whole thing and find the #. If you don't find one, it's not a giant mental leap to understand that the application is with respect to the entire expression.
Obviously there's a need to exercise good judgement to prevent a needle-in-a-haystack situation with really complex expressions, but the need for good judgement applies to all language features, and importantly, it applies to the smart pipeline feature regardless of whether complex bare expressions are allowed or not.
As far as I can tell, x |> Iter.map(Int.add(1), #) would still work.
@mAAdhaTTah was suggesting that the need for curried functions is reduced by the existence of #-equipped pipelines. It isn't, as the need for a curried Int.add illustrates. To put it another way, writing your functions as x => y => ... is driven by a broader set of goals than can be solved by the pipeline operator proposal.
All of these styles, not just the autocurrying style, exist and are idiomatic JavaScript.
This is not relevant to what we're discussing. The fact that functions exist where you'd have to explicitly do x |> foo(#, 10) is undoubtedly true, and for these you'd just use topic style, exactly as shown. But it seems like a non sequitur to disallow the bare style for complex expressions like x |> bar(10) => x |> bar(10)(#) simply because of the existence of other, more popular patterns, at no benefit to users of either style.
Overall, appending the (#) everywhere may not seem like a big problem, but a user of the functional programming idiom in JS tends to suffer from this kind of death by a thousand cuts of small, individually insignificant annoyances (async/await tied to promises, syntax noise in function application, now this (#) at the end of everything), simply because no one cares about that aspect of the experience. It's not inevitable that functional programming has to suck in JS; it's just a question of priorities.
Thanks for your patience
Likewise, @js-choi. Trying to respond to and accommodate all these different opinions must be like herding cats. 😄
Just to throw it out there, the flip side of this is that we're somewhat struggling with the perception of the pipeline operator as an "FP feature", rather than a multi-paradigm one. I don't know if / how much it impacts this discussion, but it's something to bear in mind.
I've been sad because this language feature that I've seen in other languages and really want to have seems so obviously aimed at pure functions, but it seems like there's a movement to try to make it work with object methods, which doesn't make sense to me.
I don't go around trying to make class-related proposals seem more function-like; why should the function-related features have to bend themselves towards classes :-x
One other thing is that if the operator is idiomatically spaced (like x |> f(#) rather than x::f()), it encourages people to think of the function as an entity fundamentally separate from the value it's operating on. This will inevitably box people into using it like an FP feature, whether you mean for them to see it that way or not. To draw a concrete example with the three main variants (F#-style, smart pipelines, and method pipelines):
// F#-style
// This is technically parsed as `x |> (f.g())`, not `(x |> f).g()`
x
|> f
.g()
// Smart pipelines
// This could be parsed as either `x |> (f(#).g())` or `(x |> f(#)).g()`,
// as both are semantically equivalent.
x
|> f(#)
.g()
// Method pipelines
// This is technically parsed as `(x::f()).g()`, not `x::(f().g)()`
x
::f()
.g()
it seems like there's a movement to try to make it work with object methods
The problem isn't trying to make the pipeline operator work with object methods; the problem is the bind operator (working on methods) and the pipeline operator (working on functions) are both fundamentally about pipelining / chaining, and there isn't an appetite for accepting both into the language. If we're going to solve "pipelining" as a use case, it has to be in a single operator.
Implicitly appending a # to the expression doesn't result in any decrease in syntactic locality, because you always have to read the entire expression and find the # to understand the meaning of the expression. It's impossible to reason about what the legal smart pipeline expression
v |> foo(blah, bar, partial(foo, x, # + z), blah)
means unless you read the whole thing and find the #.
Correct, but it's important that, in the current syntax, you know immediately that the expression is in topic form, so you at least know that you have to go looking for that # to interpret it. The alternative you're suggesting would mean that you have to first check whether the # exists or not, as that would dramatically change the intention of the entire expression. (It's the difference between the top-level expression preparing a unary function to receive the pipelined value, and the top-level expression using the pipelined value directly; these are very different scenarios!) And it means that, in general, if you forget to put in the # (if, for example, your fingers are still used to typing Elm pipelines and you're implicitly assuming it'll get passed in as the first argument), your entire expression is accidentally interpreted very differently at runtime.
@mAAdhaTTah was suggesting that the need for curried functions is reduced by the existence of #-equipped pipelines. It isn't, as the need for a curried Int.add illustrates.
"Reduced" is not "eliminated". F#-style encourages heavy usage of curried functions. Smart pipelines reduce this need substantially. There are still situations where you might want currying, of course.
(But that's just a question of how much weight you're willing to tolerate from lambdas, ultimately. Int.add(3) is longer than x=>x+3; with smart pipelines' PF feature, +>#+3 is even shorter. Where exactly you negotiate the explicit/terse trade-off can reasonably vary by person.)
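The trade-off can be seen side by side. Int.add here is an assumed curried helper; the proposed +> # + 3 form has no runnable JS equivalent today, so only the first two spellings are shown:

```javascript
// Assumed curried helper, as used elsewhere in the thread:
const Int = { add: x => y => x + y };

// Two ways of spelling "add 3":
const viaCurried = Int.add(3); // explicit, reusable, longer
const viaLambda = x => x + 3;  // shorter inline

viaCurried(4); // → 7
viaLambda(4);  // → 7
```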
@tabatkins I don't see a difference between "first check to see whether the # exists or not" and "looking for that # to interpret it". You always have to read the entire expression, beginning to end, and if you read the whole thing and there's no #, it's in its default position, i.e. the end.
"Reduced" is not "eliminated". F#-style encourages heavy usage of curried functions. Smart pipelines reduce this need substantially.
I either curry my functions, or I don't. So long as there are use cases that no tool other than partial application will solve, I need to keep currying my function definitions. As an example, I still need to keep map curried in order to be able to pass around the result of partially applying it. The pipeline operator doesn't solve this problem, or the many others for which partial application is intended, so it doesn't reduce the need to curry my functions at all.
I don't see a difference between "first check to see whether the # exists or not" and "looking for that # to interpret it". You always have to read the entire expression, beginning to end, and if you read the whole thing and there's no #, it's in its default position, i.e. the end.
The point is that reading code well, especially when there's the possibility of FP-ish shenanigans, requires you to know what the expected types of things are before you can start to interpret them. Otherwise you're just mentally tokenizing, not actually reading the code. There's a huge difference between the top-most expression constructing a function that'll get called for a value, and the top-most expression just evaluating to some value; your understanding of the expression as a whole changes pretty drastically in the two situations.
I either curry my functions, or I don't. So long as there's use cases that no other tool but partial application will solve, I need to keep currying my function definitions.
I think we're talking past each other. F# style encourages currying within the pipeline, so you don't have to write lambdas. Smart pipelines reduce this need.
And note that feature PF (the +> syntax) gives you partial application too.
There's a huge difference between the top-most expression constructing a function that'll get called for a value, and the top-most expression just evaluating to some value
When functions are values, which is the mental model for someone performing "FP-ish shenanigans", this is not such a huge difference. It's quite common to treat functions alternately as final results, intermediate values, and as transformers of other values (which again, may be functions), often all within the same expression.
Regarding knowing the actual types of things: the proposed change does not require knowing the types of things any more than the existing proposal does; it's a purely lexical transformation. Obviously understanding what values the expressions foo(#, bar) or baz(quux) => baz(quux)(#) produce requires knowing the types of things, but performing the desugaring from baz(quux) to baz(quux)(#) requires zero understanding of the semantics of baz and quux.
I think we're talking past each other. F# style encourages currying within the pipeline, so you don't have to write lambdas. Smart pipelines reduce this need.
We are indeed. I still have to write my functions as lambdas, regardless of what shape smart pipeline assumes.
Imagine the following function:
const f = x => y => ...
which I'd like to use like this:
v |> f(x)
The suggestion is that because I can do v |> f(x, #), I'm now free to substitute my API (for many, if not all f) with:
const f = (x, y) => ...
This is wrong. I still need const f = x => y => ..., regardless of what features the smart pipeline operator provides to me. Why? Because there are way more places where I use partial application of the form:
hof(f(x))
In such places, the pipeline operator and its facilities for convenient partial application are useless to me. I need to pass a partially applied function to another function (a frequent occurrence in functional code), and my options are either to wrap things in a lambda right there, as in hof(y => f(x, y)), or to use some explicit method/syntax for partial application (bind/?).
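The hof(f(x)) situation can be sketched as follows; with a curried f, partial application is free, while an n-ary g needs a wrapping lambda. All names here are illustrative:

```javascript
const f = x => y => x + y; // curried: f(10) is already a unary function
const g = (x, y) => x + y; // n-ary: partial application needs a lambda

// Some higher-order consumer that wants a unary function:
const hof = fn => [1, 2, 3].map(fn);

hof(f(10));         // → [11, 12, 13]  — partial application for free
hof(y => g(10, y)); // → [11, 12, 13]  — the lambda-wrapping alternative
```

The pipeline operator's topic token only helps at the pipeline itself, not at call sites like hof(...), which is the point being made above.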
When functions are values, which is the mental model for someone performing "FP-ish shenanigans", this is not such a huge difference.
As a functional programmer myself, I strongly disagree; I have to flip a mental switch to read something as manipulating a function versus just executing something. We might have to leave this as just us having different mental models of things.
However, given that most programmers use FP lightly (just passing around functions as callbacks), I think it's reasonable to state that for most programmers, there's a big mental difference when attempting to interpret an expression between something that results in a function that'll take a value and something that just uses that value directly.
We are indeed. I still have to write my functions as lambdas, regardless of what shape smart pipeline assumes.
Ah, sure, by "using currying" I was referring to actually partially applying a function, not just declaring your functions to allow it. So yeah, we're just using the terms differently. 👍
@masaeedu Do you have any examples in that codebase where you use arrow functions within the pipeline?
@js-choi Is expanding bare-style in Smart Pipelines to function like F# Pipelines currently do out of the question?
If I'm not misunderstanding, that second arrow function I linked to would need to be expressed as:
export const sequence = A => o =>
pairs(o)
|> Arr.map(([k, v]) => v |> A.map(embed(k))(#))(#)
|> (A.of(empty) |> Arr.foldl(A.lift2(append))(#))(#);
It's not the end of the world, but it's just weird syntactic noise trying to match up the #-es to the expressions they belong to (and that's ignoring the extra parens it introduces). It'll get worse in long functions like the first one I linked to.
Correct, but that's because you're intentionally doing point-free stuff, instead of just calling methods on objects. Point-free is supported by this syntax, but not catered to.
If you rewrite it to actually use arguments, it should be clearer, but I honestly can't tell what this is doing in the first place to be able to rewrite it. This is not the sort of code that benefits from being even terser; it deserves to be separated into sub-statements and commented, imo.
I would argue point-free isn't supported at all by Smart Pipelines, as your points need to be made explicit through the # syntax.
I would agree with this assessment: the only exception is with simple identifiers/properties, but the utility of that one special case is almost nil.
@isiahmeadows I'm not sure I'm interpreting you right - are you saying that "simple identifiers/properties" has almost nil use-case?
I would argue point-free isn't supported at all by Smart Pipeline, as your points need to be made explicit through the # syntax.
You can still use PF constructs to build functions, you just have to pass the argument to the function at some point. Claiming that it "isn't supported" implies that you couldn't write code using PF patterns at all, which is incorrect.
As expressed earlier, the weight of doing PF in topic syntax is just adding a (#) to the end. Contrast this with the weight of doing non-PF in F# syntax, which is wrapping your code in (x=>...): more characters, wrapping instead of suffixing, naming the argument, and requiring a closure which you have to hope can be optimized away. Now balance that against how much code is written PF versus non-PF.
Point-free/tacit by definition requires zero names to be present:
xs.map(this.method.bind(this, foo))
xs.map(x => this.method(foo, x))
That inner explicit named argument is a point, making it no longer point-free.
In the smart pipeline proposal, # is an explicit argument name. You have no choice in the name of the argument, and you don't have to declare it, but it is still a name you have to reference explicitly. This is why I say there is nothing "tacit" about the smart pipeline syntax apart from the special cases of x |> f and x |> o.m being interpreted as f(x) and o.m(x) respectively.
You can still use PF constructs to build functions, you just have to pass the argument to the function at some point.
Yes, you have to make the "point" explicit. If you have to make it explicit, it's no longer "point-free". Whether that's valuable or not is a separate debate, but requiring explicit points in your pipeline makes point-free impossible.
Edit: Isiah beat me to it :p.
If we're going to compare with F#, I'll just point out that using arrow functions is a feature. Developers know how to use arrow functions already; learning the ins and outs of the "lexical topic" is not a trivial amount of overhead vs. building on a syntax we already know.
Also, the babel plugin currently optimizes away arrow functions. It would be trivial for an engine to do the same. I don't find that argument compelling.
That said, I don't think this is the right thread to continue this argument.
@tabatkins It wouldn't become any clearer with arguments; the barrier to understanding it is domain knowledge, compounded by my not having commented anything. These are typeclass instances for various types that I'm cranking out in the process of working on a different project (in most cases by copying a definition almost verbatim from Haskell).
Saying that I should break this into more statements and invent names for the subexpressions is well and good in principle, but the terse definition makes it easy to see the transformation being made once you're familiar with the general pattern. You'll see this in other programming languages as well, more ceremony seems like it should aid understanding but in fact reduces it.
When I read the following, it reads as:
export const sequence = A => o =>
// for all the key-applicative value pairs in this hash
// (lots of implicit domain knowledge used here)
pairs(o)
// transform each applicative v by embedding its items i into { [k]: i }
|> Arr.map(([k, v]) => v |> A.map(embed(k)))
// append the embedded objects across the original applicatives
|> (A.of(empty) |> Arr.foldl(A.lift2(append)));
Now you may not care about any of this, and it may seem ugly and overcomplicated to you. That's fine, we all have opinions, and fortunately there's many ways to skin a cat in JS. Nevertheless it's very useful code to me, and it affords me convenience in using FP patterns I'm familiar with in JS. You're basically saying, "well just don't do that then, then it won't suck for you. Do this thing I like doing instead".
The future shape of JS should leave some room for interpretation, rather than just forcing one approach down everyone's throats and treating everyone else as edge cases. What I'm asking for doesn't hurt the method-oriented usage you prefer in any way whatsoever; it simply disables a check that treats code with (IMO) intuitive semantics as a syntax error. Why not allow flexibility so the sort of code seen above is fun to write as well?
This stuff is mostly irrelevant to the discussion at hand, but I'm actually pretty sure you can't do any of this using a method-oriented API anyway. That `A` is an applicative typeclass instance being passed in as a first-class value, which only works if the function definitions are divorced from the data instead of attached to it. The benefit is being able to treat data in a single scope as an instance of different typeclasses without having to perform any value-level wrapping or conversion (e.g. `M.append(1)(2)` can mean `1 * 2` or `1 + 2` depending on what monoid `M` is).
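To make that last point concrete, here is a minimal sketch of first-class monoid instances in plain JS; the names `Sum`, `Product`, and `foldMonoid` are my own illustrations, not code from the thread:

```javascript
// Monoid instances as plain first-class values: the instance carries the
// identity element and the (curried) combining operation, so the same
// numbers can be combined under different monoids with no wrapping.
const Sum = { empty: 0, append: a => b => a + b };
const Product = { empty: 1, append: a => b => a * b };

// A generic fold that takes the instance as an ordinary argument.
const foldMonoid = M => xs => xs.reduce((acc, x) => M.append(acc)(x), M.empty);

foldMonoid(Sum)([1, 2, 3, 4]);     // 10
foldMonoid(Product)([1, 2, 3, 4]); // 24
```

This is exactly what methods can't express: you can't attach two different `append` methods to the same primitive number.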
I realize it's easy to interpret the code above as pointless obscurantism (and therefore not representative of real world code) since it isn't clear what it's used for. Here's a couple of things I'm using `Obj.sequence` for:
{ foo: [1, 2], bar: [3, 4] } |> Obj.sequence(Arr)
// [{ foo: 1, bar: 3 }, { foo: 1, bar: 4 }, { foo: 2, bar: 3 }, { foo: 2, bar: 4 }]
{ foo: Promise.resolve("x"), bar: Promise.resolve("y") } |> Obj.sequence(Prm.Seq)
// Promise that resolves to { foo: "x", bar: "y" }
I can't see a way to do an equivalent of this with object methods, and if I could, I'd still have to constantly be wrapping and unwrapping raw data to equip it with methods.
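For readers unfamiliar with the pattern, here is a hypothetical sketch of what `Obj.sequence` does when specialized to the Array applicative (the generic version in the thread works over any applicative instance; `sequenceArr` is my own name):

```javascript
// Sequence an object of arrays into an array of objects: every
// combination of one value per key, matching the first example above.
const sequenceArr = o =>
  Object.entries(o).reduce(
    (acc, [k, vs]) =>
      // For each partial result so far, branch once per value of this key.
      acc.flatMap(partial => vs.map(v => ({ ...partial, [k]: v }))),
    [{}]
  );

sequenceArr({ foo: [1, 2], bar: [3, 4] });
// [{ foo: 1, bar: 3 }, { foo: 1, bar: 4 }, { foo: 2, bar: 3 }, { foo: 2, bar: 4 }]
```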
All this is to emphasize that I'm not writing code with curried functions and partial application etc. because of some personal fetish for FP; the code you're seeing is the result of applying well understood, timeworn solutions to the practical problems I'm facing. We'll probably see more code like this in the future, so it's well worth considering how code that uses this style (as opposed to methods) is affected by the design.
I'm very familiar with Traversable functors and the sequence operation, I just couldn't tell what that actual code was doing. ^_^
You're basically saying, "well just don't do that then, then it won't suck for you. Do this thing I like doing instead".
I understand it can sound like that, but I'm really not. My point is, more explicitly: this kind of code is very rare, when measured across the universe of JS code written in the wild. And while FP things are becoming more common, there's no indication that it will ever become anywhere near the majority programming style, at least to the extent demonstrated here. I think I'm being charitable here in saying that 99% of authors would never write a line remotely like that. (That is, I think saying that 1% of authors would write code like this is being very charitable; it's probably much less.)
Thus, optimizing a syntax for this usage, while making common 99% cases more difficult, is a very bad trade-off. We can do more good for more users of JS by optimizing the syntax toward the code patterns they're more familiar with; in particular, this means calling methods on objects, and using operators. Both of these things are very easy in topic syntax (`#.foo()`, `obj.foo(#)`, `# + 1`, etc.); they're more difficult in F# syntax. (You either have to wrap your code in an arrow function, or switch to using curried functional variants of things.)
Topic-syntax also has a closer relationship to the original code that pipelines are meant to untangle. You can always just pull out some nested expression, put it on the LHS of the pipeline, and leave a `#` in its place; this can be recursed to make the expression as flat as you want. F# style requires you to either wrap the RHS in an arrow function to achieve a similar transform, or else do a more complex rework of the RHS expression to make it point-free.
There's a few other reasons I have for preferring this syntax over F#, but this is a decent summary, at least. ^_^
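Since neither syntax runs in engines today, the two transforms above can be compared by desugaring to plain arrow functions; `inc` and `double` are hypothetical helpers for illustration:

```javascript
const inc = x => x + 1;
const double = x => x * 2;

// The nested call a pipeline is meant to untangle:
const nested = double(inc(5)); // 12

// F#-style `5 |> inc |> double` desugars to plain function application:
const fsharpStyle = double(inc(5)); // 12

// Topic-style `5 |> # + 1 |> # * 2` desugars to arrow functions that
// receive the topic as a parameter:
const topicStyle = (t => t * 2)((t => t + 1)(5)); // 12
```

The topic form lets each step stay an arbitrary expression, while the F# form requires each step to already be a unary function.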
I'm very familiar with Traversable functors and the sequence operation, I just couldn't tell what that actual code was doing. ^_^
Great! So then you'll recognize that there's nothing for it but to use functions instead of methods here: you can't access the `pure` operation as a method because you have nothing to access it as a method on.
Thus, optimizing a syntax for this usage, while making common 99% cases more difficult, is a very bad trade-off.
How does it make 99% of cases more difficult? As I said earlier, the `a |> x.b()` syntax isn't being reserved as sugar for something a method-oriented user of JS would prefer; it's simply being reserved as invalid syntax. If you don't do that, nothing changes for a user of what (you claim to be) the 99% approach to using this. To put this a different way, the code that would be valid under the proposed change is a strict superset of what is valid under the current proposal.
@tabatkins
Topic-syntax also has a closer relationship to the original code that pipelines are meant to untangle. You can always just pull out some nested expression, put it on the LHS of the pipeline, and leave a # in its place; this can be recursed to make the expression as flat as you want. F# style requires you to either wrap the RHS in an arrow function to achieve a similar transform, or else do a more complex rework of the RHS expression to make it point-free.
I'd rate that as a feature, not a bug. If it takes a substantial amount of gymnastics to avoid 5 small tokens (`( fun arg -> [...] )`), you might want to rethink going point-free.
As for topic-style pipelines, here's what I feel: instead of giving users the chance to get in trouble from their own gymnastics, you're forcing engines to jump through numerous syntactic hoops just to infer the meaning of an expression. In addition, if you consider letting pipelines be lifted extensibly, it can't really adapt very well to the topic style. Edit: One other point is that you begin to lose the self-documenting aspect some pipelines have regarding their operands' types. Tacit isn't always better, but it definitely beats anything that quacks like de Bruijn indices for readability (and topic-style most certainly does, IMHO).
I've got thoughts on this comparison, but I'd like to suggest we focus this thread on the tradeoffs within Smart Pipelines between allowing vs. denying additional bare-style forms, instead of comparing Smart & F#. As it currently stands, the answer to the question originally posed is: Yes, you can use curried functions, but it requires `(#)` at the end, to make the argument's location explicit. This is included to avoid footguns.
I suggest we reframe this conversation around whether we expand the footguns as a result of enabling bare-style forms that ease the use of curried function & point-free style, and if that trade-off is worth it.
If we want to open up a conversation / debate about the trade-offs between the two proposals, we should probably open a dedicated thread for that, although I'd like to hold off on that debate until #104 is resolved, as the answer to that question will impact that debate.
I've yet to see an F# programmer who had shot his own foot off. 😜
Requiring people to think more functionally is a good thing. As for the `#` operator, it's cool, I like it. It should be its own proposal and work everywhere, not just in pipelines.
Simple use cases for partial application are easy to implement with the F# proposal
https://gist.github.com/babakness/ba3c2ac03b030cecceca8a64877a79f4
Then you can do
import { _ , bind } from 'cool-stuff'
const getPointOnLine = (slope,offset,x) => slope * x + offset
const pointOnLine = document.querySelector('#whatever').value
|> Number
|> getPointOnLine::bind( .5, 10, _ )
Since we can already do this, how is adding stuff to Number making it easier to read?
const pointOnLine = document.querySelector('#whatever').value
|> Number::bind( _ )
|> getPointOnLine::bind( .5, 10, _ )
JS developers already understand what this does
document.addEventListener( '.foo' , myAwesomeHandler )
This add no benefit
document.addEventListener( '.foo' , myAwesomeHandler::bind( _ ) )
@babakness please remember, JavaScript is far from being anything like F#. Much of F#'s "safety" (translate: defense against footgunning — and F# is a great language) comes from its very strict typing system. Also, if JS adopts F#'s flavor of pipelines with implicit function calls, we'll still be lacking many features that make it convenient to use, like auto-currying, or passing additional parameters using nothing but a space, without having to create wrapper functions.
Also we want to avoid the constructors for Number, Array, Object, etc. (there's well established reasons for avoiding these, and footguns that they cause).
The two statements you made with addEventListener look confusing, is this what you meant to do?
document.addEventListener('.foo' , myAwesomeHandler)
document.addEventListener('.foo' , _::myAwesomeHandler) //Ensures I don't lose my 'this' context
Hey! No that's not what I meant! This is the source code to my gist
export class Placeholder {}
export const _ = new Placeholder()
export const isPlaceholder = placeholder => placeholder instanceof Placeholder
// Partially applies `this` (supplied via the bind operator ::),
// substituting each placeholder with the corresponding later argument.
export function bind( ...placeholders ) {
  return ( ...fillers ) => this.call( this, ...placeholders.map(
    item => isPlaceholder( item ) ? fillers.shift() : item
  ) )
}
As you can see
document.addEventListener( '.foo' , myAwesomeHandler )
document.addEventListener( '.foo' , myAwesomeHandler::bind( _ ) )
These two lines do exactly the same thing.
These also do the same thing
const pointOnLine1 = document.querySelector('#whatever').value
|> Number
|> getPointOnLine::bind( .5, 10, _ )
const pointOnLine2 = document.querySelector('#whatever').value
|> Number::bind( _ )
|> getPointOnLine::bind( .5, 10, _ )
My point is that `_` is useful in some cases and not in others. This code works today, see here (note: not sure on the best name, `papp` or `bind`; my Babel REPL uses the name `papp`, the gist uses `bind`).
Just open your browsers console to see the result.
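For reference, here is a hedged variant of the gist above that avoids the (non-standard) bind operator `::` by taking the function as an explicit first argument; `papp` is one of the names floated in this comment, and `getPointOnLine` is reused from the earlier example:

```javascript
// Placeholder-based partial application as plain functions.
class Placeholder {}
const _ = new Placeholder();
const isPlaceholder = x => x instanceof Placeholder;

// papp(fn, ...args) returns a function whose later arguments fill
// the placeholder slots, left to right.
const papp = (fn, ...placeholders) => (...fillers) =>
  fn(...placeholders.map(item => (isPlaceholder(item) ? fillers.shift() : item)));

const getPointOnLine = (slope, offset, x) => slope * x + offset;
const f = papp(getPointOnLine, 0.5, 10, _);
f(4); // 12  (0.5 * 4 + 10)
```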
My point is also that `#` is nice; it should be its own proposal and not something just for pipelines. It can work with point-free just fine. I use point-free in JS to make the code read more declaratively.
Can't it be argued that adding stuff like `await #` is the real footgun when you don't have a type system? And why create side effects in a pipeline?
@babakness not sure how whether you create side effects or not is relevant to whether it's within a pipeline, or why `await #` or `yield #` etc. would be a footgun specifically because of pipelines, whether or not there's a type system.
I am curious what you mean, but don't wish to derail from the topic.
I really can't disagree strongly enough about `#` being its own proposal. It's easy to show how much simpler, more ergonomic, and more convenient it is for the developer in many, many use cases. In the context of a pipeline, how do I add 2 numbers together?
Here's how F# would do it.
let add a b = a + b
let x = 5 |> add 2
How would I do it in JavaScript with F# style pipelines?
const add = (a) => (b) => a + b
const x = 5 |> add(2)
How about with the topic?
const x = 5 |> # + 2
Even the bind operator gives the F# style pipeline a run for its money w.r.t. ergonomics, convenience, and complexity (IMO):
function add(b) { return this + b }
const x = 5 :: add(2)
EDIT: made some edits.
@aikeru Yeah, `#` is great, we should have it. But we shouldn't do away with the point-free style in pipelines.
import { _ , bind } from 'cool-stuff'
const pipe = (...funcs) => input => funcs.reduce( ( acc, fn ) => fn( acc ) , input )
const pipeline = (input, ...funcs) => pipe(...funcs)(input)
pipeline(
document.querySelector('#whatever').value,
/* |> */ Number,
/* |> */ getPointOnLine::bind( .5, 10, _ )
)
Both `|>` and `#` go a long way to simplify code when combined with point-free style.
To go with your example
const x = 5 |> # + 1
That's great, but sometimes point free is better
const x = 5 |> inc
@aikeru
Also we want to avoid the constructors for Number, Array, Object, etc. (there's well established reasons for avoiding these, and footguns that they cause).
I'll grant you `Array`, `Function`, and to a lesser extent, `Object`, but the rest are not nearly as problematic:

- `Number(foo)` is equivalent to `+foo` in every way. Every time you compare two values, this logic is called for both operands.
- `String(foo)` is equivalent to `foo + ''` in every way except (and this is intentional) with symbols, where it coerces them to a string instead of throwing. Every time you use a built-in method expecting a string, this logic is called.
- `Boolean(foo)` is equivalent to `!!foo` in every way. Every time you use a value in `if`, this logic is called for that condition.
- `Object(foo)` coerces `foo` to an object if it's not already one. Every time you use a built-in method that expects an object, this logic is applied.
  - For `null`/`undefined`, this returns `{}` (to avoid the usual `TypeError`).
  - For other primitives, it returns a wrapper object, e.g. `new Boolean(true)` for `Object(true)`.
- `Function(...args, foo)` is equivalent to `new Function(...args, foo)`.
  - `...args` is joined implicitly via `", "` before being parsed into parameters, as if it were `Function(args, body)` where `args` was specified as an array of individual parameter strings.
- `Array` is the only wildcard here:
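The equivalences in the list above can be checked directly; this is a quick illustrative snippet of my own, not code from the thread:

```javascript
// Coercing calls (no `new`) behave like the corresponding operators.
console.assert(Number('42') === +'42');   // numeric coercion
console.assert(String(42) === 42 + '');   // string coercion
console.assert(Boolean('') === !!'');     // truthiness coercion

// Object() never throws: null/undefined become a fresh plain object,
// and other primitives become wrapper objects.
console.assert(typeof Object(null) === 'object');
console.assert(Object(true) instanceof Boolean);
```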
@isiahmeadows Don't you consider these issues?
new Boolean(true) === true // false
new Number(5) === 5 // false
new String("foo") === "foo" // false
I remember reading additional reasons, but maybe these are enough?
@aikeru That's the main reason to avoid wrapper objects, since if you do `typeof new Boolean(true)` (for example), you'll get `"object"`, not `"boolean"` like you would get from `typeof true`. I was specifically talking about the coercing call behavior (`Boolean(true) === true`), not the constructor behavior (`new Boolean(true) !== true`).
I understand the difference, I just would assume that their close association (ie: the only difference being whether you use 'new') was enough to put them off. I think I understand your reasoning now, thanks.
@aikeru Yeah, consider this
function getTrue(x) { return true }
getTrue() === true // true
(new getTrue() === true) // false
or
function identity(x) { return x }
identity(true) === true // true
(new identity(true) === true) // false
I have the following code:
Both `map` and `fold` are curried functions of multiple arguments (2 and 3 respectively). The TC39 proposal has a slide that says of the "smart pipeline" proposal:
Does this mean the code above would be illegal? How would I express this instead?
Here is a more extreme example of the same concept: pipelining through multiple partially applied 2-ary functions.