Closed zenparsing closed 3 years ago
From my perspective, this proposal loses simplicity because of how much new syntax a developer has to learn in order to understand it.
I can't see this being true. The |>
operator gives you a single, temporary variable. How is that not simple to learn and use?
The only thing Hack-style |>
doesn't accommodate ergonomically is curried functions. That's it. It's a fair and valid criticism, but I'm opposed to arguments that paint this as an inordinate deterrent, using "lack of simplicity" as a primary yet indistinct proposition.
If you think writing arrow functions is not unergonomic, that's fair too. I'm not sure what the opinion landscape is on that, but I would guess most think it's too many symbols to type for a common use case.
@zenparsing
Oh, good example. I suppose $
has to be treated as a normal (but unique) variable.
@js-choi:
At least, to a Clojure programmer’s mind, % is inalienably connected to #(…). It’s not an indicator character so much as it’s just a regular variable that happens to have an implicit binding within a certain lexical context.
While this may be true for a Clojure programmer, I don't think it'll be true for JS programmers. A JS programmer will have to wrap their head around the |>
symbol, how it functions to pass values between functions, and then understand how the $
functions along with that, so it's really going to be a two-step process for JS people.
I can't see this being true. The |> operator gives you a single, temporary variable. How is that not simple to learn and use?
But it's not really a "variable". It looks kinda like a variable, may even be recognizable as variable-like, but [1, 2] |> $.filter(x => x > 1)
is not treating the $
as a variable. It's a placeholder for stuff to come later, which, to my eye, is new syntax.
By contrast, porting an arrow function into this context is just an arrow function, nothing changes. So you really only have to learn one thing, how to chain functions together with |>
, and use something you already know, arrow functions, to do so.
If you think writing arrow functions is not unergonomic, that's fair too.
Yes, this is part of it, but I do want to emphasize the learnability aspect. I do think adding placeholders instead of building on current knowledge and syntax will make this harder to learn, and that adding |>>
would compound the problem.
It's a fair and valid criticism, but I'm opposed to arguments that paint this as an inordinate deterrent, using "lack of simplicity" as a primary yet indistinct proposition.
This is fair. As I typed out the response, I got away from the original "argument from simplicity" and instead am arguing "simplicity as learnability."
But it's not really a "variable".
But it is a variable. It has the same use case and semantics as a variable, especially after @zenparsing showed why one might want to close over it. Per your example:
var result = [1, 2] |> $.filter(x => x > 1)
// basically translates to...
const $ = [1,2]
var result = $.filter(x => x > 1)
Because it's just a variable, you can do anything a variable can do (except reassign). There's nothing new to learn except that the variable exists in the first place.
Because it's just a variable, you can do anything a variable can do (except reassign).
... or declare it, or use it outside of the pipeline. It's not declared explicitly anywhere; it just kind of "shows up", and I do think the closing-over makes that more confusing. The close-over example (although I'm not sure you want this to be valid) looks like the type of thing that would be really mind-bendy for people I know:
var result = 10
|> function () { return $ |> $+1 }
|> $()
If it's a variable, it's a variable that is getting implicitly reassigned between each pipeline step. The current scoping rules don't really apply; it's not really block-scoped, unless we consider a pipeline a "block". All of this is why I suggested it's "variable-like," since it's not really just a variable, but a variable with special rules around when it's valid and when it isn't.
If we want it to act like a variable, I'd almost prefer something akin to @js-choi's Clojure-like syntax:
anArray |> in ($) {
pickEveryN($, 2),
$.filter(x => x > 2),
shuffle($),
$.map(x => x * x),
Promise.all($),
await $,
$.forEach(console.log)
}
In this case, the "variable" is explicitly introduced, we get to decide what it's called, it's clearly scoped to a block, and then you just have to understand that the |>
introduces a block that will pipe between the list of functions. It also looks similar to the pattern-matching syntax, which may be beneficial for that proposal as well.
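For comparison, this is roughly the plain-JS equivalent of that block today, with `$` as an ordinary variable (`pickEveryN` and a deterministic `shuffle` are hypothetical stand-ins, and the async steps are dropped so the sketch stays runnable):

```javascript
// Hypothetical helpers standing in for the ones used above;
// the async steps (Promise.all, await) are omitted to keep this runnable.
const pickEveryN = (xs, n) => xs.filter((_, i) => i % n === 0);
const shuffle = (xs) => xs.slice().reverse(); // deterministic stand-in for a real shuffle

let $ = [1, 2, 3, 4, 5, 6];
$ = pickEveryN($, 2);     // [1, 3, 5]
$ = $.filter(x => x > 2); // [3, 5]
$ = shuffle($);           // [5, 3]
$ = $.map(x => x * x);    // [25, 9]
$.forEach(x => console.log(x));
```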
The close-over example looks like the type of thing that would be really mind-bendy for people I know
I agree, and that example made me respond unfavorably to the idea at first. But @zenparsing provided an example that changed my position to "ok, it's valid, but please don't write bad code with it".
All of this is why I suggested it's "variable-like," since it's not really just a variable, but a variable with special rules around when it's valid and when it isn't
Let's be honest, var
, let
, and const
all have special rules around when they're valid and when they aren't. The pipeline variable follows this pattern, but is smaller in scope. Its rule is that |>
introduces a variable that is scoped to its right-hand side. That rule is the entire motivation for using the pipeline operator at all, so I don't imagine it would be a difficult thing to pick up.
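That scoping rule can be sketched in today's syntax, using an ordinary block-scoped binding named `$` to stand in for the pipeline variable:

```javascript
// value |> $.filter(x => x > 1) behaves like a const scoped to the RHS:
let result;
{
  const $ = [1, 2];              // the |> binds its LHS…
  result = $.filter(x => x > 1); // …for use only in its RHS
}
console.log(result); // [2]
```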
I'm not buying it. How is Hack-style hard to predict behavior? What corner cases are there to demystify?
Another issue to consider. How would Hack-style |> interact with inner function scopes?
This will keep happening.
You may, but Javascript is a big tent filled with lots of developers who may not find things like shuffle(9)(x) particularly intuitive or obvious.
A big tent doesn't necessarily mean augmenting features so everyone is equally unhappy, it could also mean there's OO features and FP features and neither are awkward to use for the benefit of another camp.
I'm not against making allowances for other coding styles at all, but when it removes the core functionality of a |>
(composing functions), my support ends.
I think it's odd to take a feature that composes functions and then remove core functionality to appease a use case it's not optimized for. E.g. in OO we have method chaining. |>
is explicitly designed to make composition flat and vertical. There are already solutions for those problem domains.
I'm personally in favour of |>>
. But if we can somehow retain the core functionality I'm not opposed to that. It'd be nice if it wasn't 3 characters though, maybe !>
as !
means execute in a few languages and hopefully wouldn't have any ambiguities.
I think |>>
is a bridge too far; it's really starting to look like Haskell at that point, which, you may be surprised to find, I do not want.
This will keep happening.
This statement is FUD. The behavior I pointed out is the same behavior you'd get with variable scoping. (Edited to sound less personal, which I didn't intend to be.)
I think it's odd to take a feature that composes functions and then remove core functionality to appease a use case it's not optimized for.
What use case are you talking about? Have you seen my comparison chart comment? This is not just about await
.
So for one manifestation of this nested-scope problem ([for nested explicit-call pipelining…] with explicit bindings to arbitrary identifiers), Clojure’s solution says that inner scopes simply lexically override outer scopes, and
10 |> function () { return $ |> $+1 } |> $()
would be function () { return 10 |> $+1 } |> $(), i.e. 10 |> $+1, i.e., 11. For another manifestation ([nested explicit-call pipelining…] with implicit bindings to a special symbol %), Clojure punts and says that the possible semantic ambiguity to humans isn’t worth allowing nesting the scopes, and 10 |> function () { return $ |> $+1 } |> $() would be a syntax error.
I think that the first solution is obviously better than the second. As @zenparsing points out, nested functions in pipelines are useful; indeed, Clojure programmers use them all the time. But, as long as we are talking about an implicitly bound variable, I’m still wondering whether ? or another non-identifier would be better than an already-valid identifier like $. Clojure does not have an answer for that, other than that it chose a technically nonvalid but compiler-valid identifier % for an implicit-binding form, forbade nested implicit binding, and has required explicit binding for other syntaxes.
TL;DR: If you used Clojure’s as-> (and identity monads in general) as precedent, 10 |> function () { return $ |> $+1 } |> $() would be function () { return 10 |> $+1 } |> $(), i.e. 10 |> $+1, i.e., 11.
I realize now that the second paragraph in this quotation from a comment I made to @gilbert the other day (https://github.com/tc39/proposal-pipeline-operator/issues/84#issuecomment-359939771) came to an incomplete conclusion. There are not just two options: a third option for JavaScript’s pipelining, intermediate between Clojure’s two choices. This third option might be better than either of the two options.
The big problem isn’t nested inner functions: it’s nested explicit-call pipelining…even without an inner function. If we assume that explicit-call pipelining is |>
, its placeholder variable is ^^
(as @littledan first suggests in https://github.com/tc39/proposal-pipeline-operator/issues/75#issuecomment-360513202), and its associativity is left to right (as it must be, as it defines a variable binding to its LHS) then an example of the big problem would be 10 |> (^^ |> ^^ + 1) |> ^^
, which “forces” right associativity, causing the first appearance of ^^
to not yet be evaluatable, since the 10
expression has not yet executed. (However, it is still possible to perform syntactic substitution of the inner explicit-call pipeline’s RHS’s ^^ with its LHS expression, but that would add convoluted magic of a special edge case, instead of relying on JavaScript’s usual expression-evaluation rules… Edit: This is incorrect. You can simply evaluate as usual from left to right, if you consider the LHS of each pipeline to be like a binding in a do expression or the argument of an IIFE, and the RHS to be like the rest of the do expression or the IIFE’s body. See https://github.com/tc39/proposal-pipeline-operator/issues/84#issuecomment-360897808.)
Using |>
and ^^
, here are therefore the three options:
First option: Allow an explicit-call pipeline to contain an RHS inner nested explicit-call pipeline and/or a nested RHS inner function. (I don’t think this is a good idea.)
10 |> (^^ + 1 |> ^^ + 1) |> ^^
would evaluate to (10 + 1 |> ^^ + 1) |> ^^, i.e. 12.
10 |> function () { return ^^ + 1 + 1 } |> ^^()
would be equivalent to (function () { return 10 + 1 + 1 })(), i.e., also 12.
10 |> function () { return ^^ + 1 |> ^^ + 1 } |> ^^()
would evaluate like the first case’s nested pipeline, being equivalent to (function () { return 10 + 1 |> ^^ + 1 })(), i.e., also 12.
Second option: Allow an explicit-call pipeline to contain an RHS inner nested explicit-call pipeline but not a nested RHS inner function. (This is also not a good option.)
10 |> (^^ + 1 |> ^^ + 1) |> ^^
would evaluate to (10 + 1 |> ^^ + 1) |> ^^, just like for the first option, i.e., also 12.
10 |> function () { return ^^ + 1 + 1 } |> ^^()
would throw a reference error (assuming ^^
were not also defined outside this expression), because ^^
is not defined within inner functions.
10 |> function () { return ^^ + 1 |> ^^ + 1 } |> ^^()
would also throw a similar error, for similar reasons.
Third option: Allow an explicit-call pipeline to contain an RHS nested inner function but not a nested RHS inner explicit-call pipeline. (This might be an okay option.)
10 |> (^^ + 1 |> ^^ + 1) |> ^^
would throw a syntax error during parsing because of a nested explicit-call pipeline.
10 |> function () { return ^^ + 1 + 1 } |> ^^()
would evaluate to 12
.
10 |> function () { return ^^ + 1 |> ^^ + 1 } |> ^^()
would throw a syntax error during parsing, like for the first case.
The first option is better than the second, but the third option might be better than both of them: the third option is least magical without preventing an essential use case (creating callbacks). @gilbert and @mAAdhaTTah are rightly concerned about programmers unintentionally clobbering an implicitly declared variable like ^^
when nesting contexts that define/override ^^
. It could be an easy footgun. Similarly to how Clojure forbids nested implicit-parameter anonymous functions, the third option would forbid nested implicit-parameter explicit-call pipelining. But it still allows nested regular functions (using function
or =>
), which require explicit parameters. Callbacks are extremely useful, as @zenparsing points out, and the third option still allows them, while statically forbidding the lexical clobbering of ^^
that @gilbert, @mAAdhaTTah, and many others would find confusing/risky.
Keep in mind that all this is irrelevant to programmers who would want to use only implicit-call pipelining, which would be a separate operator from |>
above. If the implicit-call pipeline operator is a separate operator from the explicit-call pipeline operator, then they don’t have to care at all about placeholder variables.
Also keep in mind that it is arbitrary here whether it is explicit-call or implicit-call pipelining that gets |>
or |>>
, and whether the placeholder argument is ^^
or %%
or ?
or whatever. Those are mostly bikeshedding questions independent of these more fundamental semantic issues.
@js-choi That restriction seems pretty reasonable to me. Although I suppose a nested pipe could appear in something like a callback function:
value
|> somethingWithCallback(^^, data => {
data |> ^^.filter(...);
});
But this seems like a bit of a stretch.
Well, option 1 isn't that bad. It may be unintuitive to people at first, but it becomes much easier if you think of them as nested lexical scopes that each create variables of the same placeholder name, shadowing the placeholder variable of the scope outside. (I was incorrect when I said that variable replacement would be necessary; I have edited my previous comment.)
One easy way to think of nested lexical contexts is to rephrase them as nested IIFEs. Thus if we pretend that ^^
is a valid parameter-variable name, then:
x |> f(^^)
…would be equivalent to:
( ^^ => f(^^) )(x)
And:
(10 |> ^^ + 1) |> ^^ + 2
…would be equivalent to:
( ^^ => ^^ + 2 )( ( ^^ => ^^ + 1 )( 10 ) ) // => 13
And:
10 |> (^^ + 1 |> ^^ + 2)
…would be equivalent to:
( ^^ => ( ^^ => ^^ + 2 )( ^^ + 1 ) )( 10 ) // => also 13
And:
10 |> (^^ + 1 |> ^^ + 2) |> ^^
…would be equivalent to:
( ^^ => ^^ )( ( ^^ => ( ^^ => ^^ + 2 )( ^^ + 1 ) )( 10 ) ) // => also 13
And:
10 |> function () { return ^^ + 1 + 2 } |> ^^()
…would be equivalent to:
( ^^ => ^^() )( ( ^^ => function () { return ^^ + 1 + 2 } )( 10 ) ) // => also 13
And:
10 |> function () { return ^^ + 1 |> ^^ + 2 } |> ^^()
…would be equivalent to:
( ^^ => ^^() )( ( ^^ => function () { return ( ^^ => ^^ + 2 )( ^^ + 1 ) } )( 10 ) ) // => also 13
And:
value |> somethingWithCallback(^^, data => data |> ^^.filter(...))
…would be equivalent to:
( ^^ => somethingWithCallback(^^, data => ( ^^ => ^^.filter(...) )( data )) )( value )
In other words, it’s just nesting a bunch of implicit IIFEs, one for each step, each with one implicit parameter variable of the same name: ^^
. The IIFE’s argument is the LHS of the |>
, and its body is the RHS of the |>
. The ^^
parameter shadows any outer ^^
within the body of the IIFE (the |>
’s RHS). As you might already know, this is what is sometimes known in other languages as the “binding operation” of the “identity monad”.
However, even though this might be powerful and actually conceptually simple, these nested IIFEs with ^^
parameter variables might not be immediately intuitive to people not used to them…
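For what it's worth, the nesting really does work with today's lexical scoping; here is the `10 |> (^^ + 1 |> ^^ + 2) |> ^^` example as runnable code, with `hh` standing in for `^^` (which isn't a valid identifier):

```javascript
// Each pipe step is one IIFE; each inner parameter shadows the outer one.
const result = ( hh => hh )(
  ( hh => ( hh => hh + 2 )( hh + 1 ) )( 10 )
);
console.log(result); // 13
```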
What if we adopted a more strict restriction: placeholders can only be used in function/method calls as the receiver or argument. Would that resolve the issue?
@littledan: That would forbid expressions like x |> await ^^
, x |> ^^ * 3
, and x |> [0, ^^]
, right? Would not that nullify much of the value of having a parameterized pipeline operator? Having a parameterized pipeline operator would address await
and, presumably, any other possible future expression.
Shadowing the outer lexical context in a nested parameterized pipeline’s RHS (e.g., x |> [^^, ^^ |> await ^^ |> ^^ * 3]
) would simply be bad practice. This would be similar to how shadowing the outer lexical context in a nested function’s body (e.g., function (a) { return [a, function (a) { return (await a) * 3 }] }
) is bad practice. You shouldn’t do it because it’s unnecessarily confusing to read—but it’s still possible to do, as a logical corollary of the simple rules of lexical scoping. This is the same; if this clobbering is a footgun, it’s as small a footgun as lexical clobbering in general is.
In other words, shouldn’t ^^
just be treated as the same as any other lexically bound variable? The cost is as much as that of allowing nested functions—not enough to complicate its simplicity or compromise its versatility.
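To illustrate the point that this kind of clobbering is already possible with plain functions (a minimal, hypothetical example):

```javascript
// The inner parameter 'a' shadows the outer one, exactly as a nested
// pipeline's placeholder would shadow the outer pipeline's placeholder.
const f = a => a => a + 1; // inner 'a' wins inside the inner body

console.log(f(10)(20)); // 21 — the outer argument 10 is unreachable there
```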
The second-to-last comment I made above shows a bunch of rough equivalences between nested Hack/binding-style pipelines and nested IIFEs. This isn’t totally correct, though. I had forgotten that await
only affects the innermost function, and because of that await
messes up the IIFE equivalency.
It’s more accurate if you use do
expressions instead: Thus if we pretend that ^^
and _^^
are valid variable names, then the following are equivalent:
x |> f(^^)
( ^^ => f(^^) )(x)
do { const ^^ = x; f(^^) }
(10 |> ^^ + 1) |> ^^ + 2
( ^^ => ^^ + 2 )( ( ^^ => ^^ + 1 )( 10 ) ) // => 13
do {
const ^^ = do {
const ^^ = 10;
do {
const _^^ = ^^;
do { const ^^ = _^^; ^^ + 1 }
};
};
^^ + 2
} // => also 13
10 |> (^^ + 1 |> ^^ + 2)
( ^^ => ( ^^ => ^^ + 2 )( ^^ + 1 ) )( 10 ) // => also 13
do {
const ^^ = 10;
do {
const _^^ = ^^;
do { const ^^ = _^^ + 1; ^^ + 2 }
}
} // => also 13
10 |> (^^ + 1 |> ^^ + 2) |> ^^ + 0
( ^^ => ^^ )( ( ^^ => ( ^^ => ^^ + 2 )( ^^ + 1 ) )( 10 ) ) // => also 13
do {
const ^^ = do {
const ^^ = 10;
do {
const _^^ = ^^;
do { const ^^ = _^^ + 1; ^^ + 2 }
}
};
^^ + 0
} // => also 13
10 |> function () { return ^^ + 1 + 2 } |> ^^()
( ^^ => ^^() )( ( ^^ => function () { return ^^ + 1 + 2 } )( 10 ) ) // => also 13
do {
const ^^ = do {
const ^^ = 10;
function () { return ^^ + 1 + 2 }
};
^^()
} // => also 13
10 |> function () { return ^^ + 1 |> ^^ + 2 } |> ^^()
( ^^ => ^^() )( ( ^^ => function () { return ( ^^ => ^^ + 2 )( ^^ + 1 ) } )( 10 ) ) // => also 13
do {
const ^^ = do {
const ^^ = 10;
function () {
return do {
const _^^ = ^^;
do {
const ^^ = _^^ + 1; ^^ + 2
}
}
}
};
^^()
} // => also 13
value |> somethingWithCallback(^^, data => data |> ^^.filter(...))
( ^^ => somethingWithCallback(^^, data => ( ^^ => ^^.filter(...) )( data )) )( value )
do {
const ^^ = value;
somethingWithCallback(^^, data => do {
const ^^ = data; ^^.filter(...)
})
}
value |> g(^^, await (f(x => ^^) |> ^^ + 1 |> h(^^)))
// Nested IIFEs would not work here.
do {
const ^^ = value;
g(^^, await do {
const _^^ = ^^;
do {
const ^^ = f(x => _^^);
do {
const _^^ = ^^;
do { const ^^ = _^^ + 1; h(^^) }
}
}
})
}
Here, a dummy variable _^^
is used only because a declaration that shadows an outer lexical context’s variable cannot use that outer variable of the same name in its initializer; the declaration immediately shadows the outer variable, and the new binding cannot be read before it is initialized. In contrast, IIFEs separate the argument values from the parameter-variable declarations; the argument values are in the outer lexical context and the parameter-variable declarations are in the inner context.
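The restriction being worked around is observable today with `const` (sketched with ordinary names, since `^^` and `_^^` aren't valid identifiers):

```javascript
const v = 10;
let out;
{
  // const v = v + 1; // ReferenceError: v is in its temporal dead zone here
  const _v = v;       // dummy binding copies the outer value first…
  {
    const v = _v + 1; // …so the shadowing declaration can use it safely
    out = v;
  }
}
console.log(out); // 11
```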
value
|> g(^^, await (
f(x => ^^)
|> ^^ + 1
|> h(^^)
))
Assuming that ^^
is lexically scoped to the RHS of the pipes (keeping in line with left-associativity), it desugars to this in F# style:
value
|> (_1 => g(_1, await (
f(x => _1)
|> (_2 => _2 + 1)
|> (_3 => h(_3))
)))
which is equivalent to this after inlining expressions:
( _1 => g(
  _1,
  await h(f(x => _1) + 1)
) )(value);
or
do {
const _1 = value;
g(
_1,
await h((f(x => _1) + 1))
);
}
The only places where inlining isn't possible is the cases where there are multiple references to ^^
within a given scope.
Then you'd have to desugar to an IIFE or a do expression.
FWIW, if https://github.com/tc39/proposal-pipeline-operator/issues/23#issuecomment-362386783 were to be accepted, that pipeline could be expressed as
value
|> x => g(x, await (
f(_ => x)
|> y => y + 1
|> h
))
F#-style which is more readable to me because the bindings are explicit. Those ^^
and assorted |:
or |>>
delve too deep for me into Sigilistan.
Edit: formatting and typos
@pygy: The only places where inlining isn't possible is the cases where there are multiple references to ^^ within a given scope. Then you'd have to desugar to an IIFE or a do expression.
Your analysis is correct. Assuming the absence of do
expressions, Hack-style pipes are more powerful than F#-style pipes, in that the latter cannot inline RHS arrow functions with multiple references to their LHS’s value, without creating new function objects—and without doing more complex movement of autogenerated lexical bindings to the innermost surrounding valid block, without disturbing the original pipe expression’s result in situ. Without creating new function objects or moving autogenerated lexical bindings outward, do
expressions would be required: e.g., x |> y => [y, y]
“inlined” into do { const _1 = x; [_1, _1] }
.
@pygy: F#-style which is more readable to me because the bindings are explicit.
The problem with requiring all placeholder bindings in non-call expressions to be explicit is precisely the same problem as requiring all F#-style pipes to have placeholders. It’s really verbose for every step. That would be why F#-style pipes might still be standardized even if Hack-style pipes were also standardized (as #89 formally proposes). The terseness goal applies to both.
@pygy: Those
^^
and assorted|:
or|>>
delve too deep for me into Sigilistan.
The Hack-style pipe need not be |:
, and the placeholder need not be ^^
. Those are bikeshedding problems; there may well be alternative tokens that are more readable and less Sigilistani. Although I don’t know whether said bikeshedding should go in this issue or a new issue.
For people reading in the future: @gilbert has created a summary wiki page of the current proposals. There is also a new issue (#89) specifically regarding mixing the Hack-style/parameterized/binding pipe and the F#-style/tacit/call pipe.
Is there any use case for having only hack-style pipelining and not using a placeholder?
@ljharb I don't think so - in such a case you'd essentially be starting a new pipe. It should probably be a syntax error if you don't use the binding. Is that what you were thinking, or something else?
Yep - i would hope it’s a syntax error.
Besides the this
identifier, is there any other precedent for implicitly bound values in JavaScript? I cannot recall any, which makes the Hack syntax quite exceptional for the language.
The placeholder variable would be at least somewhat less “weird” than this
, in that it would come from a simple compile-time lexical binding, like with let
and const
—rather than a special runtime call-site binding, like with this
. In that lexical sense, at least, there is precedent. But I don’t recall any other precedent for the concept of implicit binding. Edit: Of course, super
, arguments
, new.target
, and import.meta
exist… Those are also precedent. There is also no precedent for the tacit calling in the F#-style pipeline either, for what it’s worth…
I’m hoping that, by using an invalid identifier that is well understood not to be a normal variable, the Hack style would still be clear, unambiguous, and readable. I don’t think this is as true as for making the placeholder an already-valid identifier such as $
…but hopefully there would be an invalid identifier prettier than ^^
. Maybe ><
, since <>
would conflict with JSX and legacy E4X.
super
, arguments
, new.target
, import.meta
?
From what I understand, in the Hack-style proposal:
LHS |> RHS
is defined as ($ => eval(RHS))(LHS)
while in the F#-style proposal:
LHS |> RHS
is defined as RHS(LHS)
I think that's why people talk about "simplicity" of the F#-style proposal.
@bisouduperou: That is not completely true. Proposal 2: Hack Style Only would be equivalent to nested do
expressions. eval
is not involved. This would naturally enable function-scoped expressions such as await
, yield
, and yield *
, which would not work with nested arrow functions. For instance, if we pretend that a ##
nullary operator is the Hack-pipe placeholder (cf. #91 for bikeshedding) but that ##
is also bindable using const
(which it wouldn’t), and if we make |>
left associative, then these would be exactly equivalent:
LHS |> RHS
do { const ## = LHS; RHS }
x + 1 |> await ##
do { const ## = x + 1; await ## }
x |> await ## |> [0, ##, ## * 2]
do { const ## = do { const ## = x; await ## }; [0, ##, ## * 2] }
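The branchless nature of this transform can be modeled today for the non-await cases (an arrow function stands in for each `do`-scope, with `topic` standing in for `##`; this modeling deliberately sidesteps `await`, which is exactly what real `do` expressions would handle and nested functions would not):

```javascript
// One helper models one pipe step: bind the LHS, then evaluate the RHS.
const hackStep = (lhs, rhsFn) => rhsFn(lhs);

// x |> ## + 1 |> [0, ##, ## * 2], with x = 3:
const out = hackStep(
  hackStep(3, topic => topic + 1),
  topic => [0, topic, topic * 2]
);
console.log(out); // [ 0, 4, 8 ]
```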
This syntactic transformation is branchless; there are no syntactic special cases; its cyclomatic complexity is 1.
In contrast, Proposal 1: F-sharp Style Only would indeed often be equivalent to RHS(LHS)
…but it would require special casing of LHS |> await
(and maybe other types of expressions like LHS |> .RHS
, as has been proposed in https://github.com/tc39/proposal-pipeline-operator/issues/75#issuecomment-359227235 and https://github.com/tc39/proposal-pipeline-operator/issues/23#issuecomment-362923403). So these would be equivalent:
LHS |> RHS
LHS.RHSWithoutPeriod // if RHS starts with “.”
await LHS // else if RHS is “await”
RHS(LHS) // else
…which is not as conceptually simple and uniform as Proposal 2’s transform. It demands a syntactic cyclomatic complexity of n, where n is the number of special cases for the RHS. If only await
is accommodated, then this syntax has a cyclomatic complexity of 2, without accommodating methods, instanceof
, arithmetic operations, yield
, etc. If await
and methods are accommodated (as in the example above), then syntactic cyclomatic complexity is 3, while still not accommodating instanceof
, arithmetic operations, yield
, etc.
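The non-special-cased core of the F#-style transform is plain function application, which is expressible as a one-line helper today:

```javascript
// The F#-style pipe without any special cases is just RHS(LHS).
const pipe = (lhs, rhs) => rhs(lhs);

const n = pipe(pipe(5, x => x + 1), x => x * 2);
console.log(n); // 12
```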
Proposal 3: Split Mix would be equivalent to the above two syntaxes for their respective operators, although the F-sharp pipe operator probably would lose its special cases, since they would be made useless by the separate Hack pipe operator. Each of the two pipe operator’s cyclomatic complexity would be 1, at the cost of having two pipe operators.
Proposal 4: Smart Mix would be equivalent to:
LHS |> RHS
do { const ## = LHS; RHS } // if RHS contains “##”
RHS(LHS) // else
I personally think Proposal 4 is a good compromise, with a syntactic cyclomatic complexity of only two, while keeping RHS(LHS)
pithy, but also accommodating any possible expression, and adding one new operator, not two (not including a placeholder nullary operator). But I could be wrong in my opinion. I just want the syntax to have versatility without complex, ad-hoc special casing.
@js-choi FWIW, I'm not in love with the .method()
syntax for F# because of the additional complexity you mention. While we do special case await
(and potentially other unary-like operators, so not instanceof
, but maybe import
or typeof
), thinking about those like functions reduces how difficult it is to learn / use them.
(As an aside, I'm not sure I understand the usage of "cyclomatic" complexity here, which I colloquially understood as being about # of potential code paths in a piece of code.)
@mAAdhaTTah: Sorry; by “cyclomatic complexity” of the syntax I’m speaking from the parser/compiler’s perspective (as well as the programmer’s perspective while they reason about what their code means). Each special case of the syntactic transformation is yet another parsing-time branch that makes the syntax more expressive but also makes it more complex to parse and to reason about. The syntax should balance between simplicity of parsing (for both compiler and programmer) and expressiveness/versatility, as well as it can.
FWIW I ran into another case where this would be useful: usage with the keyword new.
var school_uids = await Group.findAll(...)
|> $$.map(g => g.school_uid)
|> new Set($$)
Hi all, I'm not an expert on language design, but find it ridiculous that after 3 years there is still no agreement on the syntax. I admit I haven't read everything, but I did quite a bit and there doesn't seem to be any chance of a compromise. Can we just write down the use cases, sort them by priority and try to satisfy as many as possible? I tried it, and concluded that a placeholder is necessary for all but the simplest cases.
After giving it some more thought, I've arrived at something similar to Hack-style:
- ?
, initialized with the caller's ?
.
- |>
can be used to initialize ?
.
That's pretty much it. I'm not saying this is the best proposal ever, but anyone can learn it in a minute, and it can do pretty much anything.
//parameter piping
1; twice(?); console.log(?) // 2
//this piping
[3, 2, 1]; ?.sort(); console.log(?) // [1, 2, 3]
//multipiping not possible (easily)
1; 2; let x=?+? // 4 not 3
let mult = by => ? * by
4; mult(5) // ? = 20
I think it should be possible to make a new ?
at every { block } or even at every ( expression ) that's not a function's parameters.
// in case anyone finds this useful
3; (1; ? + 2 ) * ? //? = (1+2)*3
It might even replace the ::
, if the to-be-bound functions are rewritten to operate on ?
rather than this
. Borrowing the example from https://github.com/tc39/proposal-pipeline-operator/issues/107,
function doubleSay() { return ? + ", " + ?; }
function capitalize() { return ?[0].toUpperCase() + ?.substring(1); }
function exclaim() { return ? + '!'; }
"hello"; doubleSay(); capitalize(); let result=exclaim();
result // "Hello, hello!"
One less-than-awesome feature is that the injected ?
gets overwritten after first line, but you can make a backup copy if needed.
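For comparison, here is the same "hello" program with the injected `?` spelled as an explicit variable:

```javascript
function doubleSay(v) { return v + ", " + v; }
function capitalize(v) { return v[0].toUpperCase() + v.substring(1); }
function exclaim(v) { return v + "!"; }

let q = "hello";           // "hello";
q = doubleSay(q);          // doubleSay();
q = capitalize(q);         // capitalize();
const result = exclaim(q); // let result = exclaim();
console.log(result); // "Hello, hello!"
```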
Because JavaScript doesn't allow standalone RHS
It does.
I haven't read the full thread very closely, but to respond to "doesn't handle multi-argument functions": those can be handled quite easily by currying and uncurrying. For example:
const add = x => y => x + y
// A multi-argument function
const lift2 = f => xs => ys => xs.reduce((p, x) => [...p, ...ys.map(f(x))], [])
const curry = f => x => y => f([x, y])
const uncurry = f => ([x, y]) => f(x)(y)
const xs = [1, 2, 3]
const ys = [4, 5, 6]
// Still works fine with the pipeline operator
const result = [xs, ys] |> uncurry(lift2(add))
console.log(result)
// => [ 5, 6, 7, 6, 7, 8, 7, 8, 9 ]
Note that uncurrying into arrays like this isn't the only valid form of uncurrying; a very frequently used pattern in JS is to collapse a function that takes 20 different arguments into a function that takes a single argument with 20 different properties.
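A sketch of that options-object pattern (the function names here are hypothetical):

```javascript
// Instead of several positional arguments…
const connectPositional = (host, port, secure) =>
  `${secure ? "https" : "http"}://${host}:${port}`;

// …collapse them into one object argument, which composes with a unary pipe
// and lets callers omit or reorder options freely:
const connect = ({ host, port, secure = false }) =>
  `${secure ? "https" : "http"}://${host}:${port}`;

const url = connect({ host: "example.com", port: 8080 });
console.log(url); // "http://example.com:8080"
```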
I'd also like to propose a new $=
operator for people who want a metavariable like syntax to represent the previous result in a pipeline. I've taken the liberty of going through all the browsers and implementing it, so you can use it in all browsers today! (you're welcome)
Let's translate @js-choi's example from above:
let timer =
planets
|> pickRandom($)
|> "Hello " + $
|> setTimeout(() => console.log($), 1000);
, which can be translated to:
let timer = (
$= planets,
$= pickRandom($),
$= "Hello " + $,
setTimeout(() => console.log($), 1000)
)
The nice thing about the $= operator is that after much careful design and analysis on my part, I was able to get it to work well with all the other JS language features!
// Works nicely with new, await etc.!
let googlecom = async () => (
$= await fetch("https://www.google.com"),
$= await $.text(),
new DOMParser().parseFromString($, "text/html")
)
googlecom().then(console.log, console.error)
Now that this new operator is available in major browsers, we can have a compromise where the |> operator can be used whenever we want to avoid referring to $ everywhere, and the $= operator can be used when we prefer the pointed style.
@masaeedu you can already do something similar to $= with the sequence operator:
const greetRandom = async ($) => (
$= await pickRandom($),
$= "Hello " + $,
setTimeout(() => console.log($), 1000)
)
greetRandom(['mercury', 'venus', 'earth', 'mars'])
It's more of a pipe than a pipeLine, but I find it handy. h/t @gilbert, although I'm having trouble finding the repo where he outlined this syntax.
I included async ... await just to illustrate that it works.
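The same comma-sequence trick also works synchronously; here is a runnable sketch (slugify and its steps are made up for illustration):

```javascript
// Comma-sequence "pipe": each assignment to the parameter $ feeds the
// next expression, and the last expression is the value of the whole
// parenthesized sequence.
const slugify = ($) => (
  $ = $.trim(),
  $ = $.toLowerCase(),
  $.replace(/\s+/g, "-")
);

console.log(slugify("  Hello Pipeline World  ")); // "hello-pipeline-world"
```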
@kurtmilam Ah, yes, that's quite nice (although to clarify, you can already do something identical to the $= operator using the $= operator; the $= operator works in browsers today). We should open two more competing proposals called the $= operator and the "smart" $= operator.
Maybe all we need is a new sequence operator. AFAIK, the main complaint about the current one is that it's confusing to developers, since the comma means different things in different contexts.
@masaeedu Duplicate of this comment :) That issue also links to the proposal you're thinking of, @kurtmilam
@gilbert That's the one - thanks for linking it!
As a fan of the F# pipeline proposal, I'd really like to know whether there are any good arguments against @masaeedu's suggestion that TC39 go with an F# style pipeline and recommend the existing sequence operator for all of the additional operations a Hack style pipeline would enable.
In other words, I don't see $= as just a bit of silly fun. Rather, it seems to me that everything(?) a Hack style pipeline would enable is already possible using the existing sequence operator.
$ is an identifier, and thus imo is not available for use in an operator.
@ljharb Coffee? (-;
Reading through this thread again, I summarized some pros and cons of the Hack style pipeline operator.
Pros:
Cons:
Not a con:
Note: the Minimal style is intuitive and elegant in its own right.
In terms of whether to support await, there are a few issues with using then():
Using the following Hack style chain as an example:
let ageMap = fetch()
|> await $
|> filterBy($, x => x.age >= 18 && x.age <= 65)
|> sortBy($, x => x.rating, 'desc')
|> pickN($, 100)
|> mapBy($, x => x.age)
// ...
// further operations that looks up ageMap
// ...
Using Minimal style with then() would become:
fetch()
|> then(fetchedData => {
let ageMap = fetchedData
|> filterBy(x => x.age >= 18 && x.age <= 65)
|> sortBy(x => x.rating, 'desc')
|> pickN(100)
|> mapBy(x => x.age)
// ...
// further operations that looks up ageMap
// ...
})
The example involving then() is harder to read, and was harder to reason about when I was writing it. If we end up going with the Minimal style, I would probably break the await operation into its own statement to preserve the simplicity.
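For concreteness, the data-first helpers assumed in the chains above might be defined like this (hypothetical implementations; plain nested calls stand in for the pipe, which isn't shipped):

```javascript
// Hypothetical data-first helpers matching filterBy/sortBy/pickN/mapBy above.
const filterBy = (xs, pred) => xs.filter(pred);
const sortBy = (xs, key, dir) =>
  [...xs].sort((a, b) => (dir === "desc" ? key(b) - key(a) : key(a) - key(b)));
const pickN = (xs, n) => xs.slice(0, n);
const mapBy = (xs, f) => xs.map(f);

const people = [
  { age: 30, rating: 2 },
  { age: 17, rating: 5 },
  { age: 40, rating: 9 },
];

// The Hack-style chain, desugared into nested calls:
const ageMap = mapBy(
  pickN(
    sortBy(
      filterBy(people, x => x.age >= 18 && x.age <= 65),
      x => x.rating,
      "desc"
    ),
    100
  ),
  x => x.age
);
console.log(ageMap); // [40, 30]
```

The nesting itself illustrates the point: every step reads inside-out, which is exactly the plumbing either pipe style is meant to flatten.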
Or in this case simply
(await fetch())
|> filterBy(x => x.age >= 18 && x.age <= 65)
|> sortBy(x => x.rating, 'desc')
|> pickN(100)
|> mapBy(x => x.age)
// ...
// further operations that looks up ageMap
// ...
That's true, but if the await happens in the middle of the chain, it gets more complex.
There are several suggestions for awaiting in the middle of a pipe chain, e.g.:
fetch
|> map()
await |> filter()
That's true when you only have the bare minimum F# proposal. Not when you also add await and yield semantics. Yes, it adds an extra concept, but Hack does that too: the placeholder.
I agree that Hack covers more cases, but the biggest cost of it, to say it bluntly, is in my view the extra ugly syntax. And ugly syntax is exactly the thing that we want to reduce with the introduction of the pipeline operator. When you don't care about ugly syntax you can do all those things covered by Hack already. See https://github.com/tc39/proposal-pipeline-operator/issues/84#issuecomment-419699457.
I consider |> a more powerful . operator. It can be viewed as what other languages call extension methods. The . means something like 'operate on ...', and |> more or less should mean that as well, I think. The difference is that with ., the defined methods are fixed per type, while with |> the 'methods' (if you still want to call them that) can be extended. And in a clean way: they're just functions.
E.g. it finally becomes possible to 'extend' the array type with a reverse operation so it can be used like this:
const r = [1,2,3] |> map(add(1)) |> reverse();
Or flatten, so we can do this:
const r = [[1,2], [3,4]] |> flatten() |> map(add(1))
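Since |> isn't available yet, those 'extension method' helpers can be sketched as plain curried, data-last functions (names assumed) and applied inside-out by hand:

```javascript
// Curried, data-last helpers that a pipeline would chain; until |> lands,
// we apply them inside-out manually.
const add = n => x => x + n;
const map = f => xs => xs.map(f);
const reverse = () => xs => [...xs].reverse();
const flatten = () => xs => xs.flat();

// [1,2,3] |> map(add(1)) |> reverse()
const r1 = reverse()(map(add(1))([1, 2, 3]));
console.log(r1); // [4, 3, 2]

// [[1,2], [3,4]] |> flatten() |> map(add(1))
const r2 = map(add(1))(flatten()([[1, 2], [3, 4]]));
console.log(r2); // [2, 3, 4, 5]
```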
The extra power you gain with Hack is that you don't need extra syntax for await and yield when used in the middle of the chain. For the rest of the probably less-used cases where a placeholder might be useful, in F# you can easily solve these with 3 extra characters: $=>. E.g. where in Hack you'd write $[3] or $.thing, in F# you'd write $=>$[3] or $=>$.thing. (And when you do this a lot you can even write a separate operator for that.) For most use cases Hack will only add extra syntax, since you're forced to use the placeholder.
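Under F#-style semantics, value |> fn is just fn(value), so the $=> trick desugars to an ordinary arrow function; a tiny runnable sketch:

```javascript
// What `$=>$[3]` and `$=>$.thing` denote: ordinary one-parameter arrows.
const fourth = $ => $[3];
const getThing = $ => $.thing;

// x |> ($=>$[3]) under F# semantics is just ($=>$[3])(x):
console.log(fourth(["a", "b", "c", "d"])); // "d"
console.log(getThing({ thing: "ok" }));    // "ok"
```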
Yes, I am aware the point is to provide extension methods in a functional way. I presume most people who are interested in this topic are aware of that :)
I can live with F# style having essentially two operators (|> and =>). I would say the Hack style sacrifices slight terseness in certain cases to achieve more universal consistency. But the F# style has the advantage of not requiring a reserved symbol to represent the placeholder.
I think whether you like Hack style syntax is subjective. The https://github.com/tc39/proposal-pipeline-operator/issues/84#issuecomment-419698505 mentioned above does the trick, but is much harder to mentally parse than the Hack style pipeline. There is a reason why $= is not used in the last statement. Programming with this construct requires the programmer to constantly think of the underlying plumbing, so it's not really equivalent to the Hack style.
I think await is very important to JS. Losing the ability to handle await in the chain is significant. I think it's something we can potentially compromise on, but it's not something we can simply overlook and pretend it's not important.
The design of Hack style naturally covers both await and yield, but adding them to F# style feels more like a bolted-on feature to me. It can potentially introduce more problems down the road.
I think losing await would be good for JS on the whole.
I would say the Hack style sacrifices slight terseness in certain cases to achieve more universal consistency.
Yup, tho note that it gains slight terseness in most cases compared to F#-style. I have several examples in my explainer gist for the options, and the comparison slide from March's committee presentation shows it off very compactly.
Precisely one case - calling a unary function with the topic value as its sole argument - is terser in F#-style. If that's an extremely common use-case in your programming (such as if you're very heavily using Ramda), then the few extra characters required by Hack-style (to actually invoke the function) can be a slight imposition, but if you write anything else in your pipelines, Hack-style is terser.
(And that's assuming we land on a satisfactory parsing/precedence solution such that F#-style pipes can contain arrow functions without requiring parens around them. That's not guaranteed; there are some drawbacks to making that possible. If we don't hit that, then F#-style gets significantly worse, with a minimum of five extra characters needed in each pipeline step, added both at the beginning and end of the step, versus Hack-style.)
Doesn't F# style assume first argument injection? So it does make the following HOF usage ergonomic:
let result = population
|> filterBy(x => x.age >= 18 && x.age <= 65)
@highmountaintea It does not, no. I think that would be far more surprising than any of the other solutions currently proposed.
Btw, personally I am undecided on which proposal to advance. My desire is to develop the Hack discussion further until we either come up with more solid arguments and consensus for it, or abandon it in favor of other proposals. I feel continuing to argue back and forth in the #167 thread was simply creating an impasse (the reason why there has not been progress for the past few years). Instead, advancing the various proposals further independently would help resolve the limbo we are currently in.
@highmountaintea It does not, no. I think that would be far more surprising than any of the other solutions currently proposed.
Interesting. Sorry for my misunderstanding. I was more focused on the Minimal proposal and the Hack proposal because those were the two proposals people bring up in the #167 thread.
The current proposal (in which the RHS is implicitly called with the result of the LHS) does not easily support the following features:
In order to better support these features, the current proposal introduces special-case syntax and requires the profligate use of single-argument arrow functions within the pipe.
This proposal modifies the semantics of the pipeline operator so that the RHS is not implicitly called. Instead, a constant lexical binding is created for the LHS and then supplied to the RHS. This is similar to the semantics of Hack's pipe operator.
Runtime Semantics
1. Let left be the result of evaluating PipelineExpression.
2. Let leftValue be ? GetValue(left).
3. Let oldEnv be the running execution context's LexicalEnvironment.
4. Let pipeEnv be NewDeclarativeEnvironment(oldEnv).
5. Let pipeEnvRec be pipeEnv's EnvironmentRecord.
6. Perform pipeEnvRec.CreateImmutableBinding("$", true).
7. Perform pipeEnvRec.InitializeBinding("$", leftValue).
8. Set the running execution context's LexicalEnvironment to pipeEnv.
9. Let right be the result of evaluating LogicalORExpression.
10. Set the running execution context's LexicalEnvironment to oldEnv.
11. Return ? GetValue(right).
Example
Advantages
Disadvantages
Notes
The choice of "$" for the lexical binding name is somewhat arbitrary: it could be any identifier. It should probably be one character and should ideally stand out from other variable names. For these reasons, "$" seems ideal. However, this might result in a conflict for users that want to combine both jQuery and the pipeline operator. Personally, I think it would be a good idea to discourage usage of "$" and "_" as variable names with global meanings. We have modules; we don't need jQuery to be "$" anymore!