tc39 / proposal-pipeline-operator

A proposal for adding a useful pipe operator to JavaScript.
http://tc39.github.io/proposal-pipeline-operator/
BSD 3-Clause "New" or "Revised" License

Effect of Hack proposal on code readability #225

Closed arendjr closed 2 years ago

arendjr commented 2 years ago

Over the past weekend, I did some soul searching on where I stand between F# and Hack, which led me into a deep dive that anyone interested can read here: https://arendjr.nl/2021/09/js-pipe-proposal-battle-of-perspectives.html

During this journey, something unexpected happened: my pragmatism had led me to believe Hack was the more practical solution (even if it didn't quite feel right), but as I dug into the proposal's examples, I started to dislike the Hack proposal to the point where I think we're better off not having any pipe operator than having Hack's.

And I think I managed to articulate why this happened: as the proposal's motivation states, an important benefit of using pipes is that it allows you to omit naming of intermediate results. The F# proposal allows you to do this without sacrificing readability, because piping into named functions is still self-documenting as to what you're doing (this assumes you don't pipe into complex lambdas, which I don't like regardless of which proposal). The Hack proposal's "advantage" however is that it allows arbitrary expressions on the right-hand side of the operator, which has the potential to sacrifice any readability advantages that were to be had. Indeed, I find most of the examples given for this proposal to be less readable than the status quo, not more. Objectively, the proposal adds complexity to the language, but it seems the advantages are subjective and questionable at best.
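
To make this concrete, here's a sketch with made-up names (validateOrder, submitOrder, and reportErrors are hypothetical, and both pipelines use the proposed syntax, so neither is runnable today):

// F#-style: each step is a named unary function, so the pipeline stays self-documenting
order |> validateOrder |> submitOrder

// Hack-style: arbitrary expressions are allowed on the right-hand side — exactly
// the flexibility I'm arguing can erase the readability win
order |> validateOrder(^) |> (^.valid ? submitOrder(^) : reportErrors(^.errors))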

I'm still sympathetic towards F# because its scope is limited, but Hack's defining "advantage" is more of a liability than an asset to me. And if I have to choose between a language without any pipe operator or one with Hack's, I'd rather have none.

So the main issue I would like to raise is: is there any objective evidence on the Hack proposal's impact on readability?

ken-okabe commented 2 years ago

My comment has moved to

https://github.com/tc39/proposal-pipeline-operator/issues/223#issuecomment-923765230

arendjr commented 2 years ago

I’m sorry, I’m feeling somewhat feverish so I’ll postpone replying to other comments, but please @stken2050, you have your own issue about algebraic concerns. Can we keep the discussion here focused on readability implications?

ken-okabe commented 2 years ago

Ok, I've moved my post, sorry about that, and take care.

Lokua commented 2 years ago

james
|> goosify(^)
|> unfoo(^)

As a reader, I cannot understand what the second instance of ^ is without reading the previous line. It means every line I read with ^ is a double take. As a reader, I have to constantly compute the value of ^, and it doesn't help that ^ is not the same thing but looks the same, so my brain is in constant conflict while reading.

That said, the F# style also requires I read the previous line:

james 
|> goosify
|> unfoo

But I don't have to constantly compute the placeholder. I do have to compute the meaning of one pipe to the next, but now I can do it immediately without a middleman getting in the way. It's like the placeholder is screaming "comprehend me! decode me!" - basically asking me to do the job of a computer.

Also, as a writer whose job is to communicate meaning, hack style removes my ability to provide descriptive names, which is basically the universal first step of writing readable code. I'm honestly shocked at this proposal. Imagine if you started at a new company and they enforced that all unary functions you write regardless of context had to name their single argument x (or god forbid, ^ :trollface:). That's how I feel and I'm not exaggerating - this proposal scares me because it's going to lead to code that is uglier, harder to read, and harder to refactor/abstract later on!

Edit: I too would rather see no pipe than this and am admittedly biased as I've always thought a pipe operator (or pretty much any more new syntax) is a bad idea. But if we are going to have it, I'd rather see it implemented in a way that improves readability, not (IMHO) hinders it.

kaizhu256 commented 2 years ago

again, from a frontend/fullstack programming perspective, the focus of interest is the final return value (to be message-passed).

readability is improved (at least in frontend-systems) if you use a temp-variable with a descriptive name for the final return-value:

// what's the intent of data being message-passed?
james 
|> goosify
|> unfoo
|> await postMessage

// vs

// temp-variable `userid` clearly describes the final-data being message-passed
let userid = james;
userid = goosify(userid);
userid = unfoo(userid);
await postMessage(userid);

ken-okabe commented 2 years ago

I think there is no difference in the usage of an operator between frontend and backend or other contexts, and temp-variables are an anti-pattern of software development. Generally, if code breaks when let is replaced with const, it's bad code. Destructive substitution should be avoided in general, and if some mechanism of a programming language fits a manner that should be avoided, that mechanism should itself be avoided.

That is why const was introduced in JS: it is an excellent mechanism to prohibit destructive substitution, which is an "error" in the code.
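
For example, the temp-variable snippet above can be written with const so every intermediate value is bound exactly once (a sketch reusing kaizhu256's placeholder names):

// each step gets its own const binding;
// reassigning any of them would throw a TypeError
const goosified = goosify(james);
const unfooed = unfoo(goosified);
await postMessage(unfooed);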

runarberg commented 2 years ago

It is quite interesting that, after participating in these threads as an advocate for F# for quite some time now, and reading what people have to say, I am starting to arrive at the opinion that we might not need a pipe operator at all.

One of the reasons why I wanted a pipe operator in the first place was to allow library authors to provide point free operators on their constructs. So basically because of RxJS and the like. The current pipe function (or method in the case of RxJS) serves this purpose but has limitations. I personally—after reading what people have to say, and realizing what the TC39 committee believes about this programming style in general—am fine continuing the status quo for the time being. If and when Operator overloading arrives, these libraries can then overload the bitwise OR operator | with a Function RHS to give us the pipes that I want (https://github.com/tc39/proposal-pipeline-operator/issues/206#issuecomment-922091263).

import Pipe from "my-pipe-lib";
import { map, filter, range, take, reduce } from "my-iter-lib";
with operators from Pipe;

const { value } = new Pipe(
  range(0, Number.POSITIVE_INFINITY),
)
  | filter(isPrime)
  | map((n) => n * 2)
  | take(5)
  | reduce((sum, n) => sum + n, 0);

If and when operator overloading arrives and library authors start providing their operators with this style I honestly think that hack pipelines would be in the way. When reading this above example you can clearly see what is going on. However if hack pipelines are also available as an option, and if you have seen a lot of code written with hack pipes, you might be a little confused by what is going on here. “These are pipes, but there is no topic marker. Why? How does this even work? What magic is this?” Without the hack pipelines in the language these questions are irrelevant. If you are confused, you just go look at the docs for the library where the operator overloading is explained. No trying to figure things out in relation to the similar but distinct hack pipes.

ljharb commented 2 years ago

If anything that’s an argument against operator overloading, not hack pipes.

voronoipotato commented 2 years ago

The argument against hack pipes is that we believe they are a hazard (sometimes for different reasons, but the conclusion is the same). Most of us in this thread, and I suspect in the wild, would rather have no pipes, than hack pipes.

ken-okabe commented 2 years ago

I also think hack-pipe is so harmful that if it's the default route as they claim, it's far better not to have any. I feel very sorry not to have F# or minimal style, but it's better than JS will be broken forever. Then, I also hope we have operator overloading, then totally no problem.

aadamsx commented 2 years ago

If anything that’s an argument against operator overloading, not hack pipes.

It's hard to believe these guys don't get it ... no acquiescence to our point of view, no "great point" or "that makes sense" or "maybe we need to rethink this", etc.

Just a steadfast wall of rebuttals and basically:

"Hacks right, it's coming like-it-or-not, we can do this all-day-long, deal with it :trollface:".

ljharb commented 2 years ago

@aadamsx it was indeed a great point - about how operator overloading can cause confusion by being similar to existing non-overloaded, or visually similar operators. It is not, however, a great point against hack pipes.

mAAdhaTTah commented 2 years ago

It's hard to believe these guys don't get it ... no acquiescence to our point of view, no "great point" or "that makes sense" or "maybe we need to rethink this", etc.

Since you're interested in seeing things from each other's perspectives, which arguments in favor of Hack do you think are good points or make sense?

runarberg commented 2 years ago

it was indeed a great point - about how operator overloading can cause confusion by being similar to existing non-overloaded, or visually similar operators. It is not, however, a great point against hack pipes. — @ljharb

Can you please elaborate? I admit that having both Hack pipelines and overloaded pipeline operators may be confusing (though I am not 100% sure of this), but why are you so sure that operator overloading is the problem and not Hack pipes? It seems to me that you've simply jumped to a conclusion and picked one.

Here is my reason to think that hack pipelines might not be needed if operator overloading allows simple opt-in pipelines:

The explainer for operator overloading provides a number of good use cases other than pipeline operator overloads, including CSS unit calculations and easy calculations on custom user types such as vectors or matrices. Given that both Hack pipelines and operator overloading might be redundant in giving us the ability to pipe values into functions, it is often wise to pick the one with the broader use cases. It can be debated whether F# pipes or Hack pipes has the broader use cases, but it is obvious that operator overloading beats Hack pipelines in breadth of use cases.

In many sciences, a measure of the quality of a theory is how much it can predict outside of its initial scope. If the only thing a theory can predict is how much a 20–22 year old supermarket customer will spend on an average weekday, it is a pretty lousy theory; if, however, the theory can not only predict the spending of all ages in all kinds of shops, but also how likely they are to commute to the supermarket by bus, how likely they are to bring their own bag, how the shopping changes if they meet their friends, etc., that's a much better theory. Hack pipelines are pretty much only good for linearizing nested function calls. Operator overloading can do that and much, much more.

Now tell me: given that operator overloading and Hack pipelines are redundant in providing a pipeline operator to the language, why should we lose operator overloading but not Hack pipelines?

ljharb commented 2 years ago

@runarberg your argument in https://github.com/tc39/proposal-pipeline-operator/issues/225#issuecomment-924479581 seems to me to be roughly "with operator overloading, and an overloaded operator that's similar to a default one but has different semantics, users might be confused". That could be true about any operator, not just pipeline - you could overload + to do something that's not semantic addition/combination, and that would be confusing. This confusion has in fact played out in my experience in languages that offer operator overloading.

If operator overloading had advanced as far, or farther, than this proposal, then it would be a reasonable claim to make that "maybe we don't need this proposal, since users can do it with operator overloading". However, they haven't, and it's not clear they will advance, and this proposal is stage 2, so it wouldn't be appropriate to subjugate this proposal to an earlier one.

If you believe operator overloading is a better overall proposal, and would obviate this and others, then I'd encourage you to participate on that proposal's repo and help drive it forward.

ken-okabe commented 2 years ago

@runarberg

Given that both Hack pipelines and operator overloading might be redundant in giving us the ability to pipe values into functions, it is often wise to pick the one with the broader use cases. It can be debated whether F# pipes or Hack pipes has the broader use cases, but it is obvious that operator overloading beats Hack pipelines in breadth of use cases.

A great insight that I can agree with.

@ljharb

However, they haven't, and it's not clear they will advance, and this proposal is stage 2, so it wouldn't be appropriate to subjugate this proposal to an earlier one.

@js-choi wrote a great article for us, "Brief history of the JavaScript pipe operator", in Prior difficulty persuading TC39 about F# pipes and PFA #221.

Also, @rbuckton https://github.com/tc39/proposal-pipeline-operator/issues/91#issuecomment-917645179

I have been a staunch advocate for F#-style pipes since the beginning, but it's been difficult to argue against the groundswell in support of Hack-style pipes due to some of the limitations of F#-style pipes. If we kept debating on F# vs. Hack, neither proposal would advance, so my position as Co-champion is "tentative agreement" with Hack-style.

"tentative agreement" as he emphasis, so I think the stage-2 is not with a full consensus and not fair to treat as if the established manner, and most importantly the reaction against the achievement of stage-2

Frustrations of people looking forward to tacit syntax #215

Since this is a significant spin-off topic to discuss, why don't you open a new issue? @runarberg

ljharb commented 2 years ago

@stken2050 yes, but my comment was speaking to operator overloading, altho the same statement can be applied to PFA.

Stage 2 is stage 2, there is no "partial consensus" in the TC39 process. Hack Pipelines for stage 2 have the only kind of consensus we have - 100% consensus - but that doesn't preclude individual delegates being hesitant or unexcited about it.

"Consensus" also doesn't mean "set in stone" - the process allows for changes, and certainly a delegate could decide to block stage 3, and that is more what I infer from the "tentative" part of the agreement: that we should not assume stage 3 consensus will be automatic.

ken-okabe commented 2 years ago

@ljharb

"Consensus" also doesn't mean "set in stone" - the process allows for changes, and certainly a delegate could decide to block stage 3, and that is more what I infer from the "tentative" part of the agreement: that we should not assume stage 3 consensus will be automatic.

That is a virtue, and I appreciate that we don't have to debate this point; and since we are in such a process, I'm afraid to say I don't see much reasoning behind:

However, they haven't, and it's not clear they will advance, and this proposal is stage 2, so it wouldn't be appropriate to subjugate this proposal to an earlier one.

Furthermore, it's possible to post a pointer to https://github.com/tc39/proposal-operator-overloading from here, and I'm sure the people will migrate; in fact, the reason the discussion here suddenly heated up is that we noticed the announcement that Hack-style had proceeded to stage 2.

https://github.com/tc39/proposal-operator-overloading/blob/master/README.md

looks like a nice place to overload a pipeline operator.

runarberg commented 2 years ago

I created a new issue to talk about the potential confusion of pipelines created with overloaded operators #228.

mAAdhaTTah commented 2 years ago

Responding to @shuckster's comments on readability from here:

I simply cannot get my head around the fact that you'd consider the latter easier to read than the former, other than taking your word for it. I'm sure you do find the latter easier to read - I'm not saying otherwise - except to say that perhaps the "readability" argument isn't quite as clear-cut as we both might think?

I'm actually not arguing that this is more readable to me, because frankly, both of those syntaxes are equally comfortable to me. I did point-free programming extensively for a period of time, and I understand why the point-free version feels elegant, like everything superfluous has been trimmed down to its cleanest form. But I don't have any problem reading or comprehending the second one either. It's explicit & clear, I can see exactly where the arguments are going, and I don't really have a problem with the placeholder.

My argument for the readability of Hack isn't just based on my perception of its readability (although obviously, that plays into it). It's based on my experience having written point-free style code in a team and the level of comprehension (or lack thereof) I got from coworkers about that code. It just didn't look anything like the code they read & write on a regular basis. I worked at an agency, so there were a number of projects going on at a time, and I was the only one who wrote code like that. A coworker once joked that my code came "preminified" cuz it was a bunch of tiny, one-line compositions 😂! Unless they were familiar and comfortable with that style, the amount of overhead going from React/Vue/Backbone/jQuery/etc. projects into heavy Ramda was significant, and I'm relieved the project I wrote that code for has long since shuffled off to the great bitbucket in the sky, because I have no faith another developer could maintain it.

Further, projecting outward, I don't think my experience is out of step with the wider ecosystem. I think functional and/or point-free JavaScript is a minority style, which means me being the only one writing that style in a team of 10 is... about right. I have not run across that much point-free code since I stopped writing it regularly myself, and when I have, it's generally regarded as the more gnarly part of the codebase. That means non-point-free code (or whatever you want to call it) is what people are going to be most familiar with. They're going to mostly have seen foo(x) and bar(x), and if they need to sequence them, they're more likely to do bar(foo(x)) than pipe(foo, bar)(x).

This means that for most developers, they're going to see x |> foo(^) and recognize the RHS as a function call. They've seen that before, they know what it means, so they'll already be familiar with part of what's going on here. Because familiarity is most closely correlated with readability, I'm thus really arguing that x |> foo(^) will be more readable to more people because it's more syntactically similar to a wider swath of the JavaScript code written, so more developers will have experienced code like it.
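
A compact side-by-side of the three forms (foo and bar are stand-in unary functions; the last line uses the proposed Hack syntax and isn't runnable today):

const foo = (s) => s.trim();
const bar = (s) => s.toUpperCase();

// what most JavaScript developers write today:
bar(foo(" hello "));                    // "HELLO"

// point-free sequencing with a userland pipe helper:
const pipe = (...fns) => (x) => fns.reduce((acc, fn) => fn(acc), x);
pipe(foo, bar)(" hello ");              // "HELLO"

// Hack-style pipeline (proposed syntax):
// " hello " |> foo(^) |> bar(^)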

I don't think there's a single idiomatic JavaScript, but there are communities within JavaScript with their own idioms. What is readable, what is familiar, in your JavaScript community is very likely to be different than mine, which makes arguing about readability very difficult. Even worse, if you've been using pipe(foo, bar)(x), you're being asked to take a readability hit compared to your current baseline, requiring now a point where you were previously point-free. I get that, and I want to acknowledge that it's frustrating. I hope this at least explains why I think Hack is more readable generally, even if it isn't for you.


For that matter, I'll also mention I know OP is opposed to any pipe, so this comment doesn't really pertain to the primary thrust of this thread, but I'm posting here in the interest of keeping similar arguments collected together.

lozandier commented 2 years ago

@mAAdhaTTah The pipeline operator is ubiquitously associated with writing code in a left-to-right functional composition manner across a variety of disciplines. That in and of itself is, technically, a minority way of programming. A JS function doesn't have to be written in a composable way at all, after all–let alone an entire codebase.

Since it takes more effort to do than not to do, using and composing functions will always be a minority aspect of JS code. To me, the idea behind the Hack-style pipeline operator is to do pipelining in a way that makes non-function abstractions less arduous to pipe than functions, as a result.

However, pipelining non-function abstractions isn't what most think of when it comes to pipelining across disciplines and programming contexts–and it was most certainly not the primary intent of those who clamored for it to exist in the language; this is why there has been substantial pushback against this version of the operator continuing as-is.

The majority of the people who will use the pipeline operator as early adopters–the people who championed it to become a standard, and who accordingly will be the majority of its users long after it exists in the language (hopefully without the Hack-style semantics)–will always be a (vocal) "minority" to you, since functional programming can always be framed as the minority way of writing code among all who decide to code in JS.

That said, that group of people happens to be a meaningful portion of the JS community, and it is the group that created the attention and the perceived need for this proposal to exist.

Overall, the commonality of functional programming use cases shouldn't be downplayed in the manner it seems you (and potentially other TC39 members?) are downplaying it, considering how popular functional libraries are and how well received functional-programming-related features added to the language have been. This is a concern a meaningful portion of the JS community has shared in recent years about how TC39 has operated regarding things related to functional programming; that sentiment has only gained considerable traction with how this pipeline operator proposal has been handled thus far.

Anyhow, one factor in code not being written in a pipeline-oriented way more often is that the pipeline operator is not natively available in the language, which is problematic for engineers in a variety of ways.

Even with that in mind, it's telling that in an overwhelming number of surveys polling the JS community on what new features TC39 should consider addressing, the pipeline operator is consistently one of the most desired features–as are the efforts TC39 member companies such as Microsoft and Netflix have made to create or use libraries such as Reactive Extensions (RxJS, RxPython, RxSwift, etc.) to make its semantics reliably available across platforms.

Overall, the pipeline operator has been in demand for quite some time, and certainly not in the manner the Hack-style currently, and arbitrarily, works against.

The problem with the Hack-style pipeline operator when it comes to readability by a meaningful amount of the JS community that write their code in a pipeline-oriented manner today

The Hack-style has undoubtedly disappointed many who feel that a tacit approach to functions is being arbitrarily removed in order to support linearizing non-function expressions, when those things should instead be represented as functions so as to retain the tacit syntax for the simplest functions to compose with: functions that take one parameter (or none).

While the effort and thought put into linearizing expressions and things not explicitly written as functions–which @tabatkins helpfully explained–is applauded, it's long been evident that the majority who wanted this operator to exist natively in the language don't think it should come at the expense of the simplest way of representing a functional composition that the pipeline operator–historically a functional-composition-oriented operator–should accept: unary functions expressing first-class composition. First-class composition is typically understood to be done without tokens like (^) that Hack-style currently requires. Read more about it here

The Hack-style had a choice of where to offload additional characters–onto the more generic linearizing of arbitrary expressions, or onto functions–and chose the latter, which is problematic for an operator tied to functional composition; so much so that it has been objected to by some of the most prominent functional programmers in the language, including maintainers of some of the most popular libraries associated with the language today (i.e. @benlesh).

It shifts so far away from this ubiquitous understanding of linearizing as functional-oriented that it's inconvenient to use with the code the majority of people write to compose things today. The way the Hack-style handles unary functions explicitly violates first-class composition semantics.

Some have attempted to dismiss this by blaming the functions for being written in a HOF manner, which is a tangent from pipelining. Functions that are augmented to return functions, or built with techniques to handle invocations without all the core parameters filled in (currying), are merely functions written to be more flexible to compose with, regardless of whether they're used with a pipeline operator or not.

That's something the pipeline operator itself doesn't need to concern itself with.

Having a partial-application token for composing with functions of multiple parameters (arity > 1) should instead be closely tied to and leveraged by the pipeline operator, and there doesn't seem to be contention with that standpoint besides which token to use for that purpose.

Regardless, the Hack-style adds cognitive noise by requiring (^) for even the simplest function composition calls (first-class composition); many want to use additional tokens only to communicate that the piped data won't be the function's first parameter.

It shouldn't matter much if a developer decides to do

x |> multiply(2, ?) |> add(10, ?)

or (F#/minimal-style)

x |> multiply(2) |> add(10)

or more succinctly (point-free)

x |> double |> add10

to represent

add(10, multiply(2, x))

Also, in contexts where a function is passed elsewhere for later execution in JS, such as a parameter of another function, you don't need any additional characters for the functions:

function composeWithTwoFunctions (aFunction, anotherFunction) {
  // You'd use the `arguments` object / `...` rest parameters + `reduceRight` to compose any number of functions passed in.
  return (x) => anotherFunction(aFunction(x));
}
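
As the comment in that snippet hints, the variadic version is a small step further; a minimal sketch with rest parameters and reduceRight:

// compose(f, g, h)(x) === f(g(h(x))) — right-to-left, matching nested calls
function compose(...fns) {
  return (x) => fns.reduceRight((acc, fn) => fn(acc), x);
}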

So why does the pipeline operator, with the current Hack-style syntax, arbitrarily want to break away from such conventions associated with composition?

Anyhow, with the pipeline operator being strongly tied to linearizing functions historically, the tacit means of expressing function composition of unary functions present in the alternate forms of the pipeline operator that have been proposed should ideally remain intact for readability–a preference I think can be inferred, beyond reasonable doubt, to be shared by the majority of programmers across multiple languages, regardless of whether they primarily program functionally.

Expressions such as ? + 2 can easily be understood as (x) => x + 2, enabling functions not to need (^) while non-function expressions are still supported.

The same should be done for expressions using await (turned into async functions) and so on.
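
A sketch of the inference I'm suggesting (foo and bar are hypothetical, and this shows the behavior being proposed here, not the current Hack semantics):

const foo = (n) => n * 3;              // placeholder unary step
const bar = async (n) => n + 1;        // placeholder async step

// value |> foo |> ? + 2       — proposed to behave like:
const step1 = ((x) => x + 2)(foo(5));               // 17

// value |> foo |> await bar   — proposed to behave like:
const step2 = (async (x) => await bar(x))(foo(5));  // Promise resolving to 16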

The proposal as-is arbitrarily goes out of its way to be contrarian to how pipelining is seen–as a left-to-right function composition operator–by the majority of the JS community whose support led to the original proposal advancing to the extent it did, as well as contrary to how problem solvers understand pipelining in general.

My anecdotal remarks since you shared yours

Thanks again for sharing your anecdotal experience with point-free programming to better understand your perspective, @mAAdhaTTah!

Here's my anecdotal take: at Google, I work directly with a wide variety of engineers solving machine learning (ML) and user experience (UX) problems. I often help deliver Web apps as large as YouTube TV, AI image classification tools, and design system tools used by hundreds of Google teams world-wide.

For ML especially, pipelining with the linearization of functions first and foremost–with first-class composition of unary functions represented in a tacit manner–aligns with the mental model of how people prepare and transform data for ML use cases and CLI tooling (especially in bash).

In Julia you can today do

a |> add6 |> div4 |> println

For functions that aren't unary you'd import the Pipe package to then write something like

@pipe a |> addX(_,6) + divY(4,_) |> println

In Python you can use RxPython's pipe(), or write something like the following with short composable functions usually written using lambda:

compose(f, g, h)(10)

Either way, first class composition is represented in a tacit manner aligning with functional composition conventions.

The lack of a native pipeline operator in JS often gets in the way of human-resource alignment tasks (i.e. interview processes) and performance-optimization processes, because people who are accustomed to relying on libraries such as RxJS, LoDash, and Rambda to model problems in a pipeline-oriented manner are suddenly inconvenienced in such processes because it's not available natively in the language.

Tasks represented as functions programmatically are the default to be passed through pipelining abstractions such as a pipeline operator.

With my experience working with Code School, Treehouse, CodeNewbies, being a TA for peers at USC, and several prominent bootcamps in mind, this understanding of pipelining–and of the pipeline operator as representing functional composition–has been intuitive.

It accordingly hasn't been a problem to communicate f(g(x)) as x |> g |> f over the past 8 years I've been a full-time professional working for a variety of entities (start-ups, non-profits, corporate, etc.)–to existing SWEs, recent grads, interns, and new teammates who may not be accustomed to modeling their problems in a pipeline-oriented way.

For Web apps and UIs small and large, I've successfully integrated reactive functional programming into codebases, leveraging RxJS's Observable/Subject as an endofunctor abstraction to pipeline zero or more values over time efficiently. This has been extremely popular and well received, even by the harshest critics who were originally against such things.

Such style of programming is so popular at Google that Angular, our most popular front-end framework, uses such abstractions heavily in its end-user APIs.

Many love it so much that they leverage the framework-agnostic abstraction as is in a variety of front-end and back-end JS code. I use RxJS in my React, Vue, and Lit projects all the time–especially for async problems and data transformation tasks.

While there has been onboarding I've needed to lead, I've been credited with getting new college grads, existing engineers, and new hires quickly up to speed, able to contribute to and maintain code using such abstractions. A consistently large factor in why it has been easy for me to onboard engineering stakeholders is the intuitiveness of pipe.

The behavior of pipe(), acknowledged by the champions of Hack, coincidentally aligns with the pipeline behavior of the F#-style and so on. Unlike the Hack-style, it makes first-class composition clear and concise without needing any tokens other than parentheses around the functions it accepts as params to communicate first-class composition.

A consistent frustration among bootcamp students, recent grads, interns, and work colleagues is that it's hard for them to leverage their growing ability to solve problems in the pipeline-oriented manner they prefer without it being natively available in the language.

This is especially the case for global-scale projects where every byte counts; it's something that has made it difficult for me to comply with performance budgets I could not change. For recent grads and interns, it's frustrating that they can't dramatically simplify their functional code in interviews and so on, with the semantics of the pipeline operator not yet being natively available.

Since Hack rose to prominence by reaching this stage, my attempts to leverage it–through TypeScript forks and Babel, replacing pipe() behavior from libraries, or replacing the F# TypeScript fork with the Hack-style one–have been universally panned. So much so that I've had to revert such changes to keep using pipe, or revert back to using the F#/minimal-proposal TypeScript fork.

Throughout the summer, newcomers (especially CS grads) and existing engineers alike were consistently confused about why (^) is needed for unary functions/tasks that will be executed by JS in a chain expressed with the pipeline operator. "It breaks first-class composition semantics" was a common complaint.

It also added significant cognitive noise when chaining functions that return a function, which was expected. It confused them and me about how many parentheses were needed, whether they had set up the higher-order function correctly the first time, and so on; this is especially the case with shallowly curried functions:

// Hack style
x |> multiply(2)(^) |> add(10)(^)

Such problems essentially disappeared by merely replacing the Hack-style pipeline operator with pipe, or with the TypeScript fork supporting the F#-style/minimal-style version of the operator.

The hack-style operator in comparison seemed more arduous and verbose than it needed to be; as one intern best put it:

It's like JavaScript [is] now actually trying to be Java in its verbosity with this take on functional composition!

Because of this experience, I do not think Hack-style should go forward with its current treatment of passing in functions–especially unary ones–in its effort to accommodate non-functional expressions.

The Hack-style approach to linearization isn't what the majority of people had in mind with the realization of this operator in JS; the pipeline operator the majority of people had in mind is a functional composition operator that better communicates consumable data processed left-to-right, sequentially, by functions, with no tokens such as (^) needed to represent first-class composition.

I'm of the opinion that the sentiment that the pipeline operator needs to better accommodate unary functions in a tacit manner, for the sake of readability, exceeds the popularity of #SmooshGate (Array.prototype.flat); the impasse between JS developers and TC39 members on the matter accordingly, and understandably, leaves many wondering why more isn't done to accommodate this ubiquitous understanding of the pipeline operator.

ken-okabe commented 2 years ago

@lozandier

The pipeline operator is ubiquitously associated with writing code in a left-to-right functional composition manner across a variety of disciplines. That in and of itself is, technically, a minority way of programming.

This is not true. We use 1 + 2 + 3 or "hello " + "world" in JS.

Fundamentally, this proposal introduces a new binary operator to JS, in the same league as the exponentiation operator ** introduced in ES2016:

Math.pow(2, 3) == 2 ** 3
Math.pow(2, Math.pow(3, 5)) == 2 ** 3 ** 5  // ** is right-associative

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Exponentiation

The Hack-style approach to linearization isn't what majority of people had in mind with the realization of this operator in JS; the pipeline operator majority of people had in mind is a functional composition operator

To be exact, the pipeline operator the majority of people had in mind is a function application operator, which behaves like this:

a |> f |> g |> h ===
a |> f |> (g . h) ===
a |> f . (g . h) ===
a |> (f . g) |> h ===
a |> (f . g) . h ===
a |> (f . g . h) 

where |> is function application and . is function composition
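
These equivalences can be checked in today's JavaScript by modeling |> as a helper and . as left-to-right composition (names are ad hoc):

const applyTo = (a, f) => f(a);            // models a |> f
const then = (f, g) => (x) => g(f(x));     // models f . g, read left-to-right

const f = (n) => n + 1;
const g = (n) => n * 2;
const h = (n) => n - 3;
const a = 10;

h(g(f(a)));                                // a |> f |> g |> h    => 19
applyTo(applyTo(a, f), then(g, h));        // a |> f |> (g . h)   => 19
applyTo(applyTo(a, then(f, g)), h);        // a |> (f . g) |> h   => 19
applyTo(a, then(then(f, g), h));           // a |> (f . g . h)    => 19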

Please read https://github.com/tc39/proposal-pipeline-operator/issues/223#issuecomment-922841935 if you have time.

For "what majority of people had in mind with the realization of this operator in JS" is functional is true, in fact, there is a proof that I mentioned here: https://github.com/tc39/proposal-pipeline-operator/issues/228#issuecomment-925312884

Having said that, your statements are very reasonable, and the problem is that they will no longer listen to any reasonable claims, as he (the RxJS author) said: https://github.com/tc39/proposal-pipeline-operator/issues/228#issuecomment-925465598

I've said my piece. Pretty thoroughly. I wasn't really heard. And I lost a friend over it. I simply just don't care what happens with this anymore. Therefore, anything that might help functional programming libraries is unlikely to pass the TC39. It is what it is. It's the hack proposal or nothing.

We agree with you, so why don't you stop trying to persuade the members? Because it's a waste of your precious time. It's impossible to have a reasonable discussion with someone who has already decided to ignore any reasonable claims, and you and we are the ones forced to listen to unreasonable, empty explanations as if they matter a lot.

It's the hack proposal or nothing.

If it is a choice between poison and nothing, we want to choose nothing and wait for other features such as operator overloading.

So why don't you join Pipelines with operator overloading #228?

We don't want pipe any more. It's redundant given operator overloading. We no longer support this pipeline-operator proposal as a whole.

shuckster commented 2 years ago

Thank you for the reply @mAAdhaTTah . I must say that @lozandier's astonishing post is a far better rebuttal than I could conceive. It completely reshapes my own perspective on how up-and-coming the FP JavaScript landscape really is, and I'm grateful to hear the encouraging anecdotes about its take-up.

It also inflames my concern for just how much FP opportunity is up for squandering by choosing the wrong pipe operator. We are so very lucky to find ourselves with first-class functions in this crazy language, and I say this not at all to diminish the hard work of the VM authors who have been practically forced to improve it because of the explosion of the web as a platform. (I think they enjoy the challenge, though. ;) )

To keep it on the topic of readability, I see far greater potential for that in the resulting APIs of FP libraries that would use the pipeline operator they want, rather than a token-based one that only seems to help those on the very first rungs of the FP ladder (and judging from @lozandier 's reply, I'm no longer convinced they stay there for very long!)

mAAdhaTTah commented 2 years ago

@lozandier I appreciate the thoughtful response. I started writing up a response of my own, but I deleted it because it would be shitty of me to argue with your experience, even if it's different than mine. Instead, given that you've done some exploration of a transition to Hack in a functional codebase, I would be appreciative if you could share some of those before and after code samples (if you have them and are able to share them – I understand work code may not be sharable). As I argued here, I believe Hack pipe would be beneficial to the functional community & functional programming in JavaScript, so I'd be curious to see, in practical terms, the downside to that style in Hack in real-world code, rather than the add/multiply trivial examples that have so far left me unconvinced.

SRachamim commented 2 years ago

I still don't get it. Why do we even discuss the nature of a naturally occurring, ancient, discovered, proven concept of function composition? A feature which has been implemented correctly in almost all modern languages? A pipe should just let us pipe a value through one or more functions. There's only one way to go: a |> f === f(a). The only thing we need to discuss on a pipeline operator proposal is which symbol to use...

That's how simple a pipe is. Why ruin it and make it something which is not?

I wish we'd abandon this proposal entirely. I'd rather see the minimal pipe or nothing at all.

Nazzanuk commented 2 years ago

As a non-purist I do think that the Hack proposal isn't the shortest and sweetest syntax. But the benefit of having the flexibility of the right-hand side being any expression is very intriguing/appealing, and I can't wait to be able to use it.

From my general perspective, I consider linearising expressions far more beneficial than a pure yet restricted functional approach. So I'm optimistic that it's going to be a great addition to JS and will save a lot of unnecessary temporary naming and advance declaration of non-unary functions, which the F# solution appears to require.

I'm confident that, as with most things, once it has been used enough times, the perceived lack of readability of the operator will slowly cede to the familiarity of using it so frequently.

The prevailing opinion in this thread seems to be "my way or nothing at all" - I'm actually quite happy with this more rounded, general solution.

lozandier commented 2 years ago

@Nazzanuk A pipeline is ubiquitously understood in data science and computer science as a series of logic blocks/tasks/subroutines that will be executed, in order, when data becomes available.

Please provide examples of expressions today that you can't encapsulate in a function (logical blocks of execution / imperative code packaged as a single unit to perform a specific task) that you think Hack-style helps with, to justify:

1) Its right hand expressions not being automatically inferred as (<piped data>) => <expression>

2) Its verbosity for unary functions, which represent the simplest subroutines you can write in JS to consume data while still being instances of the first-class Function object

Note at the time of this writing, functions only return one value in JS (no tuples).

The most essential representation of a pipeline task is a unary function; its being more verbose to write than the expressions that are far rarer to pipeline via the Hack-style pipeline operator is something a meaningful number of developers have objected to over almost half a decade.

Nazzanuk commented 2 years ago

@lozandier I'll contend that 3 characters' worth of verbosity for unary functions is a very fair trade-off for less verbosity around literally every other use case.

The syntax for arithmetic, awaiting promises, object literals, template literals, etc. all looks comparatively nicer. I spend more time with these as a whole than I do composing pure functional pipe operations; I appreciate that is probably not the same for everybody.

The Hack-style looks like a great implementation overall for JS, providing readability benefits outside of the functional scope. It may not be the purest most essential representation of a pipe from a data science and computer science perspective because of an extra 3 characters - but I think that's ok.

lozandier commented 2 years ago

@Nazzanuk That doesn't necessarily answer my question, but when it comes to verbosity, it needs to be asked why expressions can't be automatically rendered as unary functions for developers, with a pipeline operator that still enables tacit syntax when chaining unary functions:

// latter expression should be the same as (x) => ({bar: x});
value |> foo |> {bar: ?};

// latter should be the same as (x) => `Hey, ${x}!`; 
value |> foo |> `Hey, ${?}!` 

// latter should be same as async (x) => await bar(x);
value |> foo |> await bar

benlesh commented 2 years ago

Readability is pretty subjective, but with regards to helping with readability around arithmetic and awaiting, I'm unsure it's better than what is currently available:

Chaining arithmetic operations with |> is a bit rough looking. x |> ^ + 2 |> 10 / ^ Who would do that when they can just do 10 / (x + 2)?

Chaining awaited promises is already achievable, in a more memory-efficient way*, with .then(). x |> await foo(^) |> await bar(^) |> await baz(^, 'something') is just foo(x).then(bar).then(x => baz(x, 'something')).

(* large async-await blocks will not GC what is in the async function's scope, whereas then chains will GC what they don't use, generally)

Honestly, yield may be the only thing here that is solidly different with the current proposal. Although, much like generator functions, I seriously doubt most people have the chops (or the need) to use them for coroutines.

lozandier commented 2 years ago

@lozandier I appreciate the thoughtful response. I started writing up a response of my own, but I deleted it because it would be shitty of me to argue with your experience, even if it's different than mine. Instead, given that you've done some exploration of a transition to Hack in a functional codebase, I would be appreciative if you could share some of those before and after code samples (if you have them and are able to share them – I understand work code may not be sharable). As I argued here, I believe Hack pipe would be beneficial to the functional community & functional programming in JavaScript, so I'd be curious to see, in practical terms, the downside to that style in Hack in real-world code, rather than the add/multiply trivial examples that have so far left me unconvinced.

@mAAdhaTTah Yeah, my work code is unfortunately not sharable; I currently work for Google's AI Responsible Innovation Team; my code is tightly coupled with sensitive and proprietary code. I've only recently begun to slowly do open-source code again with Chrome and Material.

FWIW, @benlesh's code examples that you've probably already seen pretty much align with how I typically write code; I prefer to solve problems using the reactive functional programming (RFP) paradigm.

I use RxSwift, RxJS/IxJS, RxPython, RxJava/RxKotlin, and so on to normalize my ability to use that paradigm across languages to solve problems while simultaneously advocating for end users as a User Experience Engineer.

Concerns with the arduous nature of Hack-style pipelining with mixins

Something I forgot to mention with my earlier response is Hack-style's handling of Mixins:

class Comment extends Model |> Editable(^) |> Sharable(^) {}

In my experience, code such as the above has been deemed unnecessarily arduous compared to just

class Comment extends Model |> Editable |> Sharable {}

I utilize mixins fairly often (i.e. mixing shareable behavior onto UI components).

For the following code:

class AppRoot extends connect(store)(LitElement){}

Hack-style would require me to write the following:

// Hack-style
class AppRoot extends LitElement |> connect(store)(^) {}

instead of the following, which matches ordinary pipeline functional composition semantics:

class AppRoot extends LitElement |> connect(store) {}

The latter is seen by my colleagues and me as far more readable, with the ubiquitous pipeline functional composition conventions I mentioned in my previous comment in mind.

Nazzanuk commented 2 years ago

@lozandier Fair, that is probably optimal and I'd be happy with that as well; I'd argue that the Hack-style would be consistent and still very succinct and readable.

// latter expression should be same as (x) => ({bar: x });
value |> foo(^) |> {bar: ^};

// latter same as (x) => `Hey, ${x}!`; 
value |> foo(^) |> `Hey, ${^}!` 

// latter as async (x) => await bar(x);
value |> foo(^) |> await bar(^)

But it really does seem trivial to be a sticking point, considering the enormous benefits the operator will bring as a whole.

Nazzanuk commented 2 years ago

Readability is pretty subjective, but with regards to helping with readability around arithmetic and awaiting, I'm unsure it's better than what is currently available:

Chaining arithmetic operations with |> is a bit rough looking. x |> ^ + 2 |> 10 / ^ Who would do that when they can just do 10 / (x + 2)?

I'd probably expect the chaining to be done around the arithmetic not inside

x |> await fetchVal(^) |> 10 / (^ + 2) |> await saveVals([otherVal, ^])

Chaining awaited promises is already achievable, in a more memory-efficient way*, with .then(). x |> await foo(^) |> await bar(^) |> await baz(^, 'something') is just foo(x).then(bar).then(x => baz(x, 'something')).

Yeah, chaining promises is great, but it's obviously limited to promises; I think the value is the flexibility in more general usage

x |> await foo(^) |> convertData(arg, ^) |> `hello: ${^.name}`

lightmare commented 2 years ago

(* large async-await blocks will not GC what is in the async function's scope, whereas then chains will GC what they don't use, generally)

This is an interesting point. Maybe it deserves its own discussion (couldn't find one; has there been any?)

The way Hack pipe is specced right now, it keeps intermediate topic values alive for the whole duration of the pipeline. That makes it kinda unsuitable for chaining awaits.

mAAdhaTTah commented 2 years ago

I'll be frank, I don't see this as "arduous":

class Comment extends Model |> Editable(^) |> Sharable(^) {}

This fits into the same category to me as the multiply / add function examples seen elsewhere: they're small examples with marginal differences in the syntax. It's marginally worse than the point-free version, sure, but I don't see that as rising to the level of "arduous", hence the reason I'm interested in more involved examples.

The connect example is more interesting because while yes, the current API would require you to write it as:

class AppRoot extends LitElement |> connect(store)(^) {}

My suggestion is that the API itself should change to an uncurried alternative:

class AppRoot extends LitElement |> connect(^, store) {}

It's only necessary to write const connect = store => element => ... because of its interaction with the pipe function and the tools currently available for function composition in JavaScript. As you mentioned up top, functions need to be designed for composition, requiring its own calling style which looks weird when used outside of the pipeline. An operator which leans into mainstream call syntax doesn't require functions to be designed for it, so you can go back to writing connect such that it takes all its arguments at once.
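
For illustration, the two shapes of connect being discussed (sketched signatures only, not any library's actual API):

// curried shape, written for point-free composition / the pipe() helper
const connect = (store) => (element) =>
  class extends element {
    // wire the element up to the store here
  };

// uncurried shape, written for plain call syntax and Hack pipes
// (used as: class AppRoot extends LitElement |> connect(^, store) {})
const connectUncurried = (element, store) =>
  class extends element {
    // wire the element up to the store here
  };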

This dovetails with @lightmare's comments, although I concede the Hack version of RxJS specifically (and importable methods generally) is worse than using .pipe in that example and prefer dedicated syntax for that case.

benlesh commented 2 years ago

Funny enough, I think this reads more fluently:

class Comment extends Sharable(Editable(Model)) {}

And this reads better too, IMO:

class AppRoot extends connect(LitElement, store) {}

I don't think I'd even want the F# pipeline for either of those, TBH. Even if it would still be a little cleaner to read than the current proposal. Just doesn't seem like a good use for a pipeline.

mAAdhaTTah commented 2 years ago

@benlesh I agree with that first one, in that it literally reads as a "sharable, editable model". I think if we combined the two examples, we'd see an advantage to piping them (setting aside the fact that we're mixing two paradigms here):

// Unpiped
class MyModel extends Sharable(Editable(connect(LitElement, store))) {}

// With pipe
class AppRoot extends LitElement |> connect(^, store) |> Sharable(^) |> Editable(^) {}

The latter version (imo) makes the base class clear by putting it first, then listing the mixins it's composed with, while the first one starts to suffer from the "inside out evaluation" issue pipes are intended to address.

lozandier commented 2 years ago

@benlesh I agree with that first one, in that it literally reads as a "sharable, editable model". I think if we combined the two examples, we'd see an advantage to piping them (setting aside the fact that we're mixing two paradigms here):

// Unpiped
class MyModel extends Sharable(Editable(connect(LitElement, store))) {}

// With pipe
class AppRoot extends LitElement |> connect(^, store) |> Sharable(^) |> Editable(^) {}

The latter version (imo) makes the base class clear by putting it first, then listing the mixins it's composed with, while the first one starts to suffer from the "inside out evaluation" issue pipes are intended to address.

@mAAdhaTTah @benlesh Yeah, this example is much more real-world; I try to keep my examples small, to a fault. Pipelining makes the base class clear, which is why I primarily pursue pipeline syntax with mixins; the benefit is somewhat marginal if only a small number of mixins is needed.

That said, I prefer

class AppRoot extends LitElement |> connect(^, store) |> Sharable |> Editable {}

My chains often are 5+ tasks long–this is especially the case for data transformations common in data science tasks involving things like normalization; as mentioned in my comment up top, solving such problems with a pipeline-oriented approach is common [with data-science-oriented languages like Julia already having a pipeline operator (|>)].

With RxJS / IxJS in mind, it's not uncommon to have multiple instances of map(), tap(), etc. in a data-processing pipeline or in pipeline processing involving asynchronous UI code.
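
For example, a typical RxJS chain today versus a sketch of the same steps with Hack pipes (assuming RxJS 7+; the Hack line is proposed syntax, not runnable today):

import { of, map, filter, tap } from "rxjs";

// today: RxJS's own pipe method keeps each operator tacit
of(1, 2, 3, 4)
  .pipe(
    filter((n) => n % 2 === 0),
    map((n) => n * 10),
    tap((n) => console.log(n)),
  )
  .subscribe();

// with Hack pipes, every step needs the topic token, e.g.
//   source |> filter((n) => n % 2 === 0)(^) |> map((n) => n * 10)(^) |> ...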

The need to type (^) for first-class composition accordingly adds up quickly. When I say it's arduous to use the Hack-style syntax for mixins here, I mean it is so physically and ergonomically.

Even as a Kinesis Advantage 2 and ZSA Moonlander user, I'm legitimately concerned about my pinkies throughout a workday from having to type (^) every time I want to do first-class composition. :)

So much so that, I daresay, I would at first rather type

Sharable(Editable(connect(LitElement, store)))

for ease of typing, instead of rewriting it with the Hack-style pipeline operator–even though I find it far less readable than using the Hack-style pipeline–because of how tedious it'd be to type (^) over and over again.

It's happened in trials already; the logic of such behavior is that you're merely replacing the () of nested composition with |> and (^) to communicate first-class/basic composition with the Hack-style pipeline operator. Experiences like this are why I don't think it's valuable to add (^) for first-class composition.

The appeal of a dedicated pipeline operator to me is it being more fluid, with potentially fewer characters needed (usually the same: |> vs ()), to functionally compose with unary functions–which, again, merely represent the simplest and most essential subroutine to add to a pipeline of tasks that will be executed, in order, when data becomes available immediately or over time.

All that said, I would very much prefer not needing to rewrite connect at all as you have.

class AppRoot extends LitElement |> connect(store) |> Sharable |> Editable {}

It's also not necessarily easy, or even possible, to rewrite connect (though connect can be made curryable in a way that supports both, for optimal versatility), and it's problematic that I would need to because of Hack's cognitive tax with its use of (^).

In my opinion, a good pipeline operator accommodates all functions common in composition. It's evident HOFs are a common enough type of valid function that people write to compose with; they're common enough in general JS codebases that employers have audited JS developers on whether they understand them in interviews for years.

tabatkins commented 2 years ago

// latter should be same as async (x) => await bar(x);
value |> foo |> await bar

Note that this is not true; this is why the F#-style proposal had to have a special "bare await" syntax. Just putting an async function in a pipeline means the next step in the pipeline will see a promise, not an awaited value; you've effectively written an identity function there. You have to switch over to piping via .then(), and if you want to do normal piping on the value after that point it'll be nested into the callback.
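
A sketch of what happens, with plain calls standing in for the pipeline (foo and bar are hypothetical):

// foo is sync, bar is async
const foo = (v) => v + 1;
const bar = async (v) => v * 2;

// "value |> foo |> bar" in F#-style is just bar(foo(1)) — a Promise, not 4
const result = bar(foo(1));
console.log(result instanceof Promise); // true

// to keep working with the resolved value you have to switch to .then()
bar(foo(1)).then((n) => console.log(n)); // 4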

lozandier commented 2 years ago

@tabatkins That was hypothetical syntax. I was suggesting something can be hashed out to allow that syntax.

As an aside, will there be examples of the handling of Promise-based abstractions failing within chains?

tabatkins commented 2 years ago

I was suggesting something can be hashed out to allow that syntax.

Implicitly awaiting an async function isn't viable; that was discussed back when we were first adding Promises to the language and got rejected pretty hard, for code-predictability and perf reasons. It needs an explicit await somehow, and the only way to do that in F#-style is to have a syntax carve-out (or do nothing, so you'd have to use parens around the pipeline and prefix it with an await, but that's pretty bad).

As an aside, will there be examples of the handling of Promise-based abstractions failing within chains?

Sure, what sort of example are you looking for? The code's the same as without pipelines - if you await a rejected promise, the rejection is thrown and you can try/catch it. Or you can call .then() on the promise to handle it directly, if you prefer.
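
For instance, a minimal sketch of the try/catch path (using this thread's ^ placeholder and hypothetical url/parseCsv/reportError names; illustrative only):

try {
  return url |> await fetch(^) |> await ^.text() |> parseCsv(^);
} catch (err) {
  // A rejected fetch (or any throwing step) surfaces here as a thrown exception.
  return reportError(err);
}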

mAAdhaTTah commented 2 years ago

FWIW, this example is effectively the "Smart Mix" proposal we had:

class AppRoot extends LitElement |> connect(^, store) |> Sharable |> Editable {}

The issue with this is that the RHS of the pipe changes from "function application" to "expression evaluation" depending on the presence of the placeholder, which is a refactoring hazard. It also wouldn't have supported LitElement |> connect(store), because "bare style" was limited to bare identifiers in order to minimize those hazards. The champion group eventually dropped Smart Mix in favor of Hack.

It accordingly adds up quickly, needing to type (^) for first-class composition. When I say it's arduous to use the hack-style syntax for mixins here, I mean that literally: physically and ergonomically.

My disagreement here is in privileging the typing of the pipe over the reading of it. If code is generally read far more often than it is written, then we should be privileging reading it. The version with the placeholder is more readable (to me) because it makes explicit that which is implicit in the other examples.

This is admittedly my issue with point-free function composition in general. While the initial writing of said code can produce elegant-looking syntax, I have found testing & debugging code written in that style extremely challenging. By extension, while I can see the physical tax of literally typing ^, I find the cognitive tax of point-free code to be much higher, especially for anyone transitioning into it from non-point-free-style code.

As an aside, I suspect we'll end up back at % as the token (see latest on #91), although I don't think it changes the point you're making.

In my opinion, a good pipeline operator is accommodating of all functions common in composition.

This is the thing tho: F# pipe doesn't actually do that. It specifically, narrowly accommodates functions written in a unary, point-free style. If I want to compose anything else, even other functions, I have to rewrite it in a particular way, or use a curry helper, or wrap it in an arrow function. With Hack pipe, we can accommodate function composition with functions that aren't written in this narrow style but are otherwise perfectly composable.
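
To make that concrete (sketch only, using the ^ placeholder as elsewhere in this thread; not valid JavaScript today):

// Hack-style: any call shape drops straight in, no wrapper needed.
value |> JSON.stringify(^, null, 2) |> ^.length;

// F#-style: the same two steps need arrow wrappers (or a curry helper).
value |> (v => JSON.stringify(v, null, 2)) |> (s => s.length);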

lozandier commented 2 years ago

I was suggesting something can be hashed out to allow that syntax.

Implicitly awaiting an async function isn't viable; that was discussed back when we were first adding Promises to the language and got rejected pretty hard, for code-predictability and perf reasons. It needs an explicit await somehow, and the only way to do that in F#-style is to have a syntax carve-out (or do nothing, so you'd have to use parens around the pipeline and prefix it with an await, but that's pretty bad).

As an aside, will there be examples of the handling of Promise-based abstractions failing within chains?

Sure, what sort of example are you looking for? The code's the same as without pipelines - if you await a rejected promise, the rejection is thrown and you can try/catch it. Or you can call .then() on the promise to handle it directly, if you prefer.

Showing errors handled directly with then would suffice for what I'm looking for.
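
Something along these lines would presumably cover it (purely illustrative, with the ^ placeholder and hypothetical url/fallbackCsv/parseCsv names):

url
  |> fetch(^)
  |> ^.then(
       response => response.text(),
       err => fallbackCsv  // rejection handled directly in .then()
     )
  |> await ^
  |> parseCsv(^);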

lozandier commented 2 years ago

This is the thing tho: F# pipe doesn't actually do that. It specifically, narrowly accommodates functions written in a unary, point-free style. If I want to compose anything else, even other functions, I have to rewrite it in a particular way, or use a curry helper, or wrap it in an arrow function. With Hack pipe, we can accommodate function composition with functions that aren't written in this narrow style but are otherwise perfectly composable.

This is where we often reach an impasse, and it's the common reason the response to the hack-style has been so contentious regardless of how frequently someone "functionally programs": the simplest and most essential representation of a subroutine in a pipeline is one that takes the starting data or the result of the previous subroutine in the pipeline. That's what a unary function is.

This is accordingly why there's severe frustration about needing (^) at all when you know the result of the previous task/subroutine must pipe into the following subroutine.

|> alone is sufficient to communicate this; (^) is entirely redundant when tacked onto unary functions, where you already know the previous result should be passed along. A placeholder token such as ^ is desired only when you need to redirect the result to a parameter other than the first parameter of the following subroutine; there it makes sense for efficient typing and clarity:

2 |> double |> adjustStatOfCharacter("Avery", "speed", ^) 

The simplest function you can write to integrate directly into a pipeline is a unary function, so it is strongly desirable for unary functions to be tacit in representation. Everything else that isn't a function being represented as (<piped data>) => <expression> seems pretty clear to me.
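
Put concretely, the mixed style being described would read something like this (illustrative sketch only, reusing double and adjustStatOfCharacter from above plus a hypothetical clamp helper; this is not either current proposal verbatim):

const double = (n) => n * 2;
const clamp = (max) => (n) => Math.min(n, max);

2
  |> double                                      // unary: tacit, no placeholder
  |> clamp(10)                                   // curried HOF: still tacit
  |> adjustStatOfCharacter("Avery", "speed", ^)  // placeholder only to redirect into a non-first parameter
  |> (result => ({ ok: true, result }));         // anything else: a plain arrow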

arendjr commented 2 years ago

Glad to see the discussion advanced healthily :)

I only have a few specifics left I would like to raise: after reading all the replies, I’m still squarely in the camp where I prefer no pipe operator over the current proposal.

However, what really surprised me was someone’s claim that temporary variables are an anti-pattern. I mean, what?! Sure, I see examples with excessive amounts of temporaries that are supposed to justify the need for the Hack proposal, but I’m not buying it. Most of those examples could be cleaned up with a slight amount of common sense and no new operators.

In fact, I see temporaries as something valuable for readability, because they provide context for what is going on. They provide breathing room in trying to understand how a function works.

Let’s look at an innocent example of fetching some data using fetch (in following one of the themes used above):

const headers = { Accept: "text/csv" };
if (token) {
    headers.Authorization = `Bearer ${token}`;
}

const url = `${baseUrl}/data/export`;

const csv = await fetch(url, { headers })
    .then(response => response.text());

return parseCsv(csv);

I would say that’s fine code that doesn’t need any intervention by new operators.

But some would apparently rewrite it like this:

const headers = { Accept: "text/csv" };
if (token) {
    headers.Authorization = `Bearer ${token}`;
}

return `${baseUrl}/data/export`
    |> await fetch(^, { headers })
    |> await ^.text()
    |> parseCsv(^);

But what's the benefit here? The code was already linearized; is it really just so that we can eliminate all temporaries? In fact, I think we already lost something here: context as to what all the temporaries are! It might be hard to see, because I take it everyone in this thread is familiar enough with the fetch() API that the above poses no real problems. But imagine you're a junior and you have to parse the above: you just cannot catch a break. You have to understand the whole thing to even make sense of the individual lines.

In a pipe, you need to understand the lines before a step to understand what the input is and you need to understand the lines after to understand what the output is. If you’re not familiar with the API being used that can be a real challenge. One that named temporaries help you with.

But if elimination of temporaries is really the goal, why not go all the way:

return Object.fromEntries(
    [["Accept", "text/csv"]]
        .concat(token ? [["Authorization", `Bearer ${token}`]] : [])
)
    |> await fetch(`${baseUrl}/data/export`, { headers: ^ })
    |> await ^.text()
    |> parseCsv(^);

I hope no one thinks this is desirable. So where do we draw the line? Apparently not all temporaries are an anti-pattern. And where each of us draws the line is just a subjective matter based on our values.

And it's with those values that I think the crux lies. Maybe my values are out of line, but I suspect it's the values of this proposal's champions that are. I can very well imagine how, after championing a pipe proposal for years, you might feel so inclined towards it that common sense becomes an anti-pattern.

I find it telling that when I posted my article to Reddit I got several dozens of responses that could be roughly grouped into two camps: people that were happy because they read it as an endorsement of F# and people that were appalled because they don’t want any pipe operator at all. Not a single person spoke up in defense of Hack.

Maybe Reddit doesn’t represent a majority opinion, but maybe the champions don’t either. Personally, I think I’m a rather mainstream guy doing React and Redux at work. As far as I know, none of my current or previous co-workers are wishing for Hack, but maybe that’s not a majority opinion either.

Yet I keep seeing these claims from the champions that Hack is better for the language overall and that it is better for “everyone”. But they make these claims through abstract reasoning, where they assume “everybody” (or at least a majority) shares their values. And they might feel emboldened because TC39 shares their view when it comes to F# vs. Hack. But they might vastly overestimate how many in the community really want a pipeline operator (if it's not F#) to begin with.

So my advice is: we should find quantitative evidence that there is a real majority that would find this proposal more helpful than harmful (because of readability concerns or otherwise).

arendjr commented 2 years ago

To illustrate the value of temporaries, I looked at some code snippets from my open-source projects and rewrote them using Hack:

return `(${BLOCK_ELEMENTS.join("|")}|br)`
    |> new RegExp(`^<${^}[\t\n\f\r ]*/?>`, "i")
    |> ^.test(string.slice(index));

(Original: https://github.com/arendjr/text-clipper/blob/master/src/index.ts#L556)

index = findResultItem(resultItems, highlightedResult.id)
    |> resultItems.indexOf(^) + delta;

(Original: https://github.com/arendjr/selectivity/blob/master/src/plugins/keyboard.js#L54)

const room2Ref = rooms.get(getId(portal.room))
    |> {
        description: "",
        flags: ^.flags,
        name: "",
        portals: [],
        position: portal.room2.position,
    }
    |> await sendApiCall(`object-create room ${JSON.stringify(^)}`);

(Original: https://github.com/arendjr/PlainText/blob/main/web/map_editor/components/map_editor.js#L99)

This is the type of code you would be running into in the wild if the Hack proposal were accepted, and frankly I think we'd be worse off for it.

Another readability downside: note how all of these expressions move the actual result away from what we're trying to do with them. In the first example, the return is moved away from the call to RegExp#test(), making it harder to see what is actually being returned. In the third example, const room2Ref = is moved away from await sendApiCall(), making it harder to see what room2Ref ends up being, or (the inverse) to see what we are actually doing with the result of the API call. To some extent, the Hack operator turned already linear code into less linear code.


Interestingly, while doing this exercise I quickly realized how few code snippets actually lend themselves to being "pipelined" at all. I ran into multiple snippets where I thought I could transform them, only to realize a temporary was referenced by more than one expression afterwards, making a pipeline unsuitable. While this is not a readability argument against pipelines, it is a maintainability one: imagine needing to make a modification to a function, only to realize some intermediate value is now needed more than once. Suddenly you'll be stuck tearing open the pipeline, moving results back into temporaries, and rewriting the code.
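
A small hypothetical sketch of that hazard (invented fetchUser/formatGreeting/logAccess names; Hack-style ^ as used in this thread):

// Before: a single consumer, so the pipeline reads fine.
const greeting = userId
  |> await fetchUser(^)
  |> formatGreeting(^);

// After: the intermediate user is needed twice, so the pipeline
// has to be torn back open into a named temporary.
const user = await fetchUser(userId);
logAccess(user.id);
const greeting = formatGreeting(user);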

boonyasukd commented 2 years ago

Interesting discussion regarding class declarations. As mixins are themselves an OOP concept, and we nest them to achieve code composability and readability in the first place (Sharable and Editable are adjectives and are expected to nest), I would never have thought of using them with a pipe operator on my own.

That said, if people want their BaseClass to be leftmost for clarity, while retaining the readability of mixins in general, the solution is to mix (heh) styles to get the best of both the OOP and FP worlds; the trick here is to find just "the right blend":

class AppRoot extends LitElement |> Sharable(Editable(storeConnected(^, store))) {}

The above basically reads: "AppRoot is a LitElement that is sharable, editable, and store-connected." And, to me, that reads like regular English already. It's true that you still need to type 3 more chars toward the end, but the difference between the F# and Hack styles is minuscule and doesn't affect the understandability or maintainability of this code. If anything, people from either an FP or OOP background can understand such code and have no trouble maintaining it. And 6 months, 3 years, or 6 years from now, the code will still be as readable as when it was first written.


From my experience, the mistake people often make when utilizing any programming style is to go "too deep in", to the point of absurdity. Yes, if you abuse the Hack pipe operator hard enough, you reach the point of diminishing returns. But that very same statement can also be made about both the F# and Hack styles, or even arrow functions/HOFs in general, for that matter. No syntax or programming style in existence is invulnerable to abuse. Like many other aspects of life, moderation is key.

arendjr commented 2 years ago

From my experience, the mistake people often make when utilizing any programming style is to go "too deep in", to the point of absurdity. Yes, if you abuse the Hack pipe operator hard enough, you reach the point of diminishing returns. But that very same statement can also be made about both the F# and Hack styles, or even arrow functions/HOFs in general, for that matter. No syntax or programming style in existence is invulnerable to abuse. Like many other aspects of life, moderation is key.

In my opinion, this is a rather half-hearted attempt to dismiss concerns with the Hack proposal (paraphrasing): "anything can be abused, so the fact that Hack can be abused is no argument against it". My argument, however, is not so much that it can be abused, but that, by my readability standards, most of the examples touted by the champions as "improvements" already constitute "abuses" (i.e. they are worse than what we can write today with the status quo and some common sense), and that, by being applicable to any arbitrary expression, it encourages its own use in scenarios that I would clearly characterize as abuse. In fact, between the status quo and the various abuses this operator encourages, I see only a very thin band of legitimate usage for it, and that is what makes me say its value is outweighed by its harmfulness.

Just because both spoons and guns can be abused, doesn't mean we should allow or disallow both to an equal extent.

SRachamim commented 2 years ago

In general, twisting the pipeline operator to support anything that is not a function is abuse. A pipeline operator should just pipe a value through one or more functions. Doing flip-flops to support things like new and await just shows that we didn't understand the simplicity of a pipeline operator.

Please put the proposal on hold, so we'll all have a chance to understand its simplicity before we introduce irreversible damage to the language.

shuckster commented 2 years ago

From @mAAdhaTTah :

My disagreement here is in privileging the typing of the pipe over the reading of it. If code is generally read far more often than it is written, then we should be privileging reading it.

From the proposal:

It is often simply too tedious and wordy to write code with a long sequence of temporary, single-use variables. It is arguably even tedious and visually noisy for a human to read, too.

Temporary variables have a chance of being renamed on refactor. Actually, has refactoring been considered at all?

Code that is hard to intuit can at least be helped in the first pass of a refactor by renaming. Perhaps we can overcome this in Hack, but there's still no chance of developing the same kind of intuition that is already writ large in the minds of the FP developers clamoring for the feature in the first place: that everything in a pipeline is a function that takes the result of the previous step as its only argument.

Come to think of it, intuition is extremely important for readability. It's easy to forget the intuitions we've built up over time. Reading new code that is written to convention, coupled with good naming practices, rewards our intuitions with the delight of quickly grokking what it does.

Learning a new convention also rewards us with revelatory understanding of code that was abstruse only 5 minutes ago.

Do Hack and F# offer equivalent opportunities for learning conventions of this kind? By themselves they both conceal information, one with a token, the other with tacitness. But the learned conventions that overcome this "mystery meat" are not equivalent.

In Hack's case, the conventions are not even transferable. It may be easy in principle to learn the purpose of the Hack placeholder, but it's an obstacle to developing intuitions about pipelines generally, and the first-class functional nature of the language specifically.

Learning what the Hack tokens mean will reward the developer with an understanding of Hack pipes, but really nothing else. In what sense then is the Hack proposal "privileging reading code"?