Open eborden opened 7 years ago
This is a fair question, so I’ll answer point by point.
Have the contributors lost interest?
No. As far as I can tell, all of us on the committee are still very much interested in seeing this through. I can say with 100% certainty that I am, as are several others I've spoken to about this.
Is there a part of the committee process that is broken, preventing PRs from ever advancing past discussion?
This is hard to say. My gut says no. I think the current issue we are facing is paralysis due to diffusion of responsibility. Additionally, no one wants to step on anyone else’s toes, so I think we’ve started off a bit more cautious than is ideal.
I’m going to personally take the blame for this as I’ve been meaning to be more proactive and haven’t been.
Are meetings happening without notes being posted for transparency and public consumption?
Not that I know of.
I have spoken to other committee members about it and about the process, but nothing that would be considered a ‘meeting’. It’s usually just two colleagues discussing their thoughts.
I hope that gives you some perspective. Speaking only for myself, I plan on being much more active in the coming months (summer tends to be a very productive period for many of us).
It’s been quite silent over the past months, yes. A couple of (mostly cosmetic) discussions went quite far, to the point where we could alter the Report and talk about the actual (as in LaTeX) change. What we’re still clearly missing, though, are the »big« proposals that change the type system, e.g. RankNTypes, but I’m not familiar enough with the type theory and implementation details to create such a proposal.
This is hard to say. My gut says no. I think the current issue we are facing is paralysis due to diffusion of responsibility.
I had a feeling that this was the likely culprit. Currently the committee is composed of 20 equal members. This seems like a recipe for paralysis since there are no clearly defined roles.
A committee can easily be seen as an elongated meeting. It is then likely that tactics for running effective meetings would work well within this paradigm. There are three basic roles (among many more) that are helpful to assign in order to have an effective meeting.
This is obviously just skimming the surface, but by defining roles the Haskell 2020 committee can ensure success. Individuals can contribute at any level, but there are blessed individuals who have accepted a very specific responsibility designed to keep the project on track.
I'm with @jmct. I would love to see this move forward, but I have not had the ability (i.e. time) to move it along myself. The suggestions above about specific roles seem apt.
I guess there is also the question of how an item advances through the stages of the process. Does it require unanimous consent, a simple majority, or some kind of quorum?
Honestly, at least for me, I've found the GitHub workflow a bit too heavyweight compared to a mailing list for the time and energy I have available.
There's value in organized record keeping and compilation of changes, but it adds overhead to any communication within the committee.
I for one admit to being derelict in my duties, but I hope to have some more time over the summer.
A couple of (mostly cosmetic) discussions went quite far, to the point where we could alter the Report and talk about the actual (as in LaTeX) change.
Perhaps the way forward is to bring to completion at least these cosmetic discussions. Personally I'm not clear on the process of making the actual LaTeX changes. Where is the repository, who has the write permissions, who can submit patches/pull requests, and who merges them in? I'd like to have something concrete to do, no matter how trivial.
Any updates on this issue? Has the process stalled?
it moves in spurts, and not all preliminary work by folks is online afaik ;)
On Wed, Oct 10, 2018 at 3:23 PM Evan Borden notifications@github.com wrote:
Any updates on this issue. Has the process stalled?
Is there any interest in producing a status report for the community (100 - 300 words on current progress)? I've seen many signs that the community has lost faith that this process will succeed. A status report would do much in the way of engendering trust and increasing transparency in the process.
Hi,
On 10/11/2018 12:07 AM, Evan Borden wrote:
Is there any interest in producing a status report for the community (100 - 300 words on current progress)? I've seen many signs that the community has lost faith that this process will succeed. A status report would do much in the way of engendering trust and increasing transparency in the process.
Yes, definitely. This would also be useful for the committee, I think. And I would like to do that. But I think it needs to be more substantial than 100 - 300 words to be meaningful and genuinely useful.
Best,
/Henrik
To foster the voting/decision process:
You can use the underrated but ingenious Tricider web voting system. It has previews, ideas, arguments and counterarguments, comment threads, and votes on ideas and (counter)arguments (small example). It can be used for a closed summing-up discussion and vote. Note: logging in through Google (i.e. via their OpenID flow) currently doesn't work "on my machine".
Also, a reminder: there is a solid, easy-to-follow mathematical algorithm for helping people make a well-weighted choice in complex multi-criteria decisions between any number of alternatives. It is called the "Analytic Hierarchy Process", by Thomas L. Saaty, and there is even an implementation by yours truly. It is well loved in business. I have used the algorithm myself to make decisions in complex situations a couple of times; it offsets the approximations a human is asked to enter, and it works scientifically remarkably well.
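As a sketch of what AHP actually computes (this uses the standard row geometric-mean approximation of the priority vector rather than the full principal-eigenvector computation; it is not the code from the implementation mentioned above, and all names are mine):

```haskell
-- Approximate AHP priority weights from a pairwise comparison matrix,
-- where matrix[i][j] says how strongly alternative i is preferred to j.
-- The row geometric-mean method is a common stand-in for Saaty's
-- principal-eigenvector calculation.
priorities :: [[Double]] -> [Double]
priorities matrix = map (/ total) geoMeans
  where
    geoMeans = [ product row ** (1 / fromIntegral (length row)) | row <- matrix ]
    total    = sum geoMeans

main :: IO ()
main = do
  -- "Alternative A is twice as preferable as B": a perfectly consistent
  -- 2x2 comparison matrix, so the weights come out as 2/3 and 1/3.
  let m = [ [1.0, 2.0]
          , [0.5, 1.0] ]
  print (priorities m)
```

For a consistent matrix like the one above, the geometric-mean weights coincide with the exact eigenvector solution; for inconsistent human judgments they are only an approximation, which is the usual trade-off in practice.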
It is known that the committee holds the de jure power, while GHC holds the de facto power.
Currently, there are >100 language extensions.
I have a rule of thumb:
when (size (elements :: Set Extension) > 100) $ pickMostUsefulSubsetOf $ setOfSets $ classify elements
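That rule of thumb, as a runnable sketch. Everything here is hypothetical: the `usefulness` scores, the threshold, and `pickMostUseful` are placeholders for whatever classification a committee would actually agree on.

```haskell
import qualified Data.Set as Set
import           Data.Set (Set)

type Extension = String

-- Hypothetical usefulness score; a real version would come from usage
-- data (Hackage statistics, community surveys), not a hard-coded table.
usefulness :: Extension -> Int
usefulness e
  | e `elem` ["LambdaCase", "OverloadedStrings", "GADTs"] = 9
  | otherwise                                             = 1

-- Once the zoo of extensions grows past a cap, pick the most useful subset;
-- below the cap, leave the set alone.
pickMostUseful :: Int -> Set Extension -> Set Extension
pickMostUseful cap exts
  | Set.size exts > cap = Set.filter (\e -> usefulness e >= 5) exts
  | otherwise           = exts

main :: IO ()
main = print (Set.toList (pickMostUseful 3 (Set.fromList
         ["CPP", "GADTs", "LambdaCase", "OverloadedStrings", "UnboxedTuples"])))
```

The point of the sketch is only that "pick a recommended subset" is a small, mechanical operation once a scoring function exists; the hard part, as the rest of this thread discusses, is agreeing on the scoring.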
The path of least resistance is to at least recommend a set of them. And upstream seems to constantly resist chasing the diagram arrows along this natural path.
Take the safest ones from the top of https://mail.haskell.org/pipermail/ghc-steering-committee/2020-November/001876.html and form a HaskellYYYY recommended extension set. That is all, and everything below derives from it for free.
There was and is Haskell98, and that approach works, so why would we not go further?
So people can start reducing their extension lists, and some may need to learn one or two new pieces of useful language semantics they have been meaning to learn for the last 10 years.
Forming a yearly, casually recommended extension subset helps resolve the dichotomy between the de jure and de facto powers, and binds together the de jure management process and the de facto working GHC implementation.
De jure is a top-down power. For example, no one wants, 10-20 years into the Foundation, Google taking over and forcing self-interested decisions about "the new W3C-approved definition of the Haskell language" onto developers. Because of that, processes need to be established beforehand to lift the informal extension work done in GHC, step by step, into an increasingly official standard. That is the counterbalancing bottom-up power of the people.
1.1. Someone has an idea.
1.2. That someone discusses it.
1.3. They develop a language extension.
1.4. It goes through the patch merge process to get merged into GHC.
1.5. News and documentation on it get published in the GHC docs.
1.6. People start using it.
1.7. People start discussing it.
1.8. Discussion starts on whether it fits the current recommended set of language extensions.
1.9. Once decided, the responsible body can include the extension in the de facto recommended subset.
1.10. If the extension does well in the de facto recommended subset, it gets lifted into a standard.
That lifting is great for opt-in, seamless improvements; they hoist nicely through the structure.
If a breaking change to language semantics really must be imposed on everybody, it can only be done in a top-down, forceful way through the de jure Report standard.
If the body managing the de jure "de facto recommended subset" wants to remain polymorphic over the compiler, it can always phrase the recommendation as: it is currently unofficially recommended to have the language behavior that this subset of GHC extensions provides.
Everyone would know which standard set of extensions is recommended for which year. That is easier for people to remember than the diff between official Report documents.
And no one needs to write a big, officially stamped Report document and rewrite the compiler to a new standard; extensions are a completely casual, unofficial way for programmers to work on the language. The opt-in set of language extensions already exists; the only thing needed is to form a recommended opt-in subset of that set.
And all who opt in to the HaskellYYYY recommended extension set (almost everybody) would be focused on optimizing that set of extensions.
It creates a track of history, a braid of extensions through time, that anybody can work on and follow. All the Reports are hard to enumerate and their contents hard to list on one page, while the chosen subsets can be formed yearly, and an enumeration of those sets would fit on one page.
After the set is proclaimed, an official document can still always be produced to make the language changes and standardization more official.
The ongoing discussions of the current extension subset feed into making next year's subset include the discussed changes. And when a Report is eventually formed, its core would be the year's extension subset plus the conclusions of the discussions on it.
It is known that it is hard "to write a scientific paper", especially a Report, but it is much easier to just "start doing something and writing something" (discussing extensions and forming "this year's list").
Forming an extension subset is a much easier task to bootstrap the process from. The first time would still be harder because of the initial choice, but once the discussion gets rolling, it becomes the kind of year-to-year "open source distro news" everyone is happy to hear about and participate in; the report informally writes itself, the data hashes itself out, and then one takes the readily available data and writes the paper.
It is harder to actively participate in Linux distributions (there is a lot of meta-discussion plus a random assortment of technologies and languages); with a programming language, everybody knows the language under discussion, so the yearly "distro" process would polish itself through the sheer number of people able to participate. In a big lazy list of people, there would always be someone who is active and currently doing a role's job.
If the Report is done first, the lion's share of it is discussing the extension subset anyway, and there is no escaping merging the language extensions into the base language when forming a standard. So writing the Report before forming a subset of extensions is a backward process.
But when there is an informal subset of extensions recommended year by year, one can grab the current set and the body of discussion around it, form the Report, and proclaim a standard officially by merging the extensions into the base language.
All these things you already know; I just wanted to restate them.
And then I would not need to ship my own extension set to myself to reduce the extension list in the project we have.
And by the way, Haskell98 was published in 1999, so Haskell2020 still typechecks.
So we would be starting well in advance by forming an extension subset now.
Are you familiar with the GHC2021 proposal?
Well, it is the right way. But, as said, GHC's internal kitchen is not enough. There are the Foundation, the Committee, and GHC, and they love to pretend they are separate and unrelated. The "GHC work & recommended extensions -> Haskell Standard" process eventually needs to be formally established, at least vaguely. An absolute top-down form of power is tyranny; an absolute bottom-up demos power is anarchy. The better forms of process lie in between, which means that not only the top-down power but also the bottom-up power should be formalized. That formalization preserves a lot; for example, it keeps the situation from falling entirely one way or the other.
There are several reasons that Haskell standardization has stalled, in my personal opinion:
- Standardization is hard. I know of few extensions beyond Haskell2010 that we understand well enough to standardize. -XNumericUnderscores could be standardized, and maybe a few more lexical tweaks. Even -XMultiParamTypeClasses is difficult: there are corner cases where what type to infer is unclear. When we started thinking about standardization a few years ago (I was part of the committee that failed to deliver), we realized that there were walls up in every direction we looked. Some of these walls are scalable, but at considerable cost, to really hammer out all the details of corner cases of extensions.
- I don't know of a canonical place I could point to that explains why standardization is worth the amount of time it would require. If time is money, then anything more than a small update to H2010 would cost over $100K. Why is it worth spending this? I agree that standardization is good hygiene, etc. (I would indeed love to see this done), but that, by itself, isn't enough to motivate the expense.
So maybe the first step to getting this done is articulating why standardization is worth spending more than $100K on.
Good answer.
Why might it be worth spending this? When formalization and some top-down breaking changes become worth the hassle. In light of the GHC2021 proposal, it seems worth waiting at least several years, until the current language consensus stabilizes, so newcomers have time to catch up to the constantly moving train. Thank you for your work and your response, @goldfirere; I am acquainted with some of your work, in absentia. Committee work may look to have been in vain, but no such work is in vain: people who have participated in the process transfer knowledge, as you are doing here, others build directly on your work and experience, and eventually the question gets solved. The real cost estimate is already worth a lot in itself.
Any updates, please... How can I keep track of the progress made?
Thanks!
There is no progress, and no plans to make progress, with regard to a Haskell Report 2020. The community has seemingly accepted the fact that Haskell = GHC, and the GHC2021 proposal has been merged and released in GHC 9.2.
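For anyone landing here later: the GHC2021 set is opted into like any language pragma. A minimal module, assuming GHC >= 9.2 (the module body is just a placeholder exercising two extensions from the set):

```haskell
{-# LANGUAGE GHC2021 #-}

-- NumericUnderscores and ImportQualifiedPost are both part of GHC2021,
-- so neither needs to be named explicitly under this pragma.
module Main where

import Data.List qualified as List

million :: Int
million = 1_000_000

main :: IO ()
main = print (List.foldl' (+) 0 [million, million])
```

Project-wide, the same can be set in a .cabal file with `default-language: GHC2021`, which is exactly the "recommended opt-in subset" mechanism discussed above, just blessed by GHC rather than by a Report.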
I'd love to see this be successful, but it seems it has stalled.
If this process has not stalled, it would be lovely to get an update from the team. Thanks for all the hard work!