tc39 / tg5

TC39-TG5: Experiments in programming language standardization. https://ecma-international.org/task-groups/tc39-tg5/

Proposal to study by TG5 #4

Open ecmageneva opened 2 months ago

ecmageneva commented 2 months ago

Study of Interplay between long-term Ecma- and short-term_R1.pdf

ecmageneva commented 2 months ago

This is about a proposal for TG5: the interplay between long-term standardization, which is typical for Ecma as an SDO, and the current TC39 standardization. The experiments and "next steps" proposed are at the moment for TC39 JavaScript (ECMAScript standards work) only, but I guess the approach can also be extended to other language standardization.

bebraw commented 2 months ago

Out of curiosity, I asked about this in my developer community. I've included main points below:

ecmageneva commented 2 months ago

Hi Juho, before studying and reflecting on your points, please allow me to give a bit of background on my starting presentation.

1) We started to discuss the idea of a new TC in Ecma (on computer language standardization) last July in Bergen. The idea was - if I recall it correctly - that computer language design and standardization should go hand in hand: if you have designed and implemented a new language, it should also be formally standardized. I myself had a failure 40 years ago at the TU in Graz in Austria, when we forked an Ecma Basic standard and failed to standardize the results with Ecma (due to ignorance), and the project went south after a few years. It was the Austrian Bildschirmtext system (Vtx - with the Mupid intelligent videotex decoder); that part was actually a forerunner of what became JavaScript 15 years later. Anyway, my feeling was that although Ecma is still very active in computer language standardization, we could have much more, and actually only a fraction of languages reach the stage of formal standardization. So a TC for the caretaking of language standardization in Ecma would make sense.

2) Now we ended up as a TG under TC39 (of course a separate TC would still be possible, and maybe it will happen at some stage), which means we need subjects that have links to JavaScript and the TC39 topics. One such topic, which I also address in my slides, is that the way we do ECMAScript standardization in Ecma is based on an interplay between what I call "long-term" Ecma standardization (which includes general Ecma tools, policies and practices) and "short-term" TC39 standardization (which includes TC39's own tools, policies and practices), and how the two interplay with each other. We want to write a paper that also includes that... Not sure where it stands right now...

3) But in my slides above I address a more general issue, which is relevant for other Ecma TCs as well: the way standardization is done has been gradually changing since about the beginning and middle of the 1990s (away from paper- and slow-mail-based standardization with discussions in physical meetings, towards digital multimedia, email and conferencing-based continuous working). This of course has a serious impact on standardization tools, policies and practices. At the moment we are still mixing the old and the new ways of working in many respects, so it is time for this transition process to slowly finish.

4) The last part of the slides, like the "Human Heritage Standards", is related to that, but it is a different idea and project I have been playing with for some years. The idea is that formal standards are in practice usually short-lived (even successful ones, like 3G mobile networks), but a very few - especially in multimedia data/documents/still and motion pictures, and maybe also JavaScript (?? - need an answer for that) - will live "forever". The billions and billions of JPEG pictures being taken day by day theoretically never go away, except that in my opinion we have no real solution for their long-term archival and storage at the moment. As examples I would give the following: Unicode, HTML, JPEG, MPEG-4, MP3, PDF, ODF, OOXML, etc. - outputs that are simply too numerous for conversion into new formats. Of course long-term storage and archival is also a big part of that, not only the higher layers... So a precise definition, selection and maintenance of those standards would be essential. E.g. by whom? UNESCO, the EU, etc.?
5) So, these weird projects and ideas would fit into TG5 work, I just do not know yet...

bebraw commented 2 months ago

Hi Istvan,

I've included further responses below as I have more bandwidth for articulate discussion now.

  1. We started to discuss the idea of a new TC in Ecma (on computer language standardization) last July in Bergen. The idea was - if I recall it correctly - that computer language design and standardization should go hand in hand: if you have designed and implemented a new language, it should also be formally standardized. I myself had a failure 40 years ago at the TU in Graz in Austria, when we forked an Ecma Basic standard and failed to standardize the results with Ecma (due to ignorance), and the project went south after a few years. It was the Austrian Bildschirmtext system (Vtx - with the Mupid intelligent videotex decoder); that part was actually a forerunner of what became JavaScript 15 years later. Anyway, my feeling was that although Ecma is still very active in computer language standardization, we could have much more, and actually only a fraction of languages reach the stage of formal standardization. So a TC for the caretaking of language standardization in Ecma would make sense.

I wrote a paper related to your observations last year (the paper looks into ECMAScript specifically), and one of the things I noted was that there are multiple success models for programming languages. I believe this could be a good topic for more formal study, as there are good questions such as why PHP is so dominant while even missing an official language definition (you can see this in how tooling around it has been implemented at the technical level). TypeScript is another good example of a language running without standardization (although it builds on top of ECMAScript, and there might be pressure to standardize at least a subset).

I can definitely see value in capturing best (or good) practices related to programming language standardization, as the past has shown it doesn't always work out and some models seem to work better than others. Now we have the added challenges of online collaboration, as you pointed out in your presentation.

  2. Now we ended up as a TG under TC39 (of course a separate TC would still be possible, and maybe it will happen at some stage), which means we need subjects that have links to JavaScript and the TC39 topics. One such topic, which I also address in my slides, is that the way we do ECMAScript standardization in Ecma is based on an interplay between what I call "long-term" Ecma standardization (which includes general Ecma tools, policies and practices) and "short-term" TC39 standardization (which includes TC39's own tools, policies and practices), and how the two interplay with each other. We want to write a paper that also includes that... Not sure where it stands right now...

From my perspective, I can likely be useful on the programming language side of things. I believe that for the larger task within Ecma we would have to find some people who understand the big picture as you do. In terms of formulating a research plan, starting from a corner such as figuring out success models for programming language standardization would likely be a good starting point.

  3. But in my slides above I address a more general issue, which is relevant for other Ecma TCs as well: the way standardization is done has been gradually changing since about the beginning and middle of the 1990s (away from paper- and slow-mail-based standardization with discussions in physical meetings, towards digital multimedia, email and conferencing-based continuous working). This of course has a serious impact on standardization tools, policies and practices. At the moment we are still mixing the old and the new ways of working in many respects, so it is time for this transition process to slowly finish.

That's a great and valid point and it ties to the above - i.e., figuring out the current best practices. I guess another way to frame it is to ask what the process and its outputs should look like. There's always friction in these things, and that's even good, as standardizing too fast isn't good, especially for something you cannot change fast. For something like ECMAScript, it's even dangerous to standardize the wrong thing, as it will be close to impossible to remove (deprecation is an option, but the feature will remain there to implement).

  4. The last part of the slides, like the "Human Heritage Standards", is related to that, but it is a different idea and project I have been playing with for some years. The idea is that formal standards are in practice usually short-lived (even successful ones, like 3G mobile networks), but a very few - especially in multimedia data/documents/still and motion pictures, and maybe also JavaScript (?? - need an answer for that) - will live "forever". The billions and billions of JPEG pictures being taken day by day theoretically never go away, except that in my opinion we have no real solution for their long-term archival and storage at the moment. As examples I would give the following: Unicode, HTML, JPEG, MPEG-4, MP3, PDF, ODF, OOXML, etc. - outputs that are simply too numerous for conversion into new formats. Of course long-term storage and archival is also a big part of that, not only the higher layers... So a precise definition, selection and maintenance of those standards would be essential. E.g. by whom? UNESCO, the EU, etc.?

I think there's an interesting problem hidden here - the genealogy of standards. Technically it all starts from standards for measures (formal definitions and even physical ones), on top of which the rest is built. I tried connecting you with Ken Krechmer, as he has written nice papers related to the topic, but your Ecma email bounced.

Archival itself is an interesting problem, as it comes with the questions of what to store, how, and who is responsible for storing it. This is where the initial "weird" list comes in, as it likely has material for a paper in this direction, in case you want to speculate on the topic.


In case you want to try to formulate a research plan, feel free to get in touch. I'm currently finishing up the last pieces of my PhD (not much to go), but I imagine authoring more in the standardization space wouldn't be a bad thing if we can figure out good places to publish. That's a bit of a challenge for standardization papers in my experience, as the field is quite niche, but I imagine you have a better view of where to try at least. In case you are curious, you can find some of my research output here. I can also try to push simple problems to my students. Especially the BSc level seems to work well for basic surveys and literature research, while MSc students can do more advanced work.

I have some good contacts in the standardization space in case you need support from the theoretical side of things. I approach things from a practitioner angle (inside out), so my view is quite biased, but with good collaborators some decent papers could come out of this if we figure out a plan (and perhaps some funding 😄 so it's not from our own pockets).

ljharb commented 2 months ago

(fwiw php continues to have the best language docs ever, afaik, which is likely far more important to end users than a language definition)

bebraw commented 2 months ago

@ljharb That's a great point! PHP is definitely doing a lot of things right. 👍

ecmageneva commented 2 months ago

Juho, on your following point: "I believe this could be a good topic for more formal study, as there are good questions such as why PHP is so dominant while even missing an official language definition (you can see this in how tooling around it has been implemented at the technical level). TypeScript is another good example of a language running without standardization (although it builds on top of ECMAScript, and there might be pressure to standardize at least a subset)." In my opinion every successful language development effort produces what I call a "standard" (at minimum a BNF-type notation), but that standard comes in different types: "no standard", "de facto standard", "formal standard". (Actually I am working on a definition of a new category, the "Human Heritage Standard" mentioned above, but I do not want to complicate the subject here.) Out of curiosity - with about 30 minutes of work - I have put together, with the help of ChatGPT, the following Q&A, which supports what I could imagine Ecma could do about language standardization (not necessarily here in TG5; I just mention it here because that was how the discussion started in Bergen a year ago).

ChatGPT_Replay_on_questions to Programming_Languages.pdf

1) To carry out studies and document the results (in Ecma Reports and external publications) on programming language development in relation to standardization (with all the possible categories mentioned above). (By the way, I have read your article about JavaScript (ECMAScript) standardization - a very good paper, and a good example for this point. I encourage you to write a "Part 2" covering roughly the last decade up to 2024; this is missing from the AWB study and from yours. Understandably...)
2) To assist programming language developers (if they so require) in finding the optimal way towards standardization (or no standardization).
3) To serve as an "incubator" within Ecma for launching new Ecma TCs/TGs for programming language standardization.

As I said, I am not sure whether this currently fits into TC39 TG5, but I just want to capture the idea.

ecmageneva commented 2 months ago

Juho: This is a continuation of the comments. On 2) above, I am now waiting for Mikhail; the ball is in his court... On 3), that will be my next contribution, I think. I will go through some of the relevant Ecma TCs regarding the status of the transition process, so that will be another set of slides... On 4), I have an old presentation from 2018 that I gave in Bern to the "national archival community"; I will dig that out and see if I should distribute it here. Regarding Ken Krechmer, I know him very well - actually he is a good friend of mine; we share many old "standards war" memories from the ITU in the 1990s. I know his work on the criteria for "open standardization" better (he has a very good definition of that), but not the genealogy of standards. I guess this must be based on his later work at the University of Colorado in Boulder, where we did not have such close working contacts.

bebraw commented 2 months ago

In my opinion every successful language development effort produces what I call a "standard" (at minimum a BNF-type notation), but that standard comes in different types: "no standard", "de facto standard", "formal standard".

I completely agree with this point and I believe it's supported by the standards literature. There are some transitions (de facto -> formal) that may or may not occur.
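
To make the "at minimum a BNF-type notation" framing concrete, here is a purely illustrative sketch: a toy grammar written out in BNF style as comments, plus a tiny recursive-descent recognizer for it. The grammar and names are invented for this example and not taken from any real language project.

```js
// Toy grammar, written in BNF style purely for illustration:
//   expr   ::= term   (("+" | "-") term)*
//   term   ::= factor (("*" | "/") factor)*
//   factor ::= NUMBER | "(" expr ")"
// Even an informal definition like this is already a "standard" in the
// minimal sense discussed above: it fixes what counts as a valid program.

function parseExpr(tokens) {
  let pos = 0;
  const peek = () => tokens[pos];
  const eat = (t) => (tokens[pos] === t ? (pos++, true) : false);

  function factor() {
    if (eat("(")) return expr() && eat(")");
    return /^\d+$/.test(peek() ?? "") ? (pos++, true) : false;
  }
  function term() {
    if (!factor()) return false;
    while (peek() === "*" || peek() === "/") { pos++; if (!factor()) return false; }
    return true;
  }
  function expr() {
    if (!term()) return false;
    while (peek() === "+" || peek() === "-") { pos++; if (!term()) return false; }
    return true;
  }
  return expr() && pos === tokens.length;
}

console.log(parseExpr(["1", "+", "2", "*", "(", "3", "-", "4", ")"])); // true
console.log(parseExpr(["1", "+", "+"]));                               // false
```

Whether that informal definition later moves to "de facto" or "formal" status is then a separate question, which is exactly the transition mentioned above.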

Out of curiosity - with about 30 minutes of work - I have put together, with the help of ChatGPT, the following Q&A, which supports what I could imagine Ecma could do about language standardization (not necessarily here in TG5; I just mention it here because that was how the discussion started in Bergen a year ago).

That list is an interesting one, as it ties to the genealogy of programming languages and their influence on each other. Understanding the trends brings up the interesting question of what's next. There's a related problem of how to make languages that are somehow "easy" for AI (maybe there's a link to formal methods here) and whether this even makes sense.

There was a related question at the recent Future Frontend conference that went something like "Could ECMAScript be replaced on the web with another language?", and the answer was a clear no, as it's entrenched in the web platform. Essentially you would have to replace the platform with something else entirely, as far as I understand. There were attempts like Dart, but those never took off due to the existing user base (specifically existing hardware that will likely never get updated).

  1. To carry out studies and document the results (in Ecma Reports and external publications) on programming language development in relation to standardization (with all the possible categories mentioned above).

Something like this could be a good target for next year's Euras conference (typically held around mid-June). I think it's one of those problems I could potentially hand to a BSc student to research, as it can be completed based on literature to get the ball rolling. In that case it's likely a fall thing, as that's when I get a new batch of students to instruct.

  2. To assist programming language developers (if they so require) in finding the optimal way towards standardization (or no standardization).

This is an interesting one. I know there have been experiments in using AI to help in some tasks but I believe you'll need a restricted context and strong validation for this type of work to make sense (the gotcha is that you need some level of expertise for validation).

The test suite around ECMAScript is another great example, and there's some connection to documentation, as eventually documentation has to be derived from the specification to disseminate the standard to users in an accessible way (i.e., MDN is doing tons of this for ECMAScript, and then there are more unofficial sources and media).
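
To make the test-suite point concrete: test262, the ECMAScript conformance suite, expresses each observable requirement of the specification as a small, self-describing test file. The sketch below only approximates that shape from memory (frontmatter metadata plus assertions such as `assert.sameValue` from the suite's harness); this exact file is invented for illustration, and the inline shim stands in for the real harness so the snippet runs on its own.

```js
// Minimal stand-in for the test262 assertion harness, so this runs standalone.
// In the real suite, assert.sameValue is provided by harness files instead.
const assert = Object.assign(
  (value) => { if (!value) throw new Error("assertion failed"); },
  {
    sameValue(actual, expected) {
      if (!Object.is(actual, expected)) {
        throw new Error(`expected ${String(expected)}, got ${String(actual)}`);
      }
    },
  }
);

/*---
description: >
  Array.prototype.includes examples (illustrative only; not an actual
  test262 file)
---*/
assert.sameValue([].includes(0), false);
assert.sameValue([1, 2, NaN].includes(NaN), true); // includes uses SameValueZero
console.log("ok");
```

Documentation such as MDN then restates the same observable behaviour in prose, which is the derivation chain mentioned above: specification, conformance tests, user-facing docs.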

  3. To serve as an "incubator" within Ecma for launching new Ecma TCs/TGs for programming language standardization.

I guess this works together with 1., as it's possible to look at languages specifically within Ecma to capture good practices that make sense for language standardization. As we know, it took a while for TC39 to find a productive working model, and it's still being refined (the addition of stage 2.7, etc.).

ecmageneva commented 2 months ago

Juho, there are many interesting points in your comments. Let me just take out two:

1) From your first long comment under 3):

"There's always friction in these things and that's even good as standardizing too fast isn't good especially on something you cannot change fast. For something like ECMAScript, it's even dangerous to standardize the wrong thing as it will be close to impossible to remove (deprecation is an option but the feature will remain there to implement)." The very last line from your longer comment 2): "As we know, it took a while for TC39 to find a productive working model and it's still being refined (addition of stage 2.7 etc.)."

Actually, in the TC39 GitHub "how-we-work" documents there is nothing about such issues as the one in your first remark. In addition, I may add that since very active standardization has been going on since the beginning (1996), I am not sure whether it is possible to take anything out of the standard, for compatibility reasons. So it gets longer and longer and more and more complex, which of course makes life more difficult. Removal might still make sense if a feature is, for example, blocking something else or has never been used in practice. In the ITU, in SG16 on multimedia communications systems (ITU-T H.32x), we had similar problems, where in certain cases we could demote a "normative feature" to "informative" in a new release, with an additional note: "It is intended to remove this feature, e.g. in the next release." I am not sure to what extent such things could be done in practice with JavaScript/ECMAScript.

2) This suggestion is really interesting: "Understanding the trends brings up the interesting question of what's next. There's a related problem of how to make languages that are somehow "easy" for AI (maybe there's a link to formal methods here) and whether this even makes sense."

bebraw commented 2 months ago

Actually, in the TC39 GitHub "how-we-work" documents there is nothing about such issues as the one in your first remark. In addition, I may add that since very active standardization has been going on since the beginning (1996), I am not sure whether it is possible to take anything out of the standard, for compatibility reasons. So it gets longer and longer and more and more complex, which of course makes life more difficult. Removal might still make sense if a feature is, for example, blocking something else or has never been used in practice. In the ITU, in SG16 on multimedia communications systems (ITU-T H.32x), we had similar problems, where in certain cases we could demote a "normative feature" to "informative" in a new release, with an additional note: "It is intended to remove this feature, e.g. in the next release." I am not sure to what extent such things could be done in practice with JavaScript/ECMAScript.

That's a valid point. From my point of view, the current process seems effective, as plenty of work gets done, even quite fast, although bigger changes can take years. Given that it's not possible to remove anything (only deprecate), we're in a position where the standard can only grow.

There are good recent examples that show how painful breaking changes can be. Python (2 vs. 3) comes to mind as a recent example, as it was a truly difficult process for the language to get the ecosystem to follow.

These types of models are a good research topic on their own, as it feels like there are better and worse ways of handling them. I have a feeling that part of it can, and even should, be solved through tooling, but there are also social aspects to consider, as there have to be strong benefits to encourage moving to a new major version.

I don't know if it will happen one day, but I imagine it might even be useful to define an "ECMAScript Lite" with a limited set of features that is easier to implement and that drops compatibility with features considered outdated and obsolete. It goes somewhat beyond this discussion but simultaneously follows current trends, as we see these types of moves at the ecosystem level: we're moving, for example, from CommonJS to ESM at the module level, and in newer approaches (Deno, JSR) the latter is already used to a great extent. Especially JSR is an interesting example, as it approaches package management from a new perspective that allows rethinking technical practices.
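
For readers less familiar with that module-level shift, a minimal sketch of the difference (file names are arbitrary examples):

```js
// math.mjs — an ES module using the standardized import/export syntax.
// The CommonJS equivalent would be roughly:
//   // math.cjs
//   exports.add = (a, b) => a + b;
//   // consumer.cjs
//   const { add } = require("./math.cjs");
export function add(a, b) {
  return a + b;
}

// A consumer (a separate file, e.g. consumer.mjs) would then write:
//   import { add } from "./math.mjs";
//   console.log(add(2, 3)); // 5
// Static import/export is analysable before execution, which is part of why
// newer toolchains (Deno, JSR, modern bundlers) lean on ESM.
```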

The problem with "ECMAScript Lite" might be that it could be difficult to define, as I imagine developers prefer to use different subsets of the language. I recall there were attempts, like a security-enhanced ECMAScript, in the past, but I don't think they went anywhere. The current way of solving this seems to be sandboxing with a couple of well-documented limitations, which is an acceptable way to solve it in user space.

ecmageneva commented 1 month ago

Certainly it is theoretically possible, but I am not sure how to measure and decide the point (and the features) at which such a new, reduced profile of ECMAScript would be required. At the moment we do not have anything like that, do we?

bebraw commented 1 month ago

Certainly it is theoretically possible, but I am not sure how to measure and decide the point (and the features) at which such a new, reduced profile of ECMAScript would be required. At the moment we do not have anything like that, do we?

The closest I know is https://developers.cloudflare.com/workers/runtime-apis/web-standards/ .

I imagine other providers might have similar restrictions.
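
One pragmatic way to see how far a restricted runtime diverges from a full browser or Node.js environment is plain feature detection. The sketch below is only an illustration: the list of globals probed is an arbitrary sample chosen for this example, not any provider's actual API surface.

```js
// Probe which standard globals a given JavaScript runtime actually exposes.
// The names below are only a sample; a real survey would use a longer list.
const candidates = [
  "fetch", "URL", "TextEncoder", "WebSocket", "crypto",
  "setTimeout", "localStorage", "document", "process", "Deno",
];

const report = Object.fromEntries(
  candidates.map((name) => [name, typeof globalThis[name] !== "undefined"])
);

console.log(report);
// Running this in Node, Deno, a browser and an edge runtime gives different
// pictures of the "effective subset" each environment implements.
```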

mikbar-uib commented 1 month ago

A relevant paper on governance of PLs: https://dl.acm.org/doi/10.1145/3357766.3359533

ecmageneva commented 1 month ago

I have taken the following summary from the above article's executive summary: "General Programming Languages (GPLs) continuously evolve to adapt to the ever changing technology landscape. The evolution is rooted on technical aspects but it is ultimately decided by the group of people governing the language and working together to solve, vote and approve the new language extensions and modifications. As in any community, governance rules are used to manage the community, help to prioritize their tasks and come to a decision. Typically, these rules specify the decision-making mechanism used in the project, thus contributing to its long-term sustainability by clarifying how core language developers (external contributors and even end-users of the language) can work together. Despite their importance, this core topic has been largely ignored in the study of GPLs. In this paper we study eight well-known GPLs and analyze how they govern their evolution. We believe this study helps to clarify the different approaches GPLs use in this regard. These governance models, depicted as a feature model, can then be reused and mixed by developers of new languages to define their own governance."

I have the feeling this is in general also true for the Ecma-standardized languages. But whether it is really true could be the subject of further studies - I mean for all our languages (active in standardization or closed). For example, in TC39 I have observed that for the individual single projects we have good governance (stages 1-4), but I am not sure how true this is for the general long-term evolution of the language itself - as a whole - which has been very active since the very beginning (1996). Maybe I am wrong... but I would be glad to learn whether we have any such governance.

ecmageneva commented 1 month ago

Study of Interplay between long-term Ecma- and short-term_R1.pdf

Actually, I worked a bit further on my first comment from May 29 above and came to the following presentation. Status_of_multimedia_input-output_documents_and_AI_tools_in_ECMA.pdf

I realize that the presentation addresses Ecma as a whole (the ExeCom, the GA) rather than TG5 alone... but it would certainly also have an impact on TG5 work. E.g. Mikhail made a presentation to the GA on June 27 where he suggested, as a TG5 project, a tool for analysing, monitoring and comparing programming language development processes, with the TC39 "how we work" process as a basis. That process of course depends on other Ecma policies (like the Ecma Bylaws and Ecma Rules), and what I am proposing would extend especially the Ecma Rules with a new type of input/output document format which is multimedia (MP4). So that is the reason for sharing the new slides here, for information and comments if you care...

bebraw commented 1 month ago

@ecmageneva It looks like Zoom supports live transcription these days, so it might be interesting to try it out. I imagine giving the transcription to something like GPT might be enough for a decent summary. It's probably up to Mikhail to make sure these kinds of things can happen.

I guess the next level would be to start connecting these things using bots (direct commits to GitHub etc.) while having some human driven validation passes in between to ensure quality.

ecmageneva commented 1 month ago

Juho, correct - practically all conferencing systems support both transcription and conference video captioning, but everything is in the hands of the conference host (e.g., I guess, the university in Finland for the June meeting). Currently in TC39 meetings the hosts use their favorite conferencing system (so we see a couple of different systems in practice) and also decide what they permit to the participants. Of course every participant can run their own screen-capture program and use their favorite transcription tools (and some actually do, including myself, since I like to get a feeling for where the technology currently stands). My experience is that it keeps getting better as we speak. On that basis I would say that once you have the multimedia capture of a meeting, the after-meeting AI processing (transcription, summary and conclusion generation) is fast and easy (within 2 days after a TC39 meeting you can have good results). Of course these are tools, so the human who uses them must stay in final control; the control must not rest with the AI tools. How these things could be connected automatically to GitHub I have no idea; my knowledge of GitHub is rather rudimentary... :-(

bebraw commented 1 month ago

@ecmageneva I found https://github.com/tokuhirom/meetnote2 (macOS only), which handles meeting minutes and summaries. There's another solution at https://github.com/zoom/meetingbot-recall-sample and I'm sure more exist, as it seems to be a niche of its own.

I'm not even sure if a bot is needed initially as I imagine the meeting host could handle running a tool and then create a pull request to review with the summary and other info.
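
As a hedged sketch of that host-driven, human-in-the-loop step (the file names and the trivial keyword filter are placeholders; a real setup would plug in an actual transcription/summarisation tool and then open the pull request manually or with whatever tooling the repo already uses):

```js
// summarize-transcript.mjs — toy end-of-meeting step, run by the host.
// Reads a plain-text transcript and emits a Markdown draft that a human
// reviews before committing it (e.g. as a pull request to the notes repo).
import { readFileSync, writeFileSync } from "node:fs";

const transcript = readFileSync("transcript.txt", "utf8"); // placeholder input

// Placeholder "summary": keep lines that look like decisions or action items.
// A real pipeline would call an actual summarisation model here instead.
const highlights = transcript
  .split("\n")
  .filter((line) => /\b(decision|agreed|action item|next step)\b/i.test(line));

const draft = [
  "# Meeting summary (draft - requires human review)",
  "",
  ...highlights.map((line) => `- ${line.trim()}`),
  "",
  "_Generated automatically; please correct before merging._",
].join("\n");

writeFileSync("SUMMARY-draft.md", draft);
console.log(`Wrote SUMMARY-draft.md with ${highlights.length} highlight(s).`);
```

The point of the sketch is only the workflow shape: automated draft first, human validation pass before anything lands in the repository.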

ecmageneva commented 1 month ago

Many thanks, Juho. Yes, I am 100% sure that many more similar solutions exist, also on GitHub, for Zoom and/or other conferencing platforms. In the last 1-2 years, since this AI boom started, one cannot really keep up with them. If you look at what the paid version of Zoom can do (and the paying host can switch these features on or off), we already have, in my opinion, everything we need: storing the multimedia stream locally or in the cloud in different ways (even with separated audio streams for each participant, which is important because e.g. the AI tool for the summary has to know who is speaking). In Zoom, for example - that is the claim - you get the recorded video right after the session is closed, and the summary and conclusions with very little delay after closing. There is even a function that, if you join the conference late, tells you whether your name was already mentioned and gives you a summary of what has happened in the meeting so far. New similar features are being added as we speak, that is my feeling. Of course, for testing, free tools like the GitHub projects you linked are more interesting for me personally. Also, independently of those solutions, my impression of how to solve the problem is identical to the way those GitHub projects solved it (well, speaker identification is not there yet, but probably coming soon). So the key is always to get the multimedia screen capture, separate the streams (audio, video, chat, etc.) and apply AI tools to them. This is definitely the way to do it; you get de facto immediate results. Currently TC39 is doing the same with heroic effort, and e.g. for the Helsinki meeting in early June 2024 the results in the meeting notes still could not be shared. Another practical difficulty is that TC39 hosts use different systems (Google Meet, Zoom, Microsoft Teams, Webex, etc.); they are different systems, but the paid versions' functionality is much the same, and which features are switched on or off is up to the host's internal policy. In my opinion Ecma and TC39 should issue a guideline on what features they expect when conferencing systems are used.
Well, what is the link of all of this with TG5? Certainly that the TC39 "How we work" document on the TC39 GitHub - which is a good and useful document, but of course can and should be extended according to need - does not say anything about this. The same is true for the complementing "Ecma Rules" document: e.g. multimedia is not considered a formal document. Also, according to the Ecma Rules a technical proposal can only be advanced in a meeting if the proposal is published three weeks before the meeting. Now what about a multimedia input? Is it enough if only some slides are submitted before the meeting (without a formal TC39 document number) and the explanatory audio comes in the meeting, with no recording, only real-time capturing, etc.? This is currently just not covered in those policy documents - which it needs to be if we want to use them as guiding documents both for TC39 and for other programming languages. Just my 2 cents on this...

kenkrechmer commented 1 month ago

In response to the more general standardization issues discussed above: isology (see isology.com) is the study of references, standards and standardization. Currently isology is a developing theory, not a completed one; even so, it has value. Adaptability (https://www.nae.edu/248425/The-Role-of-Technical-Standards-in-Enabling-the-Future) identifies a mechanism that negotiates between different compatibility forms. ITU V.8 (which I initiated) in modems and T.30 in facsimile are successful examples of adaptability protocols. The Internet Session Initiation Protocol (SIP) is a protocol that includes useful adaptability, but it lacks independence from the protocol it negotiates.

An adaptability protocol is a good way (IMHO) to address negotiating the large number of compression protocols that have been and continue to be created. Perhaps it could also be applied to different programming languages. It can also negotiate APIs (see the NAE reference above). If any of this is of interest, I can point out papers which define adaptability in more useful, but still high-level, detail.

In isology, standardization is an evolutionary process, where the term evolution means: try everything and continue what works. Then successful standardization is not always what a standardization committee (public or private) publishes but what a market (open or controlled) continues. It may be useful to achieve some form of SDO standardization, but it is not required.

Given this evolutionary view, asking why a specific standardization project was successful or not is not likely (IMHO) to lead to instructive answers.

ecmageneva commented 4 weeks ago

I am not sure whether the ITU-T V.8 or facsimile Gr3 (T.30) type of interactive communication behaviour between two intelligent endpoints is so general that it can also be applied to all types of programming languages (like Fortran, C, JavaScript, etc., and within one language, e.g. between two different versions such as ECMAScript ES2016 and ES2024). If my memory is correct, V.8 is for communication between two modems on POTS, and T.30 is for communication between two Gr.3 facsimile devices, also over POTS. In interactive communication like T.30 the two endpoints indeed first "introduce" themselves, declaring which (standardized) functions they can satisfy, and then they select a standardized common mode. Often a concrete communication "training" at the beginning also helps to identify that. But whether that concept can be applied to the many different programming languages as well, I just do not know.

kenkrechmer commented 4 weeks ago

Your view of T.30 sounds right to me. The isology conception of adaptability protocols is not based upon lower-layer functions; adaptability may be a session layer (layer 5 of the 7-layer OSI model) function. See https://www.isology.com/pdf/fundtec.pdf for a paper published in IEEE Communications Magazine in 2000 (under the section "Etiquettes") which details the functions of adaptability (which I termed etiquettes in this early paper).

ecmageneva commented 3 weeks ago

Thanks Ken,

Your 2000 IEEE Communications Magazine paper is indeed interesting, with its categorization of standards into "Unit and Reference Standards", "Similarity Standards", "Compatibility Standards" and "Etiquettes".

What might be interesting is if someone who knows the many existing and past programming languages well (like Fortran, Cobol, C, Java, JavaScript, etc.) made a big table applying that concept to the different components of a given language standard across the four categories. My feeling is that, depending on the concrete language standard, one would get a rather mixed picture. Maybe that would allow drawing some interesting conclusions for programming language standards. But I am not an expert on that... so I leave this idea to somebody else...

bebraw commented 2 weeks ago

In terms of programming languages, I would also consider the generational aspect (1-5, arguably a 6th) as that's the classic way to categorize them, and I imagine there might be some level of mapping to the types of standards. The genealogy of programming languages is interesting in itself, as you can see that languages tend to be inspired by each other and good features tend to find their way into others. Languages such as Haskell feel like incubators in this way (list comprehensions are a good example).

I think if you look closely, you'll find different levels in programming languages and their principles. That's where you'll find standards (or de facto standards) and can work out some influence chains. Occasionally what tends to happen is that a language gets forked (for example CoffeeScript), and the main features get merged back, making the fork obsolete. There's also a form of specialization (OCaml -> ReScript/Reason, Erlang -> Elixir); that's one way to do it, as it gives ecosystem benefits and avoids starting from scratch.
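
One small, illustrative instance of that "features get merged back" dynamic (the CoffeeScript line here is quoted from memory and only meant as a sketch):

```js
// CoffeeScript popularized terse function syntax, e.g. (from memory):
//   square = (x) -> x * x
// ES2015 then absorbed the idea as arrow functions, which removed much of
// the reason to reach for a compile-to-JS dialect for this particular feature:
const square = (x) => x * x;
console.log(square(4)); // 16
```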