reTHINK-project / core-framework

The main goal of WP3 is to provide the reTHINK core framework, comprising the runtime environment where Hyperties are executed and the messaging nodes used to support message exchange between Hyperties.

Core Framework standardisation contributions #168

Closed: pchainho closed this issue 7 years ago

pchainho commented 8 years ago

This issue is created to discuss what should be standardised from the core-framework perspective, and the standardisation bodies and associated groups to which we should contribute. Separate issues will be created to work on each identified contribution. Below is an attempt to identify the types of standardisation topics (to be completed):

Hyperty Runtime

Extension to Browser Runtime to support execution of interoperable and trustworthy Microservices/Hyperties (see the sketch after this list)

Orange: Id Module, QoS

W3C seems to be the right standardisation body

Hyperty Messaging Framework

IETF seems to be the right standardisation body

APIZEE, DT FOKUS might be interested in data schemas/models

Hyperty synchronisation mechanism

It is not so clear which standardisation body would be right; whatwg.org is an option
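To make the first topic (Hyperty Runtime) more concrete, below is a minimal sketch of what a browser Runtime API extension for deploying Hyperties might look like. Every interface, scheme and name here is a hypothetical illustration, not the reTHINK specification:

```typescript
// Hypothetical sketch of a browser Runtime API extension for Hyperties.
// All names and URL schemes are illustrative assumptions.

interface HypertyDescriptor {
  url: string;           // catalogue URL of the Hyperty code (hypothetical scheme)
  dataSchemas: string[]; // data schemas the Hyperty reports/observes
}

interface Hyperty {
  // Hyperties exchange messages through the runtime's message bus
  // instead of opening network connections themselves.
  postMessage(to: string, body: unknown): void;
  onMessage(handler: (from: string, body: unknown) => void): void;
}

interface HypertyRuntime {
  // Fetch the Hyperty from a catalogue, check its origin and policies,
  // and deploy it into an isolated sandbox.
  loadHyperty(descriptor: HypertyDescriptor): Promise<Hyperty>;
}

// Usage, assuming the extended browser exposed such a runtime object:
declare const runtime: HypertyRuntime;

async function demo(): Promise<void> {
  const connector = await runtime.loadHyperty({
    url: "hyperty-catalogue://example.com/Connector",
    dataSchemas: ["https://example.com/schemas/connection"],
  });
  connector.onMessage((from, body) => console.log("message from", from, body));
  connector.postMessage("hyperty://other.example/alice/connector", { type: "invite" });
}
```

The point of standardising only this thin layer is that Hyperties from different domains can interoperate without the browsers having to agree on each domain's internal protocols.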

pchainho commented 8 years ago

Consider using the mmark tool to edit the IETF draft.

pchainho commented 8 years ago

initial attempt

what's the problem

Currently, service interoperability requires common usage of network protocols and service semantics that have to be agreed between service providers or developers before services are deployed. Interoperability agreements, namely standards in large standardisation bodies, are difficult to achieve, requiring a lot of effort and time. Examples:

to be adapted:

why is it commercially relevant

what is the solution

emmelmann-fokus commented 8 years ago

Thanks for getting the ball rolling.

I'll jump in the discussion by asking some more questions to nail the problem statement down:

what's the problem

  • Standardisation "kills" innovation and time to market but provides interoperability
  • Proprietary solutions "kill" interoperability and promote unbalanced markets and economies (income inequality) but foster innovation and accelerate time to market

This is a well-formulated statement which describes a problem (people might agree or disagree that this is a problem, but that's another issue).

Now: What is the resulting technical problem?

  • Users have little control over their data and no freedom to select independent Identity providers

Same as above; based on this statement, we additionally have to be very precise on the formulation of the technical problem.

  • Having multiple apps providing the same set of features has a negative impact on user experience (frequent usage-context changes)
  • Current fully cloud-based solutions spend significant resources and increase latency

Why is that so? Can we describe and classify in a second sentence the technical problem that makes today's solutions perform so poorly? We need to narrow down the problem that we want to tackle.

rebecca-copeland commented 8 years ago

What has started this going-back-to-school discussion? Both standardisation methods AND open-source evolutionary methods have room in our world... To be down-to-earth, all H2020 projects are expected to contribute to standards AND to open source... no problem there! Now the question is: what and how can reTHINK contribute towards both?

The 'elephant-in-the-room' (which we avoid talking about) is the involvement of the called-party's service in setting up the session. The discovery process uses it to find the UserID domain and the 'active' endpoint, but that service is not involved in the session initiation, policy setting, or the user's GUI, because reTHINK uses a 'push' technique to drive the caller's hyperty to the called-party's endpoint. If the called-party service is to be 'consulted', either standards or common open source components must be prepared, which will allow policy negotiation while maintaining each service's own user interfaces (each party sees its own service GUI).

Is this what it is all about anyway, or are we still on Mars?

Rebecca

pchainho commented 8 years ago

@rebecca-copeland Hyperty/Protofly, in order to enable interoperability between different domains, still requires a few standards, including Runtime APIs, well-known URIs and some data schemas.

What we are developing in WP3 and WP4 are open source implementations of these potential standards. By the way, this is also something planned in the DoW.
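To illustrate what the "well-known URIs" standard could cover, here is a hedged sketch of protocol-on-the-fly bootstrapping: a runtime discovers and loads another domain's protocol stub from a standardised location, so only the discovery path and the stub interface need agreement, while the protocol inside the stub stays domain-specific. The path and descriptor format below are assumptions for illustration:

```typescript
// Hypothetical protocol-on-the-fly bootstrap via a well-known URI.

interface ProtoStub {
  connect(messageBusUrl: string): Promise<void>;
  send(message: unknown): void;
}

async function loadProtoStub(domain: string): Promise<ProtoStub> {
  // 1. Discover the stub at a standardised well-known URI (hypothetical path).
  const response = await fetch(`https://${domain}/.well-known/protostub`);
  const descriptor: { sourceCode: string } = await response.json();

  // 2. Instantiate the domain-provided stub. A real runtime would load it
  //    into an isolated sandbox (e.g. a Worker) after policy checks;
  //    `new Function` is used here only to keep the sketch short.
  const factory = new Function(`return (${descriptor.sourceCode});`)();
  return factory() as ProtoStub;
}
```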

Regarding the questions you raised about the involvement of service providers in a call session: currently, both parties' domains are involved. If you have more questions, please open a separate issue.

@emmelmann-fokus

Now: What is the resulting technical problem?

Could you give an example of what you mean by "technical problem"? Would a statement like "currently service interoperability requires common usage of network protocols and service semantics ..." be closer to a technical problem statement?

  • Having multiple apps providing the same set of features has a negative impact on user experience (frequent usage-context changes)

Why is that so? Can we describe and classify in a second sentence the technical problem that makes today's solutions perform so poorly? We need to narrow down the problem that we want to tackle.

Yes, this one is more difficult to describe; I'll try to improve it later. But it is mainly a consequence of the first problem (lack of interoperability) from the user perspective. Perhaps we could skip it at this point and describe it somewhere else.

Current fully cloud-based solutions spend significant resources and increase latency

This is the problem of Cloud Computing that edge and fog computing are addressing. We could just have a look at some references and adapt them here.

emmelmann-fokus commented 8 years ago

Hi @pchainho,

I will try to clarify things.

Now: What is the resulting technical problem?

Could you give an example of what you mean by "technical problem"? Would a statement like "currently service interoperability requires common usage of network protocols and service semantics ..." be closer to a technical problem statement?

You talk about the tension field between standards vs. vendor-specific, non-standardized interfaces (i.e., incompatibility between implementations). So basically -- from one view on this tension field -- "currently service interoperability requires common usage of network protocols and service semantics ...". To assure interoperability, people need to agree on a common interface / data schema / etc. This can be achieved via a standard but also via any other means (outside standards), as long as people agree.

This agreement is a (political) process and not a technical problem.

From the other perspective, you state that this "agreement" takes too long and hence prohibits timely market entrance & innovation. I might be provocative here, but just to illustrate and to make a strong statement: if you want to address the problem of "standards / agreement on things take too long", this means we have to find a "technical solution in which people can seamlessly exchange information / communicate without agreeing on anything to be exchanged in terms of protocols, data formats, etc.". So how do you technically solve this problem of "not having an agreed exchange format / protocol, etc."?

Also, there seem to be contradictions in the (elaborated) problem statement. On the one hand, the first part claims that "currently service interoperability requires common usage of network protocols and service semantics ...", which seems to imply that the required interoperability is a bad thing. But then we state that the "lack of interoperability from the user perspective" is an issue / a bad thing as well. So where do we stand?

Nailing that down to a technical question: how can we assure interoperability between two components without any standards (= agreement on anything)? How do you solve this technically?

Standardization is one means of achieving agreement; as @rebecca-copeland said, there might be more options in the middle of the standardization-vs-open-source-development view. And in my personal view, open source has nothing to do with standardization; not even with agreeing on anything.

Some further thought and questions:

Hyperty/Protofly, in order to enable interoperability between different domains, still requires a few standards, including Runtime APIs, well-known URIs and some data schemas.

So we still need standards. I could provocatively say: we avoid one standard by defining another one. If we still need a standard, why do we believe that throwing the old standard away is a good thing? What are the technical (or political) reasons for our belief that this is a good idea? What are the problems with the existing standards that would make it worth replacing them with a new one?

As @rebecca-copeland pointed out:

If the called-party service is to be 'consulted', either standards or common open source components must be prepared, which will allow policy negotiation while maintaining each service's own user interfaces (each party sees its own service GUI).

So in order to make reTHINK work, we heavily depend on a (global) agreement of people to use our open source components (or at least the very same for all future developments for a specific functionality). We can call this "agreement" a standard, best practice accepted by everyone, etc.; but in the end it is a (de facto) standard.

So the initial problem statement, i.e., standard (=interoperability) vs time to market, cannot be the problem we try to solve, right? So which (technical) problem are we addressing then? (Again, provocative formulation to help to clearly identify a problem statement by countering :-)

pchainho commented 8 years ago

Thanks @emmelmann-fokus

You talk about the tension field between standards vs. vendor-specific, non-standardized interfaces (i.e., incompatibility between implementations). So basically -- from one view on this tension field -- "currently service interoperability requires common usage of network protocols and service semantics ...". To assure interoperability, people need to agree on a common interface / data schema / etc. This can be achieved via a standard but also via any other means (outside standards), as long as people agree. This agreement is a (political) process and not a technical problem.

Right, but in our case the assumption is that the agreements we (and I guess the EC) want are through standards.

I might be provocative here, but just to illustrate and to make a strong statement: if you want to address the problem of "standards / agreement on things take too long", this means we have to find a "technical solution in which people can seamlessly exchange information / communicate without agreeing on anything to be exchanged in terms of protocols, data formats, etc.".

Yes, that's the ultimate challenge, but of course our solution still requires a few standards; otherwise we would not need to work on this contribution :)

So how do you technically solve this problem of "not having an agreed exchange format / protocol, etc."?

That's the answer to the third question ("what is the solution").

Also, there seem to be contradictions in the (elaborated) problem statement. On the one hand, the first part claims that "currently service interoperability requires common usage of network protocols and service semantics ...", which seems to imply that the required interoperability is a bad thing. But then we state that the "lack of interoperability from the user perspective" is an issue / a bad thing as well. So where do we stand?

I don't think the first statement can be interpreted like that, but we can add an initial statement about the benefits of interoperability just to avoid misunderstandings.

So we still need standards. I could provocatively say: we avoid one standard by defining another one. If we still need a standard, why do we believe that throwing the old standard away is a good thing? What are the technical (or political) reasons for our belief that this is a good idea? What are the problems with the existing standards that would make it worth replacing them with a new one?

Of course, as mentioned above and previously discussed. But the point is not about replacing one standard with another. The point is that we leverage existing standards, like the ones used by web browsers, and add a few extensions; then a huge number of standards would become unnecessary, including signalling protocols for H2H or IoT/M2M-related standards.

So in order to make reTHINK work, we heavily depend on a (global) agreement of people to use our open source components (or at least the very same for all future developments for a specific functionality). We can call this "agreement" a standard, best practice accepted by everyone, etc.; but in the end it is a (de facto) standard.

No, one thing is the standard; another is its implementation. We have decided to provide the potential reTHINK standard implementation as open source to promote adoption, but it could have been done as closed source. Other implementations are welcome and desirable: we won't force anyone to use our implementation as long as theirs is compliant with the reTHINK standards. If we are successful, the runtime and the message node could be provided by many different vendors.

So the initial problem statement, i.e., standard (=interoperability) vs time to market, cannot be the problem we try to solve, right?

Why not? I think it is a question of improving the problem statement, perhaps by giving some examples of existing, ongoing standardisation work that would be avoided, or of emerging services/technologies, e.g. Message Bots. We could also try to give a rough estimate of time to market. But more ideas / contributions are welcome.

(Again, provocative formulation to help to clearly identify a problem statement by countering :-)

I appreciate that :)

sbecot commented 8 years ago

It seems that the discussion is going in many directions; we won't rewrite the Description of Work. There are many technical problems we try to solve that cannot be reduced to "standards or no standards". @emmelmann-fokus, an example of a technical problem: in IMS (let's call it old-fashioned standardized telephony), identity is tightly coupled to the signaling protocol, because it was designed mainly for telephony services. But we are facing services deployed on the Web, where no signaling is designed in. Identities are multiple and use different authentication/authorization protocols. Plus, the WebRTC security model assumes that the communication service provider is uncorrelated with the Identity provider. This is a chance for the user, who can choose his/her identity, and this is the way services can work on the Internet; but it comes from a culture where identity management was designed to authenticate against a single service, or for authentication delegation, where interaction was only with someone already logged on to the same service, which you assume to trust since you have chosen it. This is rather different from a communication with a person logged on to another service that you don't even know. This may be a Telco view, but at least there are technical issues. Interoperability, as Paulo says, is a technical challenge, and protocol on the fly is an answer.
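For reference, the decoupling of communication service provider and identity provider described above is what the W3C WebRTC identity architecture specifies: the browser itself, not the service's JavaScript, loads an IdP proxy from a well-known location on the IdP's domain and binds the verified identity to the media session. A brief sketch; browser support for this API has been limited (hence the casts), and "idp.example" / "alice" are placeholders:

```typescript
// WebRTC identity assertion sketch (per the W3C WebRTC identity spec).
// Shown on a single connection for brevity; in practice the caller sets
// the IdP and the callee reads peerIdentity on its own connection.

const pc = new RTCPeerConnection();

// Caller side: the browser fetches the IdP proxy from the IdP's domain
// (under /.well-known/idp-proxy/ per the spec) and uses it to sign the
// DTLS fingerprint carried in the SDP, independently of the signalling.
(pc as any).setIdentityProvider("idp.example", {
  protocol: "default",
  usernameHint: "alice@idp.example",
});

// Callee side: the verified identity arrives bound to the session itself,
// so it does not depend on trusting the communication service provider.
((pc as any).peerIdentity as Promise<{ idp: string; name: string }>)
  .then((id) => console.log(`verified peer: ${id.name} via ${id.idp}`))
  .catch(() => console.log("peer identity could not be verified"));
```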
We also have the problem of managing the latency and throughput of data when using real-time communication services on a network that is not specialized, with clients that implement congestion-control algorithms that are not known to the network provider and are subject to change without notice. Another thing is deploying software at the edge of the network or in the devices, to avoid a centralized architecture while staying cloud-compatible.
I think a lot of these problems are addressed in reThink: the Id model, Protofly, etc. Then comes the Hyperty concept, which sits above all of this.
The Hyperty is an architectural concept, closely related to software design. You bring the services into the browser. Of course they need some kind of standardization, more related to W3C than to ITU. The problem with the Hyperty concept is probably that we could solve the interop, id, cloud, etc. issues without Hyperties, so we'll need a strong argument to justify the way it is designed; but I'm sure Paulo can provide this.
The fact is that we are facing technical problems, but also political and societal issues. Standards taking a long time is only one facet of the problem. We want to be more agile than Telcos usually are; it's not such a big challenge to go faster ;-)

emmelmann-fokus commented 7 years ago

Closing 1-year-old discussions.