As soon as the Runtime is instantiated, the P2P Handler Stub is deployed and activated.
The path through which P2P Data Connection creation requests from P2P Requester Stubs are received is set in the MN with the P2P Stub URL.
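The startup sequence above can be sketched as follows. This is only an illustration under assumed names: `RuntimeInit`, `_deployP2PHandlerStub`, `setP2PRequestPath` and the placeholder URL scheme are not part of the actual runtime API.

```javascript
// Sketch (assumed names): on runtime instantiation the P2P Handler Stub is
// deployed and activated, and its URL is set in the MN as the path for
// incoming P2P Data Connection creation requests.
class RuntimeInit {
  constructor(mnClient) {
    this.mnClient = mnClient;         // client used to talk to the Message Node
    this.p2pHandlerStubUrl = null;
  }

  start() {
    // 1. deploy and activate the P2P Handler Stub (assumed helper)
    this.p2pHandlerStubUrl = this._deployP2PHandlerStub();
    // 2. register the stub URL in the MN so P2P Requester Stubs can reach it
    this.mnClient.setP2PRequestPath(this.p2pHandlerStubUrl);
    return this.p2pHandlerStubUrl;
  }

  _deployP2PHandlerStub() {
    return 'p2phandler://example.com/runtime-1/stub'; // placeholder URL
  }
}
```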
The Hyperty instance is registered in the Domain Registry with its Hyperty Runtime URL, its P2P Handler Stub instance URL and the catalogue URL of the P2P Requester Stub.
[x] Hyperty Instance model with Hyperty Runtime URL, its P2P Handler Stub instance URL and the catalogue URL of P2P Requester Stub.
Hyperty discovery is performed through the Runtime Registry, which returns the Hyperty Registry entry containing its Hyperty Runtime URL, the P2P Handler Stub instance URL and the catalogue URL of the P2P Requester Stub. It should be possible to govern this discovery according to policies enforced in the MN, i.e. the user should be able to control who has access to their P2P connections. The Runtime Registry saves the returned Hyperty Registry entry.
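A minimal sketch of this discovery step, assuming hypothetical names (`discover`, `domainLookup`, the `policy` callback and the entry field names are illustrative, not the real registry API):

```javascript
// Sketch of Hyperty discovery through the Runtime Registry: the Domain
// Registry lookup returns the remote entry (Hyperty Runtime URL, P2P Handler
// Stub instance URL, P2P Requester Stub catalogue URL), an MN-enforced access
// policy is applied, and the entry is saved locally for later resolution.
class RuntimeRegistry {
  constructor(domainLookup, policy) {
    this.domainLookup = domainLookup; // queries the Domain Registry
    this.policy = policy;             // MN-enforced access policy (assumed)
    this.cache = new Map();           // saved Hyperty Registry entries
  }

  discover(hypertyUrl, requesterIdentity) {
    const entry = this.domainLookup(hypertyUrl);
    // the user controls who may access their P2P connections
    if (!this.policy(requesterIdentity, entry)) {
      throw new Error('P2P discovery denied by policy');
    }
    this.cache.set(hypertyUrl, entry); // saved for later registry.resolve()
    return entry;
  }
}
```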
registry.resolve( ) triggers deployment of the P2P Requester Stub when the Runtime Registry has a registry entry saved in the previous step and the stub is not yet deployed. Otherwise, it queries the Runtime Registry for the Hyperty P2P Requester Stub URL.
P2P connection usage: when registry.resolve( URL ) is invoked by the msg bus, if the URL is in the P2P table the associated connection URL is returned. Otherwise it looks for the MN stub associated with the Hyperty URL domain, as is done today without P2P connections.
[x] Message Routing extended to support P2P Stub address resolution.
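The extended resolution order can be sketched as below. All helper names (`p2pTable`, `deployRequesterStub`, `mnStubForDomain`) are assumptions standing in for the real registry internals; the sketch only shows the decision flow.

```javascript
// Sketch of the extended registry.resolve() routing logic: a URL already in
// the P2P table resolves to its connection URL; a URL with a previously
// saved discovery entry triggers P2P Requester Stub deployment; anything
// else falls back to the MN stub of the URL's domain, as without P2P.
function resolve(url, registry) {
  if (registry.p2pTable.has(url)) {
    return registry.p2pTable.get(url);          // existing P2P connection URL
  }
  const entry = registry.cache.get(url);
  if (entry && !registry.deployedRequesterStubs.has(url)) {
    registry.deployedRequesterStubs.add(url);
    return registry.deployRequesterStub(entry); // triggers stub deployment
  }
  return registry.mnStubForDomain(url);         // default MN routing
}
```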
runtimeUA.loadStub( stubURL, p2pconfig ) is used to deploy the P2P Requester Stub. The new "p2pconfig" should include the remote P2P Handler Stub instance URL and the "local MN StubURL" connecting to the MN from the remote runtime, which will be used for the P2P setup signalling. No changes are required in the current stub deployment mechanism, including adding the stubUrl address in the msg.bus and in the sandbox.bus, beyond adding the "p2pconfig" JSON object to the "configuration" deployment parameter.
[x] deploy Protostub extended to support an optional p2pconfig parameter.
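A sketch of how the optional "p2pconfig" could be merged into the existing "configuration" deployment parameter. The field names below follow the text above, but the object shape and the helper name are assumptions, not the actual deployment code.

```javascript
// Sketch: the "configuration" passed to stub deployment stays unchanged
// unless a p2pconfig is given, in which case it is added as an extra field.
function buildStubConfiguration(baseConfiguration, p2pconfig) {
  if (!p2pconfig) return baseConfiguration;     // current behaviour unchanged
  return Object.assign({}, baseConfiguration, {
    p2pconfig: {
      // remote P2P Handler Stub instance URL
      remoteP2PHandlerStubUrl: p2pconfig.remoteP2PHandlerStubUrl,
      // "local MN StubURL" used for the P2P setup signalling
      localMNStubUrl: p2pconfig.localMNStubUrl
    }
  });
}
```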
P2P connection setup: as soon as the P2P Requester is activated, it starts the setup process, which is performed through messages sent via the "local MN StubURL". On the remote peer, messages are handled by the previously deployed P2P Handler Stub.
routing path setup at handler side: since the Handler Stub may manage several P2P connections, there should be one listener per P2P connection to avoid having routing logic inside the stub itself. Thus, one address per connection is registered in the buses (msg bus and sandbox bus) and in the Runtime Registry. As soon as the connection is successfully set up, the Handler Stub fires a new "p2p connection is established" event to the protostub status listener.
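The handler-side routing setup can be sketched as below, under assumed bus and registry APIs (`addListener`, `addP2PConnection` and the event object shape are illustrative):

```javascript
// Sketch of per-connection routing at the handler side: one address per P2P
// connection is registered in the msg bus, the sandbox bus and the Runtime
// Registry, so the stub itself carries no routing logic; once setup
// succeeds, the Handler Stub notifies its protostub status listener.
function onP2PConnectionSetup(connectionUrl, msgBus, sandboxBus, runtimeRegistry, statusListener) {
  const route = (msg) => { /* deliver msg to this connection's peer */ };
  msgBus.addListener(connectionUrl, route);     // runtime-side routing path
  sandboxBus.addListener(connectionUrl, route); // sandbox-side routing path
  runtimeRegistry.addP2PConnection(connectionUrl);
  statusListener({ type: 'p2p-connection-established', url: connectionUrl });
}
```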
As soon as the Runtime Registry receives the "p2p connected" event, it:
generates the P2P connection stub URL, e.g. the p2p-connectionUrl.
adds listeners to the msg bus: msgBus.addListener("p2p-connection-URL", handlerSandbox.postMsg).
creates a new P2P connection entry in the P2P table containing the remote runtime URL, the executing Hyperty URLs and the reporting data object URLs. It could even subscribe to the remote Runtime Registry to be updated with executing Hyperty URLs and reporting data objects as soon as the P2P connection is established.
asks the Handler Stub to add its listener in the minibus sandbox by sending it a subscribe request message. (Note: this mechanism can also be used by the Reporter sync manager when accepting subscription requests from Observers; this way, Stubs don't have to receive and filter all messages received by the sandbox.)
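The registry-side reaction can be sketched as one handler following these steps. The URL scheme, event fields and message shape are assumptions for illustration only:

```javascript
// Sketch of the Runtime Registry reaction to the "p2p connected" event:
// generate the connection URL, register the bus listener, create the P2P
// table entry and ask the Handler Stub to subscribe in the sandbox minibus.
function onP2PConnected(event, runtimeRegistry, msgBus, handlerSandbox) {
  // 1. generate the p2p connection stub URL (placeholder scheme)
  const connectionUrl = 'p2p-connection://' + event.remoteRuntimeUrl;
  // 2. add a listener to the msg bus for that URL
  msgBus.addListener(connectionUrl, (msg) => handlerSandbox.postMsg(msg));
  // 3. create a new P2P connection entry in the P2P table
  runtimeRegistry.p2pTable.set(connectionUrl, {
    remoteRuntimeUrl: event.remoteRuntimeUrl,
    hypertyUrls: event.hypertyUrls,        // executing Hyperty URLs
    dataObjectUrls: event.dataObjectUrls   // reporting data object URLs
  });
  // 4. ask the Handler Stub to add its listener via a subscribe request
  handlerSandbox.postMsg({ type: 'subscribe', body: { resource: connectionUrl } });
}
```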
[x] new MSC describing the P2P Connection setup.
[x] Messages spec
P2P Reporter-Observer communication is supported by only extending the Observer subscription procedure.
Following discussion at:
https://github.com/reTHINK-project/dev-runtime-core/issues/103
Impact analysis: