crossbario / crossbar

Crossbar.io - WAMP application router
https://crossbar.io/

Invoke-all invocation policy #480

Open oberstet opened 9 years ago

oberstet commented 9 years ago

Implement the autobahn.wamp.message.Register.INVOKE_ALL endpoint invocation policy.

Under the "invoke all" invocation policy, one or more procedure endpoints may be registered under the same URI. The procedure endpoints are expected to have the same (or compatible) procedure signature. An incoming call is dispatched to all endpoints, the results are accumulated, and the accumulated result is returned to the caller.

The intermediate results may also be returned as progressive call results if the caller supports and has requested that behavior.
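The fan-out-and-accumulate semantics described above can be sketched as a plain asyncio simulation. All names here (`dispatch_all`, the endpoint functions) are illustrative only, not Crossbar.io or Autobahn API; a real dealer would dispatch WAMP INVOCATION messages to remote callees, not local coroutines:

```python
import asyncio

async def dispatch_all(endpoints, *args):
    """Call every endpoint registered under the same URI and
    accumulate the individual results into one list (in
    registration order, as asyncio.gather preserves order)."""
    results = await asyncio.gather(*(ep(*args) for ep in endpoints))
    return list(results)

# Two callees sharing one registration, with the same signature.
async def disk_free_node_a():
    return {"node": "a", "free_gb": 120}

async def disk_free_node_b():
    return {"node": "b", "free_gb": 64}

accumulated = asyncio.run(dispatch_all([disk_free_node_a, disk_free_node_b]))
print(accumulated)
```

Under a progressive-call-results variant, each individual result could instead be forwarded to the caller as soon as it arrives, rather than after all endpoints have answered.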

oberstet commented 9 years ago

This is a new feature not in the spec (not even in alpha status, see chapter "13.3.9. Shared Registration").

mfojtak commented 9 years ago

Would it be possible to invoke just one endpoint from the list of "the same" endpoints? Let's say I want to implement load balancing. I have two machines and I expose a procedure "wamp.test.expensive_operation" on both of them. Now I have two callees registered for this procedure. Let's say that both machines also expose a procedure which gives me back the current CPU usage. Now I want to use the CPU information to decide which of the two "wamp.test.expensive_operation" instances to call (lower CPU usage wins).

In other words - can I implement a custom invocation policy?
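The CPU-aware selection described above boils down to a one-line decision. This is a hypothetical sketch of what such a custom policy would compute, not an existing Crossbar.io feature (`pick_callee` and the load mapping are made up for illustration):

```python
def pick_callee(callees):
    """callees: mapping of callee id -> current CPU usage (0.0 to 1.0),
    e.g. as reported by each machine's CPU-usage procedure.
    Returns the id with the lowest usage ("lower CPU usage wins")."""
    return min(callees, key=callees.get)

loads = {"machine1": 0.85, "machine2": 0.30}
print(pick_callee(loads))  # machine2
```

The open question in the thread is not this selection logic but where it runs: in the router as a pluggable policy, or in an external "invocation proxy" component.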

singlecheeze commented 9 years ago

@oberstet from your explanation, are you trying to do a "big data" strategy here: distribute a caller's work to many endpoints and then have the endpoints' results aggregated before a response is sent back to the caller? If so, I would much rather see more load balancing options explored, like @mfojtak mentions, before anything this complex is implemented. The current round-robin, random, first, and last policies seem like a great start, but to @mfojtak's point, a "round-robin" based on CPU load would be much better, and seemingly easier to implement with a meta event that does a quick check on the CPU load of registered endpoints (maybe even with a last 1 min / 5 min / 15 min config option).

The above makes sense especially for those using asyncio AND parallelism/multiprocessing. Asyncio is a great CONCURRENCY framework but does not do parallelism out of the box like async/await does for .NET 4.5 (to at least the extent of running anything after await on a background thread).

With Python 3.5's async/await handling concurrency well, and its ability to use ThreadPool and ProcessPool executors for parallelism, I don't see a huge need for "RPC parallelism" (for lack of a better phrase) either. Instead, why not just spin up another VM with more endpoints in order to distribute work with shared registrations? I see how fun it might be to use an RPC parallelism model where literally any endpoint capable of processing becomes eligible to process a piece of work from a "bigger call", but this gets messy. Take Hadoop as an example: it requires standby caches and "witnesses" for job metadata in case a node fails. Crossbar would have to implement the same... just my two cents, and maybe this isn't where you were going at all, not sure...

sehoffmann commented 9 years ago

@mfojtak @oberstet I think having http://autobahn.ws/js/reference.html#receiver-black-whitelisting for this ALL-policy would be really nice. Basically this would be useful for situations where you want to address multiple possible peers (e.g. what we can currently do with events/subscriptions) but want to get a result back.

oberstet commented 9 years ago

@mfojtak WAMP already has the round-robin, random, etc. invocation policies. As for load-aware balancing: a simple approach would be for the router to measure the time taken by the different callees sharing the registration and adjust selection based on that. No need for custom invocation policies, load metering procedures or the like.
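A minimal sketch of the latency-based selection suggested here, assuming the router keeps an exponential moving average of observed call durations per callee and biases random selection toward faster ones. The class and method names are hypothetical, not Crossbar.io router internals:

```python
import random

class LatencyBalancer:
    def __init__(self, callees, alpha=0.2):
        # start with an optimistic 1.0s average for every callee
        self.avg = {c: 1.0 for c in callees}
        self.alpha = alpha

    def record(self, callee, elapsed):
        # exponential moving average of observed call duration
        self.avg[callee] = (1 - self.alpha) * self.avg[callee] + self.alpha * elapsed

    def select(self):
        # weight each callee inversely to its average latency,
        # so faster callees are picked more often
        weights = {c: 1.0 / t for c, t in self.avg.items()}
        r = random.uniform(0, sum(weights.values()))
        for callee, w in weights.items():
            r -= w
            if r <= 0:
                return callee
        return callee  # numeric edge case: fall back to the last one
```

Note this sketch would still suffer from the objection raised below: equal latencies on a 1-core and an 8-core machine yield equal weights, even though the 8-core machine has far more spare capacity.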

@singlecheeze This has nothing to do with Python asyncio, multiprocessing or the like. This is about WAMP, which works with components written in any language. But I'm not sure I get the point you want to make.

@Paranaix We had a discussion somewhere (can't find it right now) about whether we should have black/whitelisting also for RPC. What's the use case? Do you have an application level example?

useful for situations where you want to address multiple possible peers

So what's an example of such a situation?

FWIW, the need for an ALL invocation policy comes from this use case: https://github.com/crossbario/crossbar/issues/479

mfojtak commented 9 years ago

@singlecheeze I see what you mean. WAMP could be used like Hadoop for RPC distributed calls.

@oberstet The time measured for different callees doesn't solve the problem. You can have one callee with a 1-core CPU and a second with 8 cores. One call might take the same time on both, but you can load the second callee 8 times more than the first. Only CPU load information lets you distribute the work correctly.

The simplest solution is to allow a custom invocation policy and let the community implement whatever they need. A simple way is to allow registering an invocation proxy for a procedure. Once the procedure is called, the dealer first calls the proxy procedure, lets the proxy get a result, and returns that result to the caller. The proxy would decide to which individual callee or callees it would distribute the call. The invocation proxy could be remote (a procedure URI) or local (a Python plugin in Crossbar). The problem with remote is how the invocation proxy would call the individual callees, because the procedure name is not unique; it represents the whole group of callees. One solution is for CALL to accept a registrationId, not just a URI as it does now.

Every time the procedure with multiple callees is called, the dealer calls the proxy procedure with the list of registrations and the procedureUri as parameters. The proxy might have different implementations for different procedureUris. This gives users good flexibility.
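The proposed proxy hook reduces to a function with the signature sketched here. Everything in this snippet is hypothetical (it illustrates the proposal, not any existing Crossbar.io API): the dealer would pass the shared procedure's URI plus the competing registration ids, and the proxy returns the registration to invoke:

```python
def invocation_proxy(procedure_uri, registrations, metrics):
    """Hypothetical custom-policy hook.

    registrations: list of registration ids sharing procedure_uri.
    metrics: per-registration info the proxy bases its decision on
    (here: current CPU usage, matching the load-balancing use case).
    Returns the registration id the dealer should invoke.
    """
    return min(registrations, key=lambda reg_id: metrics[reg_id]["cpu"])

regs = [101, 102]
metrics = {101: {"cpu": 0.9}, 102: {"cpu": 0.2}}
chosen = invocation_proxy("wamp.test.expensive_operation", regs, metrics)
print(chosen)  # 102
```

Returning a registration id rather than a URI is exactly why the proposal needs CALL to accept a registrationId: the URI alone cannot address one member of the group.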

oberstet commented 9 years ago

The simplest solution is ..

No, not simple.

The proxy would decide to which individual callee or callees it would distribute the call.

Nope. The router decides.

mfojtak commented 9 years ago

Anyway. Judging from your swift answer it looks like custom invocation policy is a no-go :-)

oberstet commented 9 years ago

@mfojtak don't get me wrong .. I welcome all these comments, and I am not saying I am strictly opposed / couldn't be convinced .. but it's simply overwhelming.

If you want to see completely new features (which I'd book that under), please consider working on concrete PRs for the spec.

Polishing up the spec only with the features it already has is a LOT of work. The spec has 140 pages already! And it is nowhere near complete text-wise. There are whole features which are described in literally 3 sentences ;)

sehoffmann commented 9 years ago

@oberstet Yes, we actually have use cases for this. We need this for something which I guess you could call telemetry. Imagine many peers providing a GetX() method. The problem currently is that without an invoke-all policy and callee black/whitelisting (which again would be very consistent in design with subscriptions), we would need to identify the peer we want to address via the URI (e.g. by attaching the WAMP session).

Now why is this a problem? Because it complicates authorization for us quite a bit. We currently use some.method.[USER_ID] as our URI/authorization scheme, and at the same time authid == USER_ID holds true for regular users. This makes authorization really easy for us. Attaching the WAMP session id (additionally or solely) would result in one extra query and at least 4-5 more lines per URI/method in our case. Of course you could eventually abstract that away to some degree for multiple methods, but I still think it makes things unnecessarily complicated and ugly.

Callee black/whitelisting would provide an elegant solution for this. It also allows us to query multiple (specific) peers at a time without any boilerplate code (without it, this means building multiple URIs, calling every single one separately and joining all deferreds with e.g. a DeferredList; quite a lot of boilerplate code compared to a single line IMO).
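The boilerplate being described (build one URI per peer, call each separately, join the results) can be sketched roughly as follows, using asyncio.gather as a stand-in for Twisted's DeferredList. The URI scheme follows the some.method.[USER_ID] pattern from the comment above, but `fake_call` and the suffix are invented for the sketch; a real client would use session.call:

```python
import asyncio

async def fake_call(uri):
    # stand-in for a WAMP session.call(uri); returns a dummy per-peer result
    return {"uri": uri, "value": 42}

async def get_x_from_peers(user_ids):
    # one URI per addressed peer, following the some.method.[USER_ID] scheme
    uris = [f"some.method.{uid}.get_x" for uid in user_ids]
    # call every peer separately and join the results (DeferredList-style)
    return await asyncio.gather(*(fake_call(u) for u in uris))

results = asyncio.run(get_x_from_peers(["alice", "bob"]))
print([r["uri"] for r in results])
```

With invoke-all plus callee whitelisting, the loop and the join would collapse into a single call on one shared URI, with the peer selection expressed as a call option.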

oberstet commented 9 years ago

Let's try to put it like this: for the foreseeable future, I (personally) will work on the spec and limit my comments to actual PRs.

maxMidius commented 8 years ago

@oberstet - calling multiple endpoints with the same signature for load sharing, where you have to aggregate results after all are done, is quite different from calling endpoints to execute the same function on each of them, for example checking how much disk space is available on each endpoint. The latter is a lot simpler to spec out than trying to do big-data load distribution.

goeddea commented 8 years ago

@maxMidius - this is all yet to be specced anyway, but your use case falls within the general scope of "invoke all".