uber-archive / multitransport-jsonrpc

JSON-RPC Client (Node.js & Browser) and Server (Node.js) that aim for "natural-looking" server and client code.

Dynamic methods or a single function handler #47

Open moll opened 10 years ago

moll commented 10 years ago

Hey,

How about skipping the whole scope limitation and just allowing a single function to be passed to the server to handle method calls? That would make it easier to delegate to preexisting objects and to handle a larger API more conveniently.

Something like:

new JsonRpc.server(transport, function(req, callback) {})

I do see it's possible to override handleJSON with some trickery (given that it's bound to this in the JSONRPC constructor), but that would also require duplicating the error-handling functionality.
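For illustration, here's a minimal sketch of what that generic handler could look like, delegating to a preexisting object. The `api` object, the request shape, and the `genericHandler` name are all hypothetical, not part of this library's API:

```javascript
// A hypothetical preexisting object we want to expose as-is.
var api = {
    add: function(a, b, callback) { callback(null, a + b); },
    echo: function(msg, callback) { callback(null, msg); }
};

// Generic handler: look the method up on `api` and forward the params.
// This assumes the server would hand us { method, params } plus a callback.
function genericHandler(req, callback) {
    var method = api[req.method];
    if (typeof method !== 'function') {
        return callback(new Error('Method not found: ' + req.method));
    }
    // concat appends the callback as the final argument
    method.apply(api, req.params.concat(callback));
}

// Proposed constructor form:
// new JsonRpc.server(transport, genericHandler);
genericHandler({ method: 'add', params: [2, 3] }, function(err, result) {
    console.log(result); // 5
});
```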

dfellis commented 10 years ago

Doing that would break the rpc.methodList built-in mechanism that the client uses to auto-populate the methods you can call.

I suppose I don't see what's so onerous about binding all of your methods to their parent objects and attaching them to a hash table? You could even do it with an IIFE:

new JsonRpc.server(transport, (function() {
    var scope = {};
    // All methods to be added
    Object.keys(obj1).forEach(function(key) {
        scope[key] = obj1[key].bind(obj1);
    });
    // Subset of methods to be added
    scope.foo = obj2.foo.bind(obj2);
    scope.bar = obj2.bar.bind(obj2);
    return scope;
})());

Also, once you're making your JSON-RPC server in the fashion you've described, you're not really doing anything "RPC" at all, and you may as well use a simple REST interface with an express / browser-request pair.

REST has some overhead involved, but it's more universally compatible than JSON-RPC; you can automatically load-balance with HAProxy and cache with Varnish, so the horizontal scaling story is better.

I view JSON-RPC as primarily for when you want to affect the state of this particular server. The entire server can be viewed as an object inside your code that you instantiate (and it has some state on "initialization" in your code) and the object has methods you can call. What these methods are should be explicit just like any other object in your code, and that's why I don't really understand the desire for a generic request handler for such a beast.

I'm willing to be convinced, though, and it should be trivial-enough to add the functionality. :)

moll commented 10 years ago

Hey again. Thanks for the prompt reply.

I'm exposing my whole domain's repositories via RPC, and there are too many methods and functions (incl. deeply nested ones) to bother iterating over them. Plus they're promise-based rather than callback-style, so that would mean a buttload of pre-generated wrapper functions that might never be needed.
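To sketch what I mean by the promise mismatch: a promise-returning repository method has to be adapted to the callback style the server expects. Something like the following generic adapter (the `repo` object and `callbackify` name are just illustrations):

```javascript
// A stand-in for one of the promise-based domain repositories.
var repo = {
    findUser: function(id) {
        return Promise.resolve({ id: id, name: 'example' });
    }
};

// Generic adapter: promise-returning method -> node-style callback method.
function callbackify(fn, thisArg) {
    return function() {
        var args = Array.prototype.slice.call(arguments);
        var callback = args.pop(); // last argument is the node callback
        fn.apply(thisArg, args).then(
            function(result) { callback(null, result); },
            function(err) { callback(err); }
        );
    };
}

// One such wrapper per exposed method -- this is the part that adds up.
var scope = { findUser: callbackify(repo.findUser, repo) };
scope.findUser(42, function(err, user) {
    console.log(user.id); // 42
});
```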

I'm not too keen on the overhead of HTTP just so I could tag it REST and drink the koolaid alone. :) JSON-RPC also seems like something one could use from different internal services regardless of language. At least it's not a custom binary format. (Although Multitransport's TCP implementation seems to be. :)) I'm going for internal RPC for SOA.

Anyways, I managed to subclass the server and rewrite handleJSON with a simpler implementation that did just what I wanted and beautifully so. Disabled the autoregistering thing as well. So far so good.

It's up to you if you feel like adding it to the core, though. :)

dfellis commented 10 years ago

Ahh, so you're one of those promise people... ;)

So on a few points you bring up:

  1. Yes, JSON-RPC's TCP implementation is a custom binary format because there's no other way to handle long-lived TCP connections -- you have to have some sort of format to segment multiple messages from each other. Otherwise you open a connection for each req-res pair and you just have HTTP-lite. (Which may not be bad, but part of the allure of raw TCP was reducing the socket usage from unpredictable to 1 per external service.)
  2. I don't really know what you mean by "whole domain's repositories via RPC".
  3. RPC for SOA is understandable to me, but it's a siren song. You'll get trapped in services never being allowed to be more than 1 process large because you'll come to depend on some sort of state. REST on the outside of a service and RPC for inter-peer communication has worked pretty well for us when your service has to have internal state to function. (And most of the time that internal state is better stored in something like Redis, Riak, Postgres, etc, and no RPC happens at all in your service.) Beyond REST, pubsub streams like Kafka have also worked well when many services need to get the exact same data.
  4. As for your subclassing, I'll take a look. Support for promise-based APIs seems like something that should be done, but I still think JSON-RPC is actually only useful in very particular situations, where you need to treat the remote service as an object and your client is willing to make sure it's using the "correct" object if you need to scale horizontally.
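To illustrate the segmentation problem from point 1: over a long-lived TCP stream you need some framing to tell consecutive messages apart. A common scheme (shown here as an illustration, not necessarily this library's actual wire format) is a 4-byte length prefix before each JSON payload:

```javascript
// Frame a message: 4-byte big-endian length header, then the JSON body.
function frame(message) {
    var body = Buffer.from(JSON.stringify(message), 'utf8');
    var header = Buffer.alloc(4);
    header.writeUInt32BE(body.length, 0);
    return Buffer.concat([header, body]);
}

// Deframe: walk the buffer, reading each length header and its payload.
function deframe(buffer) {
    var messages = [];
    var offset = 0;
    while (offset + 4 <= buffer.length) {
        var len = buffer.readUInt32BE(offset);
        if (offset + 4 + len > buffer.length) break; // incomplete frame
        var body = buffer.slice(offset + 4, offset + 4 + len);
        messages.push(JSON.parse(body.toString('utf8')));
        offset += 4 + len;
    }
    return messages;
}

// Two req-res payloads on one stream stay distinguishable:
var stream = Buffer.concat([frame({ id: 1 }), frame({ id: 2 })]);
console.log(deframe(stream).length); // 2
```

Without something like this, the only alternative is one connection per request-response pair, which is the "HTTP-lite" situation described above.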
pparavicini commented 9 years ago

Thank you for a great project, I was up and running with the express middleware transport in a short time.

I don't understand your point 3. You seem to be implying that JSON-RPC should only be used for CORBA-style RPC calls that target a stateful model instance, but surely that's not the case, is it?

I like JSON-RPC as the core SOA protocol because it allows me to easily expose the stateless service facades of my back end without having to do semantic contortions to fit all the calls into the CRUDish POST/GET/PUT/DELETE semantics of a REST-style interface. I still plan on exposing a REST API, as a convenience to web service consumers, in those cases where the semantics make sense.

Sorry for the digression in this bug report. Thanks again for a great project.