meteorhacks / cluster

Clustering solution for Meteor with load balancing and service discovery
https://meteorhacks.com/cluster-a-different-kind-of-load-balancer-for-meteor.html
MIT License

Using collection from web service in separate service #74

Open eventrio opened 9 years ago

eventrio commented 9 years ago

So I am currently working through converting much of the auxiliary code base into microservices, after completing the architecture lesson in BulletProof Meteor, and I was wondering how I can get data from a collection in the web service from a separate service. At the moment I am only able to get the data inside a connection.subscribe callback, like so:

Cluster.connect("mongodb://localhost/service-discovery");
Cluster.register("logging");

// Discover the 'web' service and back a collection with that connection
var webConn = Cluster.discoverConnection('web');
Companies = new Mongo.Collection('companies', {connection: webConn});

webConn.subscribe('exhibitors', function () {
    console.log("inside subscription");
    console.log(Companies.find().count());
});

What I am having trouble figuring out is how to always have access to the Companies data without having to resubscribe. I envision a scenario where I can call Companies.find({}) on the server in the logging service without wrapping each call in a subscribe. As an example of what I am talking about, I would have something like this in the logging service:

Cluster.connect("mongodb://localhost/service-discovery");
Cluster.register("logging");

var webConn = Cluster.discoverConnection('web');
Companies = new Mongo.Collection('companies', {connection: webConn});

webConn.subscribe('exhibitors', function () {
    console.log("inside subscription");
    console.log(Companies.find().count());
});

Meteor.methods({
    findByName: function (name) {
        // fetch() the cursor: method return values must be EJSON-serializable
        return Companies.find({name: name}).fetch();
    },
    findById: function (id) {
        return Companies.findOne({_id: id});
    }
});

Then I would call it from the 'web' service using the connection.call method as normal. It seems that wrapping each database access in a subscription would add significant overhead and could be less efficient.
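For the cross-service call itself, here is a minimal sketch (assuming the hypothetical `findByName` method above, and that `Cluster.discoverConnection` returns a standard DDP connection, as in the snippets in this thread):

```javascript
// A thin wrapper: anything exposing a DDP-style `call` method works,
// which also makes the lookup easy to exercise in isolation.
function findCompanyByName(conn, name) {
    // On the server, conn.call runs synchronously and returns the method result
    return conn.call('findByName', name);
}

// In the 'web' service (assumes the cluster setup from the question):
// var loggingConn = Cluster.discoverConnection('logging');
// var matches = findCompanyByName(loggingConn, 'Acme');
```

Because the wrapper only depends on the `call` method, the same code works with a plain `DDP.connect` connection or a cluster-discovered one.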

My actual situation is using a microservice to process invoices for orders on a scheduled basis, which requires quite a few methods on the "accounting" service that can be called from a scheduler on the "web" or "accounting" service. In the future the invoices will be processed in real time, so it is important to make this as efficient as possible. Any help would be greatly appreciated.

Xample commented 9 years ago

Here is what I am using (a server connecting to another server). It does not use cluster yet, but that should change soon by discovering the connection instead of hard-coding it:

// This code runs on a slave server listening on port 3000
var server = 'http://localhost:5000'; // address of the master server
var backendConnection = DDP.connect(server); // open a DDP connection
// A local collection mirroring the remote content
var BackendCollection = new Mongo.Collection('backendcollection', {connection: backendConnection});

// Subscribing on the server fills BackendCollection
var subscription = backendConnection.subscribe('subscriptionName');
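Replacing the hard-coded address with cluster's service discovery should, as far as I understand the API, look roughly like this (the 'master' service name is illustrative; it is whatever name the master registers under):

```javascript
// Let cluster resolve the master's address instead of hard-coding it
Cluster.connect("mongodb://localhost/service-discovery");
var backendConnection = Cluster.discoverConnection('master');
var BackendCollection = new Mongo.Collection('backendcollection', {connection: backendConnection});
```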

I can then use BackendCollection in the traditional way. Now, in order to cache all the content on the slave server, I simply create a local collection, observe the remote one, and fill the local one as events arrive. It gives something like:

// Somewhere in both the client and server code
var LocalCollection = new Mongo.Collection('localcollection');

// Server code: mark every cached document as unavailable before resyncing
LocalCollection.update({}, {$set: {"available": false}}, {multi: true});

var handle;
// `options` holds app-specific subscription arguments
var subscription = backendConnection.subscribe('videos', options,
    {
        onReady: function ()
        {
            console.log("Subscribed on server: " + server);
            var addedCount = 0;
            handle = BackendCollection.find().observe({
                added: function (document)
                {
                    var id = document._id;
                    var model = {
                        data: document,
                        available: true
                    };

                    var upsert = LocalCollection.upsert(id, {$set: model});
                    var wasInsert = _.has(upsert, "insertedId");
                    if (wasInsert)
                    {
                        console.log("Item added", document._id);
                        addedCount++;
                    }
                },
                changed: function (newDocument, oldDocument)
                {
                    LocalCollection.update(oldDocument._id, {$set: {"data": newDocument}});
                    console.log("Item changed", newDocument._id);
                },
                removed: function (oldDocument)
                {
                    LocalCollection.remove(oldDocument._id);
                    console.log("Item removed", oldDocument._id);
                }
            });

            // Sweep: documents the observer did not re-validate no longer exist upstream
            var removedCount = LocalCollection.remove({"available": false});
            console.log("Added " + addedCount + " documents, removed " + removedCount + " documents from the collection");
        },
        onStop: function ()
        {
            console.log("Subscription stopped");
            if (handle)
            {
                console.log("Stopped observing");
                handle.stop();
            }
        }
    });

Note that you could have done something much simpler; I chose this approach to avoid removing documents from the LocalCollection right before adding them again. Here I flag them all as unavailable, wait for the whole subscription to be ready (during which the observer re-validates the existing documents), and only then remove the unavailable documents (those not re-validated by the observer simply no longer exist on the master server).
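The flag-and-sweep idea above can be sketched outside Meteor with a plain map, to make the bookkeeping explicit (the function name and shape here are illustrative, not part of any Meteor or cluster API):

```javascript
// Minimal mark-and-sweep cache refresh: mark everything stale, re-validate
// each document the upstream snapshot still contains, then sweep the rest.
function refreshCache(cache, snapshot) {
    // Mark: assume every cached document is unavailable until proven otherwise
    for (const entry of cache.values()) entry.available = false;

    // Re-validate / upsert: documents present upstream become available again
    for (const doc of snapshot) {
        cache.set(doc._id, { data: doc, available: true });
    }

    // Sweep: anything not re-validated no longer exists upstream
    let removed = 0;
    for (const [id, entry] of cache) {
        if (!entry.available) {
            cache.delete(id);
            removed++;
        }
    }
    return removed;
}
```

This mirrors the snippet above: the bulk `{$set: {"available": false}}` update is the mark, the observer's `added` callback is the re-validate, and the final `remove({"available": false})` is the sweep.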