wankdanker / node-discover

Automatic and decentralized discovery and monitoring of nodejs instances with built in support for a variable number of master processes, service advertising and channel messaging.

Usecase for two discovery nodes with the same processUuid #14

Closed: kostiak closed this issue 8 years ago

kostiak commented 8 years ago

Right now, every node gets an instanceUuid and a processUuid. If you start two discovery nodes from the same process, they will get the same processUuid, for example:

var Discover = require('node-discover');

var d1 = new Discover();  
var d2 = new Discover();

d1 and d2 will have the same processUuid. They will not discover each other, but a discovery node outside that process will discover them both. So that limits each node process to one discovery node. Is there a use case for this, or another reason why I wouldn't be able to have multiple discovery instances (that are able to discover each other) as part of the same node process?
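
Here is the behavior end to end, for reference (the added listeners are only there to show what each instance reports):

var Discover = require('node-discover');

var d1 = new Discover();
var d2 = new Discover();

// Neither callback ever fires for the sibling instance, because d1 and d2
// share the same processUuid and messages from one's own process are ignored.
d1.on("added", function (node) {
    console.log("d1 discovered a node:", node);
});
d2.on("added", function (node) {
    console.log("d2 discovered a node:", node);
});

// A Discover instance started in a separate process on the same network
// will report both d1 and d2 via its own added events.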

I was thinking of adding something like a 'multiInstance' flag to the constructor, but wanted to see if I'm missing something and what the purpose of processUuid is.

wankdanker commented 8 years ago

Hi @kostiak,

I can't remember if there was a specific use case for this or if it is an artifact of how node used to work. At the time I was originally developing this, node (v0.6ish) worked such that you could only bind to a specific UDP port once per process. All subsequent binds to the port within the process would basically detach earlier binds from their event listeners. I may be fuzzy on those details, but I do recall discussing the behavior of UDP with the core devs and they said that you could only bind once per process.
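
For what it's worth, current versions of node let you opt into sharing a UDP port within a process explicitly. A minimal sketch with the dgram module (the port number is arbitrary):

var dgram = require('dgram');

// Both sockets opt in with reuseAddr, so they can bind the same port
// in the same process (or even across processes on the same host).
var a = dgram.createSocket({ type: 'udp4', reuseAddr: true });
var b = dgram.createSocket({ type: 'udp4', reuseAddr: true });

a.bind(12345, function () { console.log('socket a bound'); });
b.bind(12345, function () { console.log('socket b bound'); });

So the original one-bind-per-process limitation doesn't have to constrain the design anymore.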

So, I think I developed this around the assumption that one would only ever have one discover instance per process. But along the way, things like instanceUuid were added, which seems to counter what I just said.

At any rate, I can not think of a specific use case for this. I think what I would prefer to do is change the ignore flag that you implemented before to ignoreProcess and then have a new flag for ignoreInstance.

Then we would have something like:

else if (obj.pid == procUuid) {
    if (self.ignoreProcess){
        return false;
    }
    if (~reservedEvents.indexOf(obj.event)) {
        return false;
    }
    if (obj.event && obj.data) {
        self.emit(obj.event, obj.data, obj, rinfo);
    }
    else {
        self.emit("message", obj)
    }
}
else if (obj.iid == self.instanceUuid) {
    if (self.ignoreInstance) {
        return false;
    }

    //TODO: do the reserved events check?

    if (obj.event && obj.data) {
        self.emit(obj.event, obj.data, obj, rinfo);
    }
    else {
        self.emit("message", obj)
    }
}
else if (obj.event && obj.data) {
    self.emit(obj.event, obj.data, obj, rinfo);
}

Or something similar where there is not so much repeated code.
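
For instance, the whole chain could collapse into one small helper, roughly like this (a sketch only; the names and exact shape are illustrative, not what will necessarily land in the branch):

// Rough sketch; self, procUuid and reservedEvents are the values already
// in scope in the message handler above, and the flag names follow the
// proposal (ignoreProcess / ignoreInstance).
function dispatchMessage(self, obj, rinfo, procUuid, reservedEvents) {
    var fromSelfProcess = obj.pid == procUuid;
    var fromSelfInstance = obj.iid == self.instanceUuid;

    // drop our own traffic unless the new flags let it through
    if (fromSelfProcess && self.ignoreProcess) return false;
    if (fromSelfInstance && self.ignoreInstance) return false;

    // never re-emit reserved (internal) events that came from ourselves
    if ((fromSelfProcess || fromSelfInstance) && ~reservedEvents.indexOf(obj.event)) {
        return false;
    }

    if (obj.event && obj.data) {
        self.emit(obj.event, obj.data, obj, rinfo);
    }
    else if (fromSelfProcess || fromSelfInstance) {
        self.emit("message", obj);
    }

    return true;
}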

The reservedEvents may need to have the hello event removed in order for that to work. I tested removing hello from reservedEvents and with ignore set to false I was able to discover instances from the same process.

wankdanker commented 8 years ago

The reservedEvents was something that I believe you implemented, and I'm not quite sure of its importance; I'll defer to you on that for now.

If it was not ultimately that important, the code above could be simplified to:

else if (obj.pid == procUuid && self.ignoreProcess) {
    return false;
}
else if (obj.iid == self.instanceUuid && self.ignoreInstance) {
    return false;
}
else if (obj.event && obj.data) {
    self.emit(obj.event, obj.data, obj, rinfo);
}

wankdanker commented 8 years ago

I recall now. procUuid was originally used to prevent an instance from consuming itself. Ideally, once instanceUuid was created, the code that prevented self-consumption should have been changed to check the iid instead of the pid.

Anyway, the sole purpose is to prevent an instance from getting events from itself. I'm OK with changing that behavior as long as there are flags to enable backwards compatibility, and we can do a major version bump.
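
With the flags in place, opting into same-process discovery would look roughly like this (a sketch only; nothing about the names or defaults is final):

var Discover = require('node-discover');

// Illustrative flag names, following the proposal above:
//   ignoreProcess:  drop messages whose processUuid matches our own
//   ignoreInstance: drop messages whose instanceUuid matches our own
var d = new Discover({
    ignoreProcess: false,  // let instances in the same process see each other
    ignoreInstance: true   // but never consume our own messages
});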

kostiak commented 8 years ago

I figured that procUuid was used to prevent an instance from consuming itself. Since instanceUuid was added and objects had both instanceUuid and processUuid I was just making sure there isn't another reason it was kept. Thanks a lot for the explanation and quick response.

I used reservedEvents for a similar purpose - to prevent reserved events that the node sends out from being received by itself. Correct me if I'm wrong, but right now if you add the ignoreInstance flag to both the nodes in basic-self.js, they will start receiving their own reserved events.

A good way to demonstrate it is to set the ignoreInstance flags to false in basic-self.js and listen to the master event:

d1.on("master", function (obj) {
  console.log("d1: New master promoted.");
  console.log(obj);
});

Since it's receiving and evaluating its own hello message, it sees a master that's not in its node list and thinks it's a new master.

I would suggest maybe something like:

else if (obj.iid == self.instanceUuid && 
  (self.ignoreInstance || ~reservedEvents.indexOf(obj.event))) {

I think for now it would be enough to add the flag as a feature and just bump a minor version without breaking backwards compatibility. Maybe add some kind of warning that this behavior will be changed in the next major version bump.

wankdanker commented 8 years ago

Just updated my issue-14 branch. I think the best way to handle this is to bail early when processing the hello message in Discover::evaluateHello(). Now that I think about this more, the reserved events don't really apply to the network level. The only internal event that should be received from the network is the hello event. Unless I'm totally forgetting/missing something.
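
In other words, something along these lines (field and parameter names here are only illustrative; the branch may differ):

// Sketch only: bail out of hello processing for our own instance.
Discover.prototype.evaluateHello = function (data, obj, rinfo) {
    // Ignore our own hello so an instance never treats itself as a newly
    // discovered node or as a new master.
    if (obj.iid == this.instanceUuid) {
        return;
    }

    // ...the rest of the existing hello handling stays as-is...
};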

kostiak commented 8 years ago

You are completely right regarding hello being the only event from the network level, good point. And yes, bailing inside evaluateHello() solves that problem.

I'd say the issue-14 branch "closes" the issue at this point. Thanks again! :+1:

wankdanker commented 8 years ago

Pushed v0.3.0 to npm.