Brewskey / spark-server

An API compatible open source server for interacting with devices speaking the spark-protocol
https://www.particle.io/
GNU Affero General Public License v3.0

eventProvider.js not working #216

Closed: haeferer closed this issue 7 years ago

haeferer commented 7 years ago

Hi,

thanks a lot for the fast response to this request!

But it does not work, at least not when I tried it.

First, I changed the example code from

import type { Event } from '../src/types';

import { Container } from 'constitute';
import { defaultBindings } from 'spark-server';

const container = new Container();
defaultBindings(container, settings);

container.constitute('EVENT_PROVIDER').onNewEvent((event: Event) => {
  // do piping stuff here.
});

to

import type { Event } from '../types';
import { Container } from 'constitute';
import defaultBindings from '../defaultBindings';
import settings from '../settings.js';
import logger from '../lib/logger';

const container = new Container();
defaultBindings(container, settings);

const deviceServer = container.constitute('DeviceServer');
deviceServer.start();

const evProvider = container.constitute('EVENT_PROVIDER');
evProvider.onNewEvent((event: Event) => {
  logger.info('Event',event);
});

because some libraries were missing and the original code exited right away, probably because no device server was started.

Now the code runs, but nothing happens.

Next, I integrated the code into main.js (from spark-server):

container.constitute('EVENT_PROVIDER').onNewEvent((event: Event) => {
  logger.info({ ev: event }, 'Incomming Event');
});

No error, but nothing happens...

In the end I changed my bunyan hack from yesterday to subscribe to '*' instead of 'all' (which worked yesterday):

deviceServer._eventPublisher.subscribe('*', (ev: any) => {
  logger.debug({ data: ev.data, event: ev.name, name: ev.deviceID }, 'Incomming Event');
});

Now the events are coming in...

What am I doing wrong? :(

haeferer commented 7 years ago

I've implemented the hack from yesterday in the sample code, and it works:

import type { Event } from '../types';
import { Container } from 'constitute';
import defaultBindings from '../defaultBindings';
import settings from '../settings.js';
import logger from '../lib/logger';

const container = new Container();
defaultBindings(container, settings);

const deviceServer = container.constitute('DeviceServer');
deviceServer.start();

// THIS DOESNT WORK
const evProvider = container.constitute('EVENT_PROVIDER');
evProvider.onNewEvent((event: Event) => {
  logger.info('Event onNewEvent',event);
});

// THIS WORKS
deviceServer._eventPublisher.subscribe('*', (ev: any) => {
  logger.info({ data: ev.data, event: ev.name, name: ev.deviceID }, 'Incomming Event');
});
haeferer commented 7 years ago

Ok, this is the reason (or at least one problem):

if (!event.isPublic && filterOptions.userID !== event.userID) {
  return;
}

I need my filterOptions.userID set, otherwise I will only get public events :(

Or subscribe must be called without filterOptions set.


Here is the handling:

    const listener = filterOptions
      ? this._filterEvents(eventHandler, filterOptions)
      : eventHandler;

According to this code, a filter is created if there are filterOptions. If there is a filter, you CAN consume private events, but ONLY if you have the matching userID.

So for a "catch all" provider the filterOptions should not be set, or the option "listenToBroadcastedEvents" should not be handled as a filter!
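
To illustrate what I mean, here is a minimal sketch of the filter behaviour as I understand it from the snippets above (the standalone filterEvents helper and the event shapes are just my own simplification, not the real spark-protocol code):

// Simplified version of the filter logic quoted above, only for illustration.
const filterEvents = (eventHandler, filterOptions) => (event) => {
  // This is the check that drops private events when the userIDs don't match.
  if (!event.isPublic && filterOptions.userID !== event.userID) {
    return;
  }
  eventHandler(event);
};

const log = (event) => console.log('got event:', event.name);

// Subscriber built WITH filterOptions but without a matching userID:
// private events are silently dropped, only public ones get through.
const filtered = filterEvents(log, { userID: undefined });
filtered({ name: 'temperature', isPublic: false, userID: 'some-user' }); // dropped
filtered({ name: 'temperature', isPublic: true, userID: 'some-user' });  // logged

// Subscriber built WITHOUT filterOptions (what I would expect for a catch-all provider):
// every event reaches the handler.
const unfiltered = log;
unfiltered({ name: 'temperature', isPublic: false, userID: 'some-user' }); // logged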

AntonPuko commented 7 years ago

1) If you use spark-server as it is, not as a lib, you don't need any extra imports; you can get the eventProvider anywhere you have access to the main constitute container.

2) You can't just use the eventPublisher without the subscribedEvents filter: if you wire up eventPublisher.subscribe -> rabbitMq -> eventPublisher.publish, it will mirror events and you'll end up with an infinite cycle (see the sketch at the end of this comment).

3)

if (!event.isPublic && filterOptions.userID !== event.userID) {
  return;
}

Yes, this can be the problem. Thanks for pointing it out. I will try to fix it today. It may not be as easy as changing that line, though.
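
To make point 2 above concrete, here is a rough sketch of the mirroring problem; the tiny in-memory publisher and the echoing queue are only stand-ins, not the real EventPublisher or any RabbitMQ code:

// Stand-in publisher with the same subscribe/publish shape, only for the demo.
const subscribers = [];
const eventPublisher = {
  subscribe: (name, handler) => subscribers.push(handler),
  publish: (event) => subscribers.forEach((handler) => handler(event)),
};

// Pretend queue that immediately echoes every message back, like a RabbitMQ bridge would.
const queue = {
  onMessage: (event) => {},
  send(event) { this.onMessage(event); },
};

// Bridge without a subscribedEvents filter: everything seen on the publisher goes to the queue...
eventPublisher.subscribe('*', (event) => queue.send(event));

// ...and everything coming back from the queue is published again.
queue.onMessage = (event) => eventPublisher.publish(event);

// A single publish now loops forever:
// publish -> subscriber -> queue -> publish -> subscriber -> ...
// eventPublisher.publish({ name: 'temperature' }); // would recurse until the stack blows up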

AntonPuko commented 7 years ago

OK, it was easy. It seems changing that filter line to be less strict doesn't break other things. Try again after this spark-protocol commit: https://github.com/Brewskey/spark-protocol/commit/b3f7fc088417384cf16f93c1025523b6d8008f7d

haeferer commented 7 years ago

I've seen this "eventPublisher.subscribe -> rabbitMq -> eventPublisher.publish" scenario in my bunyan fork (there I use the hack to log ALL EVENTS):

https://github.com/keatec/spark-server/tree/bunyan

I've seen the request and response events from spark-server in the logs.

So if you want to implement a filter to solve such cycle problems, maybe it's better to flag such "spark-server" events and set the filter to "everything except spark-server"...
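
Just to illustrate the idea (the isFromSparkServer flag is made up and does not exist today):

// Hypothetical: the server marks its own request/response traffic when publishing.
eventPublisher.publish({ ...eventData, isFromSparkServer: true });

// A catch-all subscriber can then ignore that traffic instead of mirroring it.
deviceServer._eventPublisher.subscribe('*', (ev: any) => {
  if (ev.isFromSparkServer) {
    return; // skip the server's own events, no cycle possible
  }
  logger.info({ data: ev.data, event: ev.name, name: ev.deviceID }, 'Incoming Event');
});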

Thanks for your help, great work!

AntonPuko commented 7 years ago

I'm not only talking about spark-server request/response events but about all events, so the "broadcasted" flag together with listenForBroadcastedEvents === false is exactly that kind of flag and filter.

Basically, EventProvider is just a simple abstraction over EventPublisher for users who don't need to know how EventPublisher and all its filters work.
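
Roughly it works like this; the snippet below is only a simplified sketch from memory (option names and nesting may differ from the actual source):

import type { Event } from '../src/types';

// Simplified sketch, not the actual spark-protocol source.
class EventProvider {
  _eventPublisher: Object;

  constructor(eventPublisher: Object) {
    this._eventPublisher = eventPublisher;
  }

  onNewEvent(eventHandler: (event: Event) => void) {
    // Subscribe to everything, but keep the broadcast filter on, so a
    // subscribe -> queue -> publish bridge never sees its own mirrored traffic.
    this._eventPublisher.subscribe('*', eventHandler, {
      filterOptions: { listenToBroadcastedEvents: false },
    });
  }
}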

haeferer commented 7 years ago

@AntonPuko Sorry if I ask again...

I looked into the EventPublisher / EventProvider concept from spark-protocol. Nice work.

To understand it better, please allow a question:

The publish method emits every event twice (independent of the event type):

setImmediate(() => {
  this._emitWithPrefix(eventData.name, event);
  this.emit('*', event);
});

In the case of publishAndListenForResponse there is only one listener installed (for the response). For such events this.emit('*', event); does not seem meaningful???

So if "publish" is called from "publishAndListenForResponse", the second emit could be suppressed (to save performance). Then these intercom events would not appear when I subscribe to '*', and there would also be no chance to build a cycle? See the sketch below.
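
Something along these lines is what I have in mind; the isInternal parameter and the small class around it are only a sketch of the suggestion, not the current spark-protocol code:

import EventEmitter from 'events';

class EventPublisherSketch extends EventEmitter {
  publish(eventData: Object, isInternal: boolean = false) {
    const event = { ...eventData, publishedAt: new Date() }; // placeholder event construction

    setImmediate(() => {
      this.emit(eventData.name, event); // stands in for _emitWithPrefix
      if (!isInternal) {
        // Only "real" events reach the '*' subscribers, so the intercom traffic
        // created by publishAndListenForResponse can never be mirrored into a cycle.
        this.emit('*', event);
      }
    });
  }

  publishAndListenForResponse(eventData: Object): Promise<Object> {
    return new Promise((resolve: (event: Object) => void) => {
      this.once(`${eventData.name}/response`, resolve); // placeholder response wiring
      this.publish(eventData, true); // suppress the second emit for internal traffic
    });
  }
}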

What do you think?

haeferer commented 7 years ago

@AntonPuko Also for my understanding: when an internal function uses "publishAndListenForResponse", the function calls this.subscribe():

this.subscribe(
  responseEventName,
  responseListener,
  {
    once: true,
    subscriptionTimeout: LISTEN_FOR_RESPONSE_TIMEOUT,
    timeoutHandler: (): void => reject(
      new Error(`Response timeout for event: ${eventData.name}`),
    ),
  },
);

But the return value (which is the subscriptionID) is not stored. So in this case the only way a subscription is removed from the _subscriptionsByID Map is a timeout. A successful publishAndListenForResponse does not remove the subscription at all, unless unsubscribeBySubscriberID is called.
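
Something like this is what I would expect; the unsubscribe(subscriptionID) call is only an assumption based on the existing unsubscribeBySubscriberID, I have not checked which cleanup method fits best:

// Sketch only: keep the subscription ID and clean up as soon as the response arrives,
// instead of relying on the timeout to remove the entry from _subscriptionsByID.
const subscriptionID = this.subscribe(
  responseEventName,
  (event) => {
    this.unsubscribe(subscriptionID); // assumed cleanup call on a successful response
    responseListener(event);
  },
  {
    once: true,
    subscriptionTimeout: LISTEN_FOR_RESPONSE_TIMEOUT,
    timeoutHandler: (): void => reject(
      new Error(`Response timeout for event: ${eventData.name}`),
    ),
  },
);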

I'm trying to understand the system as much as possible to make a correct implementation of an event, so sorry for stressing you...

AntonPuko commented 7 years ago

@haeferer I think the first one is a micro-optimization which shouldn't affect performance much. The second one is definitely a bug, thanks for pointing it out; I'll make a change.

haeferer commented 7 years ago

@AntonPuko I'm currently (right now) making a fork to patch eventProvider.js (in spark-protocol) (broadcasted: true). If you want, I can make PRs (separate ones) for this?