ashilen opened this issue 4 years ago (status: Open)
I've noticed this as well; I had to hack my way around it with distinct().
Hey @ashilen! You're right, that is the current behavior with the global stream setup. In hindsight that was a poor design decision.
When you call the subscribe method from inside of a GraphQL resolver, it will do the following:

- create a new event stream that is unique to the subscribing client
- subscribe that stream to the group name you pass in, so that events sent to the group are pushed onto it

Returning the created stream from the GraphQL resolver allows you to use rx operators to filter / map events that get sent to that group before they are returned to the consumer to send to the client.

This way, whenever you trigger a subscription using trigger_subscription, the following happens:

- trigger_subscription sends an event to the specified group name
- the event is pushed onto each client stream subscribed to that group and run through the rx logic present in the GraphQL consumer before being sent to the client

So under this new model, a subscription resolver will create a unique stream for each actively subscribed client, and each client stream will receive 1 event object per event that is broadcast to the specified group.
This allows you to further reduce the number of messages each consumer is required to handle by specifying unique group names based on GraphQL arguments when calling subscribe (see the new Model Updated docs for an example of a common use case for this).
The goal of this new design is to try to be as flexible as the pub/sub systems used by JS GraphQL Subscriptions implementations, while still allowing you to use the power of rx in your resolvers.
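For illustration only, a resolver under that design might look roughly like the sketch below. Everything here is an assumption based on the description above, not the library's confirmed API: the import path, the subscribe / trigger_subscription signatures, the comment_added field, and the comments.<post_id> group naming are all hypothetical. The only firm points are that subscribe joins a group and returns an rx stream, and that trigger_subscription broadcasts to that group.

import graphene

# Hypothetical import path and helper names; see the caveats above.
from graphene_subscriptions import subscribe, trigger_subscription


class Subscription(graphene.ObjectType):
    comment_added = graphene.String(post_id=graphene.ID(required=True))

    def resolve_comment_added(root, info, post_id):
        # A unique stream per client, scoped to a per-post group, with rx
        # operators shaping events before they are sent to the client.
        return subscribe(f"comments.{post_id}").map(lambda event: event["text"])


# Elsewhere in the app, broadcast an event to everyone subscribed to that group:
def notify_comment_added(post_id, text):
    trigger_subscription(f"comments.{post_id}", {"text": text})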
Does that answer your question @ashilen?
@Just-Drue I'd be very interested to hear how you used distinct()
to solve this problem :)
@jaydenwindle hash the current object's properties and call it within the resolver:
# On the Django model: define a stable key so that instances with the same
# field values compare equal and hash identically.
def __key(self) -> tuple:
    return self.a, self.b, self.c

def __hash__(self) -> int:
    return hash(self.__key())

def __eq__(self, other: 'Model') -> bool:
    if isinstance(other, Model):
        return self.__key() == other.__key()
    return NotImplemented

# In the subscription resolver (root is the shared event stream; UPDATED is
# imported from graphene_subscriptions.events): keep only update events for
# this model and drop any event whose instance hash has already been seen.
return root.filter(
    lambda event:
        event.operation == UPDATED and
        isinstance(event.instance, Model)
).map(lambda event: event.instance).distinct(lambda instance: hash(instance))
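As a quick standalone sanity check of that trick (assuming the RxPY 1.x fluent API the snippet above appears to use; this toy example is mine, not part of the library), distinct() with a key selector lets only the first item with a given key through:

from rx import Observable

# Two payloads that hash the same, then a different one; only the first
# occurrence of each distinct key reaches the subscriber.
Observable.from_(["same", "same", "different"]) \
    .distinct(lambda item: hash(item)) \
    .subscribe(lambda item: print("received:", item))
# prints: received: same
#         received: different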
Hi Jayden. I see that #15 already raises this question and that you have a related PR already open. I just want to confirm that the following behavior is expected, and that I'm not overlooking something:

- 2 GraphqlSubscriptionConsumer instances exist.
- on_next is called on the shared stream, and 4 events are pushed on to the stream.

This seems like a bug to me, but since your reply to #15 (specifically to this point: "Aside from that, wouldn't stream_fired be called on the same event for each open websocket and cause the event to be delivered extra times to each consumer?") doesn't explicitly address the issue, I'm not certain I'm not missing something.

Moreover, in your PR (correct me if I'm misunderstanding), it seems like, since groups share a stream, resolvers subscribed to the same group will still receive 1 * (n open consumers publishing to the same group) event objects for each actual event. Am I missing something?
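For concreteness, here is a toy sketch of how a shared stream multiplies deliveries when more than one consumer pushes the same event. It uses an RxPY 1.x Subject to stand in for the shared stream and is illustrative only, not graphene-subscriptions code:

from rx.subjects import Subject

shared_stream = Subject()  # stands in for the module-global event stream

# Two open consumers, each with a resolver subscribed to the shared stream.
shared_stream.subscribe(lambda event: print("client 1 received:", event))
shared_stream.subscribe(lambda event: print("client 2 received:", event))

# A single model event: each consumer's signal handler pushes it onto the
# shared stream, so on_next is called once per open consumer...
shared_stream.on_next("model updated")
shared_stream.on_next("model updated")

# ...and every subscribed resolver sees every push, so each client receives
# the event twice: 4 deliveries for 1 actual event.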
Thank you --