Open: machi1990 opened this issue 4 years ago
/cc @craicoverflow, @wtrocki Automatically generated comment to notify maintainers
This makes a lot of sense, thank you for the issue! Do you see `subXXX` overriding `pubXXX` if `pubXXX` is false? As `subCreate` on its own would not have any use.
I see them working independently.
Taking:

```graphql
"""
@model(subCreate: true, create: false, update: false, pubCreate: false ....)
"""
type Note {
  id: ID!
}
```
In this schema, `subCreate` would create:

```graphql
type Subscription {
  newNote(filter: NoteSubscriptionFilter): Note!
}
```
and the corresponding resolver, subscribing to the specific Note creation queue. How events get published to that queue would be up to the publisher: another Graphback process, another GraphQLCrud process, etc. (so long as they conform to the same event contract).
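For context, here is a minimal sketch (not the code Graphback actually generates) of what such a subscribe-only resolver could look like, assuming the graphql-subscriptions 2.x API and an assumed trigger name `NOTE_CREATED` standing in for the event contract:

```typescript
// Minimal sketch only; not the resolver Graphback generates.
// Assumes graphql-subscriptions 2.x and a trigger name ("NOTE_CREATED")
// that both processes agree on as part of the shared event contract.
import { PubSub } from 'graphql-subscriptions';

const pubSub = new PubSub();

export const resolvers = {
  Subscription: {
    newNote: {
      // This process only listens. Publishing to "NOTE_CREATED" is left to the
      // other process (another Graphback/GraphQLCrud service) honouring the
      // same contract.
      subscribe: () => pubSub.asyncIterator('NOTE_CREATED')
    }
  }
};
```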
/cc @wtrocki
We also need the opposite situation: having subscription handlers available but not publishing any events on CREATE, UPDATE and DELETE.
A workaround exists for now: define the model and disable all CRUD operations on it apart from subscriptions. Then we use the Kafka PubSub to listen to events (topics are configurable and documented), and it should work with Debezium.
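Purely as an illustration of that workaround (not official Graphback wiring), a hedged sketch of a subscribe-only process that consumes Debezium change events from Kafka with kafkajs (v2 `subscribe` signature) and forwards them to the local PubSub; the broker address, topic name and Debezium envelope shape (`payload.after`) are assumptions:

```typescript
// Sketch of the workaround only; broker address, topic name and the Debezium
// envelope shape (payload.after) are assumptions for illustration.
import { Kafka } from 'kafkajs';
import { PubSub } from 'graphql-subscriptions';

const pubSub = new PubSub();
const kafka = new Kafka({ clientId: 'graphback-subscriber', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'note-subscriptions' });

export async function bridgeDebeziumToPubSub(): Promise<void> {
  await consumer.connect();
  // Debezium topics are usually named <server>.<schema>.<table>; assumed here.
  await consumer.subscribe({ topics: ['dbserver1.public.note'], fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const change = JSON.parse(message.value?.toString() ?? '{}');
      // Forward the row state after the change to the subscription resolvers.
      await pubSub.publish('NOTE_CREATED', { newNote: change.payload?.after });
    }
  });
}
```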
We need to build a sample template to demo this better.
Also, events (topics) are currently just an internal part of Graphback; if we move to an event streaming solution, we will need an extra capability to specify topics directly in the config or schema.
Moving to a generic streaming platform will enable us to process changes using Debezium directly from the database or from external event sources.
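To illustrate that missing capability, a purely hypothetical shape for specifying the topic in the schema (a `topic` argument on `@model` does not exist in Graphback today):

```typescript
// Hypothetical only: a "topic" argument on @model does not exist in Graphback;
// this just illustrates what specifying topics in the schema could look like.
import gql from 'graphql-tag';

export const typeDefs = gql`
  """
  @model(subCreate: true, pubCreate: false, topic: "dbserver1.public.note")
  """
  type Note {
    id: ID!
  }
`;
```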
I trelloized this (added it to Trello): 7-datasync-kafka-debezium-enabled-event-streaming-approach
> We also need the opposite situation: having subscription handlers available but not publishing any events on CREATE, UPDATE and DELETE.
Yes, this is described in the issue description.
> A workaround exists for now: define the model and disable all CRUD operations on it apart from subscriptions. Then we use the Kafka PubSub to listen to events (topics are configurable and documented), and it should work with Debezium.
Indeed.
> We need to build a sample template to demo this better.
> Also, events (topics) are currently just an internal part of Graphback; if we move to an event streaming solution, we will need an extra capability to specify topics directly in the config or schema.
> Moving to a generic streaming platform will enable us to process changes using Debezium directly from the database or from external event sources.
+1 on this, plus the ability to specify any pre-processing operation (e.g. payload transformation) that needs to be done before the received event is sent to the subscribing client.
For transformation we have a separate feature for content mapping.
> For transformation we have a separate feature for content mapping.
Nice. Does it apply in this context too? E.g. suppose that the source of the events is Debezium (which pushes events to a Kafka topic) and that on the Graphback side we are subscribing to this topic. What would be desirable is not merely the subscription, but the ability to supply some sort of transformation function to be applied to the event before it is sent to the client.
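To make the idea concrete, a hypothetical sketch of such a transformation hook, expressed here as a `resolve` function on the subscription field that reshapes the raw row before it reaches the client (this is not an existing Graphback API, and the `NoteRow`/`createdAt` names are assumptions):

```typescript
// Hypothetical transformation step; not an existing Graphback API.
// Assumes the Kafka bridge publishes the raw database row under the
// NOTE_CREATED trigger (graphql-subscriptions 2.x API).
import { PubSub } from 'graphql-subscriptions';

const pubSub = new PubSub();

// Shape of the raw change event as stored in the database (assumed).
interface NoteRow {
  id: string;
  title: string;
  created_at: number;
}

export const subscriptionResolvers = {
  Subscription: {
    newNote: {
      subscribe: () => pubSub.asyncIterator('NOTE_CREATED'),
      // The transformation function: reshape the raw row into the client-facing
      // Note shape before the event is delivered to the subscriber.
      resolve: (row: NoteRow) => ({
        id: row.id,
        title: row.title,
        createdAt: new Date(row.created_at).toISOString()
      })
    }
  }
};
```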
Considering this would be a breaking change and a new feature, do we see this happening for a 0.17.x release?
> What would be desirable is not merely the subscription, but the ability to supply some sort of transformation function to be applied to the event before it is sent to the client.
Yep. Since this will need to be applied to queries/mutations as well as subscriptions, we can reuse this logic.
> Considering this would be a breaking change and a new feature, do we see this happening for a 0.17.x release?
Post 1.0 release. https://trello.com/c/1lH9SqKu/7-datasync-kafka-debezium-enabled-event-streaming-approach
The approach will be to do a POC (same as datasync) without touching core or affecting Graphback releases.
This appears to be resolved, closing (reopen if I am wrong)
This solves only a part of it, but there is no ability to not publish changes from within the application.
You can play with this repository, especially this commit: https://github.com/aerogear/datasync-example/blob/245b324a08f6ad72ff5ed728273e9700f8b69952.
This line https://github.com/aerogear/datasync-example/blob/245b324a08f6ad72ff5ed728273e9700f8b69952/graphback-debezium-integeration/src/kafka-sub.ts#L30 should never be called.
In a production-ready scenario, streaming platforms will never support edits (because the data is a stream), so we will always have two PubSub engines: one for the classical pub/sub and one built specifically for models that work in streaming-only mode. If we are going to make that an officially supported scenario, separate annotations might be needed.
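A rough, purely hypothetical sketch of what running two PubSub engines side by side could mean; the per-model routing and the names below are not an existing Graphback option:

```typescript
// Purely hypothetical illustration of two PubSub engines side by side:
// a classical engine for normal CRUD models and a streaming-only engine
// (e.g. Kafka-backed, fed by Debezium) for stream-sourced models.
import { PubSub, PubSubEngine } from 'graphql-subscriptions';

const classicalPubSub: PubSubEngine = new PubSub();
const streamingPubSub: PubSubEngine = new PubSub(); // stand-in for a Kafka-backed engine

// Hypothetical per-model routing: "Note" is streaming-only, everything else classical.
export const engineForModel = (modelName: string): PubSubEngine =>
  modelName === 'Note' ? streamingPubSub : classicalPubSub;
```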
> ... If we are going to make that an officially supported scenario, separate annotations might be needed.
I think we should support this use case by splitting the `subXXX` annotation key into two:

- `pubXXX`: once this is activated, we allow publishing of the changes.
- `subXXX`: once this is activated, we create the corresponding Subscription type in the schema and subscribe to changes only (no publishing).

The current situation is that `subXXX` is responsible for both publishing and subscriptions.
Yep. This is how our competition seems to be doing subscriptions at the moment.
**Is your feature request related to a problem? Please describe.**
Being able to publish (over an external pub/sub queue, e.g. a Kafka topic) in one Graphback process and letting the subscription be handled by another, completely lightweight Graphback process (or processes) dedicated to subscriptions only.
**Describe the solution you'd like**
See the `subXXX` knobs in https://graphback.dev/docs/next/model/annotations#arguments

Having a fine-grained configuration of pub/sub knobs. Right now we have `subCreate`, `subDelete`, `subUpdate`, which handle both publishing and subscribing, without being able to opt in to one or the other.

Essentially, what I would like is to split the `subXXX` knobs into:

- `subXXX`: generate queries and resolvers for the subscription
- `pubXXX`: handle the publish without generating subscription queries/resolvers

This will enable more lightweight processes. The two processes will have to have a strict "event" contract between them to smoothen the communication.
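As an illustration of what that contract might contain, a hypothetical shared event type that both the publishing and the subscribing process would agree on (the names and fields are illustrative, not a Graphback spec):

```typescript
// Hypothetical shared event contract between the publishing process and the
// subscription-only process; names and fields are illustrative only.
export interface NoteCreatedEvent {
  topic: 'NOTE_CREATED';   // agreed topic/trigger name
  payload: {
    id: string;            // matches the Note model's id
  };
  emittedAt: string;       // ISO timestamp set by the publisher
}
```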
**Describe alternatives you've considered**
This can still be achieved with the current version, but it is not as fine-tuned as I would have hoped.