Refs #87 - perhaps there is enough direction and spec in place to support this now?
After reading that article, this seems like it would be very cool.
We are absolutely in the process of working on this. This is more of a roadmap issue than an immediate feature request. It's still unclear precisely what the best way to implement this is, and even the client-side JavaScript reference implementation is in a lot of flux. The discussion is ongoing, and we're excited to see what will happen here.
@benwilson512 It may be worth following what GitHub does with this. In their engineering blog post about launching a GraphQL endpoint they specifically mentioned that a subscription model was one of their hopes/requirements.
Maybe they'll help move the ball forward for us.
Well, to be clear, by "we're excited to see what will happen here" we don't mean we're going to wait to see what happens, or who will drive it forward. Phoenix (and, to speak more widely, OTP) is a particularly good fit for subscriptions (I'd argue it's a much better fit than, e.g., Node.js), and we've been busy laying some groundwork to really start building this.
Any traction on this or an update on the roadmap perhaps?
@benwilson512 is currently digging into this, given we have what we need to build on top of in v1.2. Expect a report on progress by the end of this next week.
It's still unclear precisely what the best way to implement this is, and even the client-side JavaScript reference implementation is in a lot of flux.
@benwilson512 As you consider this, it might be a good idea to keep in mind popular GraphQL clients that already ship with subscriptions support, like Apollo, for potential out-of-the-box support. Relatively frictionless interop with subscriptions would be wonderful.
@stemcc that should happen naturally, given they are already using Apollo Client as a GraphQL client. So I am pretty sure the subscriptions will target Apollo Client.
@benwilson512, could you give us a little update on your progress on this front? Thanks!
@benwilson512 Any update? Lack of support for subscriptions is the only reason I'm not using Absinthe!
Hey there.
I started actually writing code for this late last week after trying out a variety of approaches. I've had good success so far, and hope to have some demos next week.
That sounds great, thanks for all your work on the project.
@benwilson512 I'm starting to use absinthe and apollo on internal projects at work and would love to help with this. Let me know if there is any way I can provide assistance.
Sounds great, any updates in the past month?
Hey, I'm also very interested in having subscriptions and seeing how they can be integrated with Phoenix channels.
BTW, I just released a tiny Node package I made (apollo-phoenix-websocket) to run queries against Absinthe via Phoenix channels.
Hey folks sorry for the silence, December was a busy month. I have sort of a 75% working project that I'll hopefully be uploading soon. The main work has been around document management, and making decisions about in what process those documents are executed.
When an item is published to a subscription, are the documents run in the socket process? Are they run in the mutation process that created the item? How does this work in a distributed context? This kind of thing.
Right now document management is going well. I'm finalizing document execution, and then publication should be made trivial by Phoenix PubSub.
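For context, the publish half of that with Phoenix PubSub is roughly a one-liner. A sketch with purely illustrative names; how topics map to subscription fields is exactly what's still being worked out:

  # Broadcast the changed value on a topic; the subscription layer would pick
  # this up and run the stored documents against it.
  Phoenix.PubSub.broadcast(MyApp.PubSub, "comments:article:5", {:new_comment, comment})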
What's the current status of this? And what's the story with this package that was just published? https://github.com/vic/apollo-phoenix-websocket
Hey!
https://github.com/benwilson512/phoenix_chat_example has a working example of Absinthe subscriptions end to end. Subscriptions are essentially at functional alpha status at this point. I will likely close this issue shortly and replace it with some concrete steps that need to be taken before we can call subscriptions 1.0.
I don't know much about that project you linked. Just judging from their channel code, it basically just runs mutations and queries over websocket.
@benwilson512 thanks for the chat example - it's great to have something working end to end! One question I have is whether it's possible for absinthe to detect that a data item has changed externally to GraphQL, i.e. without a mutation occurring? Is there any provision for this case, or do you have any comments as to how/if it can be implemented? Thanks again!
@conor-mac-aoidh I think you should use the PubSub pattern. Phoenix Channels already use it, so you can call something like Endpoint.broadcast("graphql-topic", "update-data", %{data: data_to_update})
somewhere in your code when you change your data. To pass it into a GraphQL subscription you can intercept such outgoing messages:
defmodule MyApp.DataChannel do
  use Phoenix.Channel

  intercept ["update-data"]

  def handle_out("update-data", %{data: data}, socket) do
    # Assumes the client's subscription document was stored in the socket
    # assigns on join; the changed data is handed to it via the context.
    with {:ok, result} <- Absinthe.run(socket.assigns.subscription, YourSchema, context: %{subscription: data}) do
      push(socket, "graphql-subscription-message", result)
    end

    {:noreply, socket}
  end
end
@cybernetlab You're dead on conceptually, but from an implementation perspective I've gone a relatively different route.
To answer @conor-mac-aoidh's question first: there is an underlying pubsub mechanism, and I'll be providing a function shortly that will let you do something like Absinthe.Subscriptions.publish_to_field(:name_of_subscription_field, value).
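To sketch how that might be used: only the publish_to_field call above comes from Ben's comment; the mutation resolver and the MyApp.Comments module below are made up for illustration.

  # Hypothetical mutation resolver: after creating a comment, publish it to the
  # :comment subscription field so subscribed clients receive it.
  def create_comment(_parent, %{article_id: article_id, body: body}, _resolution) do
    {:ok, comment} = MyApp.Comments.create(article_id, body)
    Absinthe.Subscriptions.publish_to_field(:comment, comment)
    {:ok, comment}
  end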
@cybernetlab so in the version you've outlined, each client's channel maintains its own subscriptions, and when data is published to those subscriptions each channel independently runs the document. What if the document ends up talking to the database? Well, if you have 1000 people who are all connected with the same document, you're going to hit the database 1000 times just to get exactly the same data.
So instead the approach I've taken is to have a centralized store of documents, keyed by the subscription field that they're for. This gives me at least 2 major optimization possibilities:
1) I can track how many documents are in fact different. So if 1000 people all send in the same document, I can execute it just once and then push the results to 1000 people. I really only need to execute the number of UNIQUE documents that exist, not every document.
2) Included in the notion of uniqueness, though, is the GraphQL context. If certain fields are only visible when the current user is logged in, for example, then it becomes impossible to reuse the results of one document as the results of another. However, I can still get optimizations through batching. Right now batching only happens within a given document, but there's no reason this can't be extended to work on sets of documents. Thus even if you had 1000 unique {document, context} pairs where the document looked like:
subscription CommentsOnArticle {
  comment(articleId: 5) {
    body
    author { name }
  }
}
the author database lookups would get batched together such that you would do exactly 1 lookup instead of 1000.
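For reference, the per-document half of that batching already exists today via Absinthe's batch helper; the cross-document version would extend the same idea. A sketch, where MyApp.UserLoader and users_by_id/2 are hypothetical names:

  import Absinthe.Resolution.Helpers, only: [batch: 3]

  field :author, :user do
    resolve fn comment, _, _ ->
      # Collect author_ids from every comment resolved in this execution and
      # load them with a single call to MyApp.UserLoader.users_by_id/2.
      batch({MyApp.UserLoader, :users_by_id}, comment.author_id, fn users_by_id ->
        {:ok, Map.get(users_by_id, comment.author_id)}
      end)
    end
  end

  # MyApp.UserLoader.users_by_id(_, ids) receives all collected ids at once
  # and should return a map of id => user.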
Right now optimization #1 already exists in the implementation I've got so far. Hopefully #2 will come along shortly.
@benwilson512 Actually, the data is not retrieved inside the handle_out handler. It's passed in as the second argument and then formatted within subscription execution. So when I change something in the DB, I call Endpoint.broadcast(..., updated_data) once, and then the subscription is executed for each client.
Also, I only included a cut-down snippet; the full version is here.
Because I couldn't find enough documentation about implementing subscriptions, I created my own. In my version (see subscription.ex in the gist linked above), inside the schema I just inform Absinthe that I want to set up a subscription to some kind of documents. When I receive a subscription request from the client, I call Absinthe.run with the context %{subscribe: true}, which gives me a {:subscribe, Model, id | :any} tuple if the request was correct. After that, when the data changes, I call Absinthe.run again with %{subscription: data}, which gives me a real result with data that I can push to the client.
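Roughly, the flow described above looks like this; the :subscribe/:subscription context keys and the handling of the first result live in the custom subscription.ex from the gist, not in Absinthe itself.

  # 1. On an incoming subscription request, run the document in "subscribe" mode
  #    so the custom code can record what the client wants to watch.
  subscribe_result = Absinthe.run(document, MyApp.Schema, context: %{subscribe: true})

  # 2. Later, when the data changes, run the same document again with the
  #    changed data in the context and push the real result to the client.
  {:ok, payload} = Absinthe.run(document, MyApp.Schema, context: %{subscription: changed_data})
  push(socket, "graphql-subscription-message", payload)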
Maybe this approach is not optimal or well designed - it's just the solution I came up with, since I had no other well-documented design to follow.
Regards
@cybernetlab You're assuming that the payload of data contains literally everything in the subscription. What if the payload you're broadcasting was just %Message{body: "foo", author_id: 1} and the document wanted the author name, as in the earlier example?
It means that you can't safely use types like:
object :message do
  field :body, :string

  field :author, :user do
    # Resolver that hits the database for every message's author.
    resolve fn message, _, _ -> database_lookup_of_author(message) end
  end
end
If you're just using the subscription documents to format, then sure, you can do it in the client. Even so, you're doing a LOT of redundant work: if you have N people who sent in the same doc, you're doing the formatting N times instead of just once and fast-laning the JSON to the sockets.
I totally understand why this is your approach, though; it's definitely a sane one given the lack of other options so far. I'm just pointing out that the implementation I've been working on provides some innovations around query execution that are actually a step ahead of even something like apollo-server's.
@benwilson512 Thanks for your answers - I agree that my version is not well designed, but it's enough for me at the moment since I only have simple queries. I'll be happy to switch to your implementation when it's finished.
One other question: how do you handle record deletions? I also can't find any docs about a standard for such a payload - how do you form a subscription message to push to the client to inform it about deletions? (I use apollo-client now.)
That's a great question, quite frankly I don't know. Anyone have an answer there?
graphql subscriptions don't have to handle everything ;-)
About the deletion of records, I think you could go two ways: either have a deletedRecords subscription that returns the id of the deleted record, which the client then uses to delete the record from its store; or have an updatedRecords subscription with a boolean (meta)field called deleted (the client has to query that field, of course) that the client uses to determine whether the record should be removed. The updatedRecords subscription should basically also be able to handle new records (an unknown id arriving) and, of course, updates - so maybe it should just be called recordSubscription. I guess it all depends on preference? I don't think the framework has to handle this, as mentioned by @dustinfarris.
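As a sketch of that second option in Absinthe's schema DSL (all the type and field names here are made up for illustration):

  # A record-update payload the client can inspect: when deleted is true, the
  # client removes the record from its store instead of merging the update.
  object :record_update do
    field :id, non_null(:id)
    field :deleted, :boolean
    field :record, :record
  end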
I agree with @dustinfarris and @Logisig that things like record deletions should be implementation-specific. My question was about absinthe as the server-side framework and, for example, apollo as the client-side framework: if a framework provides approaches for monitoring the creation and updating of records, then I'd expect it to also provide an approach for monitoring record deletions.
Now there's an RFC for the Subscription Spec: https://github.com/facebook/graphql/pull/267.
More about it: https://dev-blog.apollodata.com/the-next-step-for-realtime-data-in-graphql-b564b72eb07b#.ohg5niu25
Is there a way to connect channels with a ws client atm? https://github.com/apollographql/subscriptions-transport-ws
I used this and it works: https://github.com/vic/apollo-phoenix-websocket (I've tested queries and mutations).
Do note that while apollo-phoenix-websocket will indeed run GraphQL queries and mutations, it looks like it will not run subscriptions. Nonetheless, that project is super handy for solving the JS side of the equation. I'm very excited to integrate it with the proof of concept of subscriptions we did in https://github.com/benwilson512/phoenix_chat_example/tree/absinthe
Thanks. Do you have a timeline for this? I'd be interested in helping out. Are there some clear next steps?
I've just added phoenix to my react native app and I'm going to be updating the apollo store manually whenever an update comes in from the channel. New functionality made this possible in apollo 1.0. I'll let you know how it goes.
ElixirConf EU is in 3 weeks. In that time we need to actually release 1.3 final, work on our talk, and finish a book chapter. There is unlikely to be public movement on this prior to ElixirConf EU, however it is our primary priority for this library afterward.
Ah cool. Wish you the best of luck getting all that done!
@benwilson512 What book? Is there a prerelease version?
@BryanJBryce Ben and I are signed with Pragmatic Bookshelf for a book on Elixir + GraphQL w/ Absinthe. We've been writing it for a few months, and hopefully it will be available as a Beta early/mid summer. It will be announced officially (with cover art and all that jazz) very soon.
@bruce will your book cover relay-specific conventions at all?
@cayblood Yes, especially around the facilities we've built into absinthe_relay (we're taking pains to avoid going too far into JS territory, however; the book is very focused on the backend, so even when we talk about the frontend, we're talking about it in the context of supporting it from the backend. Too much to teach in one book!)
For what it's worth, Relay Modern adds more fine grained control over editing the Relay environment. It may be possible to run the standard phoenix channels alongside Relay with the channel callbacks making updates directly into the store.
Ideally, using the provided subscriptions API would be preferable, but in the interim I'm pretty sure you could get something working using an alternative approach.
Also really looking forward to this feature!
Update: I have a copy of https://github.com/apollographql/GitHunt-React working with apollo / subscriptions!
I'm extracting the code into absinthe_phoenix this week, so there should ideally be a basic guide up at the end of the week for how to get going w/ apollo subscriptions.
You can track the overall progress of this feature here: https://github.com/orgs/absinthe-graphql/projects/2 EDIT: organization-level projects can't be public; I'll create an issue on absinthe and cross-link everything here shortly.
The link in this comment isn't working for me. Anyone else?
Excited to check out the progress on this!
Hey @ndarilek I'm seeing you replied via email, I've crossed out the link via an edit, apparently organization projects aren't able to be public. I will update with some new issues shortly.
Ah, never thought to check back in. Thanks for the update.
With the release of 1.4.0-beta.1, subscriptions are officially out in beta! I'm gonna close this issue, but feel free to create new ones if you run into problems.
Getting started: https://github.com/absinthe-graphql/absinthe_phoenix
Your schema: https://hexdocs.pm/absinthe/1.4.0-beta.1/Absinthe.Schema.html#subscription/2
Full guides are coming.
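For anyone landing here later, a minimal schema-side sketch based on those docs; the field, argument, and topic names are illustrative, and details may shift while 1.4 is in beta.

  subscription do
    field :comment_added, :comment do
      arg :article_id, non_null(:id)

      # Route published comments to a topic derived from the article id.
      config fn args, _info ->
        {:ok, topic: args.article_id}
      end
    end
  end

  # Elsewhere (e.g. after a successful mutation), publish to that topic:
  # Absinthe.Subscription.publish(MyAppWeb.Endpoint, comment, comment_added: comment.article_id)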
Using some pub/sub mechanism (potentially built with GenStage?), it would be amazing to have subscriptions that publish back to clients after a mutation, enabling realtime functionality. See https://medium.com/apollo-stack/graphql-subscriptions-in-apollo-client-9a2457f015fb for the Apollo stack's approach to realtime GraphQL. If combined with WebSockets/Phoenix channels, this would be a killer stack.