Closed hartti closed 2 years ago
Thanks @hartti - what cache policy do you typically use in your queries? It's been noted that the kind of implementation you've got in the extension wouldn't work for a cache policy of .returnCacheDataAndFetch, because the client first returns the cached data (if present) and then returns data from the network call, which would crash on the second call; see Apple's documentation on CheckedContinuation. I'm not sure yet of the solution for making that cache policy work correctly.
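To make the failure mode concrete, here is a minimal, self-contained sketch (not Apollo code; fetchTwice is a hypothetical stand-in for a .returnCacheDataAndFetch request) showing why a once-only CheckedContinuation cannot wrap a callback that fires twice, and the guard a caller ends up needing:

```swift
import Foundation

// Stand-in for a .returnCacheDataAndFetch request: the completion handler
// fires once for the cached value and again for the network value.
func fetchTwice(_ completion: @escaping (String) -> Void) {
    completion("cached value")
    completion("network value")
}

// A naive wrapper that resumed the continuation on every callback would
// trap at runtime with "SWIFT TASK CONTINUATION MISUSE". A guarded wrapper
// has to drop every delivery after the first:
func guardedFirstValue() async -> String {
    final class Flag { var resumed = false }
    let flag = Flag()
    return await withCheckedContinuation { continuation in
        fetchTwice { value in
            guard !flag.resumed else { return } // discards "network value"
            flag.resumed = true
            continuation.resume(returning: value)
        }
    }
}
```

Note the trade-off: the guard avoids the crash, but the network result is silently lost — which is why the discussion below moves toward stream-based APIs.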
@calvincestari, ouch. Thanks for pointing this out! I am using the default cache policy (which to my understanding is .returnCacheDataAndFetch). I have not yet encountered this issue, but probably will later, when the app gets into the hands of real users and the API gets some real use. I think I will revert back to using completion handlers and sort out the UI update problems through DispatchQueue.main. Probably also better to close this feature request, at least for now (until a solution becomes available), otherwise others might end up making the same mistake as I did.
I realise this issue was closed but I think the multiple callbacks situation could be resolved by using an AsyncThrowingStream.
I had a shot at putting together an example async fetch using this method. It seems to be working well, but I may well have overlooked something, so feedback would be very welcome.
Hopefully the below will be useful to others looking to use async/await in this context but with a bit less danger!
extension ApolloClient {
  public func fetch<Query: GraphQLQuery>(
    query: Query,
    cachePolicy: CachePolicy = .default,
    contextIdentifier: UUID? = nil,
    queue: DispatchQueue = .main
  ) -> AsyncThrowingStream<GraphQLResult<Query.Data>, Error> {
    AsyncThrowingStream { continuation in
      let request = fetch(
        query: query,
        cachePolicy: cachePolicy,
        contextIdentifier: contextIdentifier,
        queue: queue
      ) { response in
        switch response {
        case .success(let result):
          continuation.yield(result)
          // Only finish the stream once no further callbacks are expected
          // for this cache policy.
          if result.isFinalForCachePolicy(cachePolicy) {
            continuation.finish()
          }
        case .failure(let error):
          continuation.finish(throwing: error)
        }
      }
      // Cancel the in-flight request if the consumer stops iterating.
      continuation.onTermination = { @Sendable _ in request.cancel() }
    }
  }
}
extension GraphQLResult {
  func isFinalForCachePolicy(_ cachePolicy: CachePolicy) -> Bool {
    switch cachePolicy {
    case .returnCacheDataElseFetch:
      return true
    case .fetchIgnoringCacheData, .fetchIgnoringCacheCompletely:
      return source == .server
    case .returnCacheDataDontFetch:
      return source == .cache
    case .returnCacheDataAndFetch:
      return source == .server
    }
  }
}
And then from an async context you would call it something like this...
let results = Network.shared.apollo.fetch(query: MyQuery())
do {
  for try await result in results {
    print(result.data?.id)
  }
} catch {
  debugPrint(error)
}
I believe this should fully support cancellation, multiple results and error handling but I've not tested it extensively yet.
Note: The isFinalForCachePolicy extension is a bit of a hack, but I couldn't see an easy way to know whether more results were possible or not, so this was the best proxy I could find.
Thanks @andykent !
Thanks for the input @andykent! We are planning a pretty hefty rewrite of the networking layer for later this year, so we think it's best for us to hold off on formally supporting async/await until then, since we can really restructure the internals of the networking API to best support async/await performantly at that time.
In the meantime, I think this is a great reference for people who want to create their own wrapper!
Thanks @AnthonyMDev, yes, I 100% agree this shouldn't make it into the project until the networking layer is rewritten to support it natively. This is a hack at best.
As you say I just wanted to leave this here as a reference for others that might find it useful in the interim.
Additionally, I haven't played with it but AsyncChannel might be another good future option worth investigating.
@andykent Would love to see a perform extension if you have it. This helped me out while we wait for the networking update.
@afgarcia86 Perform is a lot simpler I think. I am using something like the below and it seems to be working fine:
extension ApolloClient {
  func perform<Mutation: GraphQLMutation>(mutation: Mutation) async throws
    -> GraphQLResult<Mutation.Data>
  {
    try await withCheckedThrowingContinuation { continuation in
      _ = perform(mutation: mutation) { response in
        continuation.resume(with: response)
      }
    }
  }
}
@andykent thanks for getting back to me. It works great! Usage example for others:
do {
  let result = try await Network.shared.apollo.perform(mutation: MyMutation())
  print(result.data?.id)
} catch {
  debugPrint(error)
}
hey guys, just wanted to share a cancellable version inspired by this forum thread:
extension ApolloClient {
  @discardableResult
  public func fetch<Query: GraphQLQuery>(
    query: Query,
    cachePolicy: CachePolicy = .default,
    contextIdentifier: UUID? = nil,
    queue: DispatchQueue = .main
  ) async throws -> GraphQLResult<Query.Data> {
    return try await withTaskCancellationContinuation { continuation in
      return self.fetch(
        query: query,
        cachePolicy: cachePolicy,
        contextIdentifier: contextIdentifier,
        queue: queue
      ) { result in
        continuation.resume(returning: result)
      }
    }
  }

  @discardableResult
  public func perform<Mutation: GraphQLMutation>(
    mutation: Mutation,
    publishResultToStore: Bool = true,
    queue: DispatchQueue = .main
  ) async throws -> GraphQLResult<Mutation.Data> {
    return try await withTaskCancellationContinuation { continuation in
      return self.perform(
        mutation: mutation,
        publishResultToStore: publishResultToStore,
        queue: queue
      ) { result in
        continuation.resume(returning: result)
      }
    }
  }
}
extension ApolloClient {
  private func withTaskCancellationContinuation<T>(
    _ body: (CheckedContinuation<Result<GraphQLResult<T>, Error>, Never>) -> Apollo.Cancellable
  ) async throws -> GraphQLResult<T> {
    let cancelState = makeState()
    let result: Result<GraphQLResult<T>, Error> = await withTaskCancellationHandler {
      return await withCheckedContinuation { continuation in
        let task = body(continuation)
        activate(state: cancelState, task: task)
      }
    } onCancel: {
      cancel(state: cancelState)
    }
    switch result {
    case .success(let data):
      return data
    case .failure(let error):
      throw error
    }
  }

  private func makeState() -> ManagedBuffer<(isCancelled: Bool, task: Apollo.Cancellable?), os_unfair_lock> {
    ManagedBuffer<(isCancelled: Bool, task: Apollo.Cancellable?), os_unfair_lock>.create(minimumCapacity: 1) { buffer in
      buffer.withUnsafeMutablePointerToElements { $0.initialize(to: os_unfair_lock()) }
      return (isCancelled: false, task: nil)
    }
  }

  private func cancel(state: ManagedBuffer<(isCancelled: Bool, task: Apollo.Cancellable?), os_unfair_lock>) {
    state.withUnsafeMutablePointers { state, lock in
      os_unfair_lock_lock(lock)
      let task = state.pointee.task
      state.pointee = (isCancelled: true, task: nil)
      os_unfair_lock_unlock(lock)
      task?.cancel()
    }
  }

  private func activate(state: ManagedBuffer<(isCancelled: Bool, task: Apollo.Cancellable?), os_unfair_lock>, task: Apollo.Cancellable) {
    state.withUnsafeMutablePointers { state, lock in
      os_unfair_lock_lock(lock)
      if state.pointee.task != nil {
        fatalError("Cannot activate twice")
      }
      if state.pointee.isCancelled {
        os_unfair_lock_unlock(lock)
        task.cancel()
      } else {
        state.pointee = (isCancelled: false, task: task)
        os_unfair_lock_unlock(lock)
      }
    }
  }
}
Hey there! Could anyone please share their experience with async/await wrappers around Apollo Client? For example, the one from @andykent? Is it safe, or have you experienced some weird bugs? With our current codebase leveraging Swift Concurrency, I'd love to keep it and not fall back to completion handlers again. According to the Apollo Roadmap, the support should eventually come, but no one knows when.
@ondrejkorol I totally appreciate that I am only a datapoint of 1 still but we are using this code in production and haven't had any issues with it. I can't vouch 100% for its correctness but we haven't seen any issues at all so far.
My only comment is that whilst the AsyncSequence works well it is sometimes annoying to use when you just want a one time piece of data and not continuous updates. We ended up adding a couple of (not very pretty) helper functions to fill this gap, for example:
public extension ApolloClient {
  func fetchWithOfflineFallback<Query: GraphQLQuery>(query: Query) async
    -> GraphQLResult<Query.Data>?
  {
    do {
      let results = fetch(query: query, cachePolicy: .fetchIgnoringCacheData)
      return try await results.first(where: { _ in true })
    } catch {
      let results = fetch(query: query, cachePolicy: .returnCacheDataDontFetch)
      return try? await results.first(where: { _ in true })
    }
  }
}
Hi Andy! Thanks so much for your fast response, and for sharing your learnings and experience! That was exactly my thought - it seems like this should work nicely, although it can be a little tedious to work with AsyncSequence when you (most of the time) expect a single response.
Anyway, I feel like for our use case, doing mostly straightforward queries/mutations, this should work! Although we need to be careful with the right cache policy.
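The "single response" case being discussed here boils down to taking only the first element of the stream. A self-contained sketch of that pattern (plain AsyncThrowingStream, no Apollo types) using the same first(where:) trick as the fetchWithOfflineFallback helper above:

```swift
import Foundation

// A stream that yields a cached value and then a network value, the shape a
// .returnCacheDataAndFetch query produces in the wrappers above.
func cacheThenNetworkStream() -> AsyncThrowingStream<String, Error> {
    AsyncThrowingStream { continuation in
        continuation.yield("cached")
        continuation.yield("network")
        continuation.finish()
    }
}

// When only a single response is expected, first(where:) awaits just the
// first element and stops consuming the rest of the stream.
func singleResponse() async throws -> String? {
    try await cacheThenNetworkStream().first(where: { _ in true })
}
```

The `first(where: { _ in true })` idiom is just "give me the first element, whatever it is"; with a .returnCacheDataAndFetch-style stream that means the cached value wins and the later network value is ignored, so the cache policy still matters.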
According to Apollo Roadmap, the support should eventually come, but no one knows when.
Version 2.0 of Apollo iOS is when Swift Concurrency will be supported. I am wrapping up a few things now, then moving on to supporting @defer in Apollo iOS. After that, work on 2.0 will begin.
@marcosmko Does your solution work for you? I implemented a very similar solution, but it just leaks the continuation since Apollo.Cancellable.cancel() doesn't call the completion handler. #1938 #1836
@marcosmko I also found the thread in the Swift forums, and was inspired to try this: https://forums.swift.org/t/how-to-use-withtaskcancellationhandler-properly/54341/44.
Which in theory, can be used with Apollo like this:
extension ApolloClientProtocol {
  public func fetch<Query: GraphQLQuery>(
    query: Query,
    cachePolicy: CachePolicy = .default,
    contextIdentifier: UUID? = nil
  ) async throws -> GraphQLResult<Query.Data> {
    return try await withCancellableCheckedThrowingContinuation { continuation in
      let cancellable = self.fetch(query: query, cachePolicy: cachePolicy, contextIdentifier: contextIdentifier, queue: .main) { result in
        switch result {
        case .success(let graphQLResult):
          continuation.resume(returning: graphQLResult)
        case .failure(let error):
          continuation.resume(throwing: error)
        }
      }
      return cancellable.cancel
    }
  }
}
@fruitcoder It doesn't call the completion block on cancel, but the way that is handled (see the Swift forum link) is with:
continuation.resume(throwing: CancellationError())
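A self-contained sketch of that pattern (no Apollo types; SilentRequest is a hypothetical stand-in for a cancellable whose cancel() never fires the completion handler): the onCancel hook resumes the continuation itself with CancellationError(), and a lock-guarded box ensures whichever side gets there first wins, so the continuation is neither leaked nor resumed twice:

```swift
import Foundation

// Stand-in for an Apollo.Cancellable whose cancel() does NOT invoke the
// completion handler (the behavior described in the issues above).
final class SilentRequest {
    var completion: ((Result<String, Error>) -> Void)?
    func cancel() { completion = nil } // silently drops the callback
}

// Holds the continuation and guarantees it is resumed exactly once, no
// matter whether the completion handler or onCancel fires first.
final class ContinuationBox<T>: @unchecked Sendable {
    private let lock = NSLock()
    private var continuation: CheckedContinuation<T, Error>?
    func store(_ c: CheckedContinuation<T, Error>) {
        lock.lock(); continuation = c; lock.unlock()
    }
    func resume(with result: Result<T, Error>) {
        lock.lock(); let c = continuation; continuation = nil; lock.unlock()
        c?.resume(with: result) // a second caller finds nil and does nothing
    }
}

func fetchValue(from request: SilentRequest) async throws -> String {
    let box = ContinuationBox<String>()
    return try await withTaskCancellationHandler {
        try await withCheckedThrowingContinuation { continuation in
            box.store(continuation)
            request.completion = { box.resume(with: $0) }
        }
    } onCancel: {
        request.cancel()
        // cancel() never calls back, so resume the continuation ourselves.
        box.resume(with: .failure(CancellationError()))
    }
}
```

Cancellation that races ahead of store() still needs the isCancelled flag handled by the ManagedBuffer version earlier in this thread; this sketch omits that for brevity.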
This is what we have been using, and it seems to be working fine:
extension ApolloClient {
  public func fetch<Query: GraphQLQuery>(
    query: Query,
    cachePolicy: CachePolicy = .default,
    contextIdentifier: UUID? = nil,
    queue: DispatchQueue = .main
  ) -> AsyncThrowingStream<GraphQLResult<Query.Data>, Error> {
    AsyncThrowingStream { continuation in
      let request = fetch(
        query: query,
        cachePolicy: cachePolicy,
        contextIdentifier: contextIdentifier,
        queue: queue
      ) { response in
        switch response {
        case .success(let result):
          continuation.yield(result)
          if result.isFinalForCachePolicy(cachePolicy) {
            continuation.finish()
          }
        case .failure(let error):
          continuation.finish(throwing: error)
        }
      }
      continuation.onTermination = { @Sendable _ in
        request.cancel()
      }
    }
  }

  public func watch<Query: GraphQLQuery>(
    query: Query,
    cachePolicy: CachePolicy = .default,
    callbackQueue: DispatchQueue = .main
  ) -> AsyncThrowingStream<GraphQLResult<Query.Data>, Error> {
    AsyncThrowingStream { continuation in
      let watch = watch(query: query, cachePolicy: cachePolicy, callbackQueue: callbackQueue) { result in
        switch result {
        case .success(let result):
          continuation.yield(result)
        case .failure(let error):
          continuation.finish(throwing: error)
        }
      }
      continuation.onTermination = { @Sendable _ in
        watch.cancel()
      }
    }
  }

  public func perform<Mutation: GraphQLMutation>(
    mutation: Mutation,
    publishResultToStore: Bool = true,
    queue: DispatchQueue = .main
  ) async throws -> GraphQLResult<Mutation.Data> {
    try await withCheckedThrowingContinuation { continuation in
      perform(mutation: mutation, publishResultToStore: publishResultToStore, queue: queue) { result in
        continuation.resume(with: result)
      }
    }
  }

  public func upload<Operation: GraphQLOperation>(
    operation: Operation,
    files: [GraphQLFile],
    queue: DispatchQueue = .main
  ) async throws -> GraphQLResult<Operation.Data> {
    try await withCheckedThrowingContinuation { continuation in
      upload(operation: operation, files: files, queue: queue) { result in
        continuation.resume(with: result)
      }
    }
  }
}

fileprivate extension GraphQLResult {
  func isFinalForCachePolicy(_ cachePolicy: CachePolicy) -> Bool {
    switch cachePolicy {
    case .returnCacheDataAndFetch:
      return source == .server
    default:
      return true
    }
  }
}
As per the last comment, it's cleaner to write isFinalForCachePolicy like this:
private extension GraphQLResult {
  func isFinalForCachePolicy(_ cachePolicy: CachePolicy) -> Bool {
    switch cachePolicy {
    case .returnCacheDataAndFetch:
      return source == .server
    default:
      return true
    }
  }
}
@andykent I'd like to point out that the need for AsyncThrowingStream is rooted in bad practice.
.returnCacheDataAndFetch causing a completion handler to be called more than once is fundamentally bad practice. Functions should have predictable single entry points and exit points. This is literally one of the reasons structured concurrency was developed.
If there's a logical need for multiple values to be emitted, then a Combine publisher should be returned, as a singular return value that "publishes updated values that replace the previous", or a binding should be passed to the query method as an unowned two-directional place to store results that come in sequence but are not themselves a sequence, since the last replaces the first.
You'll otherwise find it quite impossible to emit multiple values without using sequences with structured concurrency, and that's by design.
That being said, semantically speaking, querying for a result of a singular type should not return a sequence that must be iterated over to be useful. Things that aren't lists shouldn't have to be treated like lists.
.returnCacheDataAndFetch causing a completion handler to be called more than once is fundamentally bad practice.
100% agree with this statement. This is something I'm very much looking forward to correcting with the 2.0 release. For users that prefer completion-handler-based APIs, we will likely still expose this (and make sure it's a well documented behavior).
We'll be doing a lot of R&D to determine what we believe the best APIs for this will be, but my hunch is that the primary way of using ApolloClient will likely return some sort of Combine sequence. :)
Definitely agree here.
The .returnCacheDataAndFetch policy was a point of awkwardness when we built our Swift Concurrency wrappers.
We'll be doing a lot of R&D to determine what we believe the best APIs for this will be, but my hunch is that the primary way of using ApolloClient will likely return some sort of Combine sequence. :)
Talking it over with a colleague of mine, we actually determined this is probably the cause of some transient crashes that we otherwise couldn't diagnose. We were considering alternative interfaces, and unfortunately all of them involve a publisher or binding or some keypath wizardry. What gets me the most is that all of that complexity is simply to handle just one cache policy case, a case that returns twice instead of once.
We actually came to the conclusion that we should just not use .returnCacheDataAndFetch at all, and our async interfaces can return once or throw once, as desired/expected. If we very specifically want the behavior of cache-then-network, then manually implementing it is not only super simple with structured concurrency, but very clear and unambiguous.
self.myState = try await query(MyQuery(), cachePolicy: .returnCacheDataDontFetch)
self.myState = try await query(MyQuery(), cachePolicy: .fetchIgnoringCacheData)
Doing this, we get a visible update (assuming myState is @Published or @State), no need to delay an async call to defer the secondary update, and we're very clear at the call site that we're updating the value twice: once from cache and subsequently from network. Frankly, it seems like a worthwhile tradeoff compared to increasing complexity just for one very specific use case. (I'd rather not deal with publishers if async throws calls can just return the value or throw an error.)
This is all really great feedback that we will absolutely be taking into account to help inform our decisions on the redesign of these APIs for Apollo iOS 2.0. Thanks so much everyone for the great discussion and knowledge sharing!
We actually came to the conclusion that we should just not use .returnCacheDataAndFetch at all, and our async interfaces can return once or throw once, as desired/expected, and if we very specifically want the behavior of cache-then-network, then manually implementing it is not only super simple with structured concurrency, but very clear and unambiguous.
This is an interesting take, I never considered not using the .returnCacheDataAndFetch
option and just doing two separate async fetches.
I'm probably going to steal this idea from you, as dealing with async sequences adds far more boilerplate than just doing two separate fetches.
I'm probably going to steal this idea from you, as dealing with async sequences adds far more boilerplate than just doing two separate fetches.
That's really what bugged me the most... For the simplest of cases, when not using cache, we have to deal with an async sequence?!
From your sample code, however, the AsyncThrowingStream does make perfect sense for one use case... clearly subscriptions benefit from that design. Their intent is exactly what it's for, "yield continuous new results as they come in". :D
I'm going to politely disagree with some of the points here and give an alternative perspective. Hopefully the balance can help feed into the discussion.
Whilst I do agree that the behaviour of returnCacheDataAndFetch calling the callback multiple times is not a nice API design, I do think the approach in use here of returning an AsyncThrowingStream is quite a nice API solution when there is a stream of response values over time; this is exactly what AsyncStream is for, and you can see it used in Apple's own APIs. The reasons I think this are:
- In the case of returnCacheDataAndFetch it is very clear at the call site that you will be receiving a stream of values over time as data becomes available. Agreed it's clumsy for other query types though (see below).
- It is easy (and I would agree needed) to add higher level wrappers for when you only need a specific value. See the fetchWithOfflineFallback example above. We actually have multiple of these in our project, like fetchAttemptingCacheFirst - these always return a single async value, but building them from the stream API gives a lot of flexibility.
- In my experience most of the time in SwiftUI land you start off with a fetch query but very quickly have to migrate to a watch query to be able to keep view data fresh; the fact that both return an AsyncStream in this implementation makes that migration much easier. I would say at this point the vast majority of our Apollo/SwiftUI code is based around watch not fetch, and having watch return an AsyncStream feels really nice. We have a lot of SwiftUI code that looks like the below and has been working really nicely:
.task {
  let results = API.apollo.watch(
    query: getArticleQuery,
    cachePolicy: .returnCacheDataAndFetch
  )
  for await result in results {
    switch result {
    case let .success(data):
      if let article = data.data?.article {
        state = .loaded(article)
      } else {
        state = .error("No article data found.")
      }
    case let .failure(error):
      state = .error(error.localizedDescription)
    }
  }
}
I think basing any new API (that provides sequences of values over time) around Combine rather than AsyncStream would be a mistake at this point.
That said, I do 100% agree that a convenience API should be provided for the calls that can guarantee they will only ever return a single result. One nice way to do this might be to have an alternative form of fetch that takes a new Source enum, something like fetch(query, from: .remote) (equiv of fetchIgnoringCacheData) and fetch(query, from: .cache) (equiv of returnCacheDataDontFetch). I think this would fill the higher level API gap that is needed. Here's an (entirely untested) example wrapper implementation built on top of the above API extensions.
public extension ApolloClient {
  enum FetchSource {
    case remote
    case cache

    var cachePolicy: CachePolicy {
      switch self {
      case .remote: return .fetchIgnoringCacheCompletely
      case .cache: return .returnCacheDataDontFetch
      }
    }
  }

  func fetch<Query: GraphQLQuery>(
    query: Query,
    from source: FetchSource
  ) async throws -> GraphQLResult<Query.Data>? {
    let results = fetch(query: query, cachePolicy: source.cachePolicy)
    return try await results.first(where: { _ in true })
  }
}
With this in place if you wanted to you could then do:
myState = try await fetch(query, from: .cache)
myState = try await fetch(query, from: .remote)
I'm going to politely disagree with some of the points here and give an alternative perspective. Hopefully the balance can help feed into the discussion.
Whilst I do agree that the behaviour of returnCacheDataAndFetch calling the callback multiple times is not a nice API design I do think the approach in use here of returning an AsyncThrowingStream is quite a nice API solution when there is a stream of response values over time, this is exactly what AsyncStream is for and you can see it used in Apple's own APIs. The reasons I think this are:
- In the case of returnCacheDataAndFetch it is very clear at the call site that you will be receiving a stream of values over time as data becomes available. Agreed it's clumsy for other query types though (see below).
I have to disagree with calling it "not a nice API design", it's VERY bad practice to call completion handlers more than once, period. The fact that so many people do it mistakenly is again part of the drive for structured concurrency in the first place.
I also will disagree with you stating that's how Apple is using it. Apple is using it as a "stream of always new continuous data". This is perfect for subscription, or watch(), as it similarly "always emits new elements to an inherent sequence".
But for the purpose of returnCacheDataAndFetch, this does not have that same semantic meaning. We are not expected to have 2 elements at the end of the call; we are expected to have 1 state for the value, and for it to be updated after the network call. This, frankly, is exactly what Combine is very explicitly for, stating "I have a single state, that may update, and I want to notify subscribers when it changes".
- It is easy (and I would agree needed) to add higher level wrappers for when you only need a specific value. See the fetchWithOfflineFallback example above. We actually have multiple of these in our project, like fetchAttemptingCacheFirst - these always return a single async value but building them from the stream API gives a lot of flexibility.
I would agree here. The default APIs should not provide publishers for the cases that don't have result values that may update. fetchCacheAndNetwork() as its own call, as the only one returning a publisher rather than just the actual result value, would stand out to users as a distinctly differently functioning API.
- In my experience most of the time in SwiftUI land you start off with a fetch query but very quickly have to migrate to a watch query to be able to keep view data fresh, the fact that both return an AsyncStream in this implementation makes that migration much easier. I would say at this point the vast majority of our Apollo/SwiftUI code is based around watch not fetch and having watch return an AsyncStream feels really nice. We have a lot of SwiftUI code that looks like the below and has been working really nicely: [...]

I think basing any new API (that provides sequences of values over time) around Combine rather than AsyncStream would be a mistake at this point.
Again I have to disagree. AsyncStream is for streams of data. watch should be used for things that come in as streams of continuous, concatenating, new data. It doesn't really make sense, for instance, to use it for a discrete entity type lookup. If I'm calling self.movie = try await query(GetMovieQuery(id: "12345")), that reasonably should be returning a single entity, that is, a movie with the ID 12345. Again, we're not expected to have multiple elements at the end, whereas with AsyncStream, even from all of Apple's examples, that's the intent.
I personally err on the side of a structured concurrency equivalent over Combine for any use case, but I have to give it to Combine for this one particular edge case: it provides a facility to give what we're expecting with proper semantics. You don't have to iterate over a Combine publisher, just sink it, and when it updates, it updates, exactly what's desired. (The only argument I could see for not using Combine is if this were to be used on non-iOS/macOS platforms, but as this is apollo-ios, not apollo-swift, that shouldn't be an issue; Combine is effectively part of the standard toolkit on Apple platforms.)
That said I do 100% agree that a convenience API should be provided for the calls that can guarantee they will only ever return a single result. One nice way to do this might be to have an alternative form of fetch that takes a new Source enum, something like fetch(query, from: .remote) (equiv of fetchIgnoringCacheData) and fetch(query, from: .cache) (equiv of returnCacheDataDontFetch). I think this would fill the higher level API gap that is needed. [see the FetchSource wrapper example above]

With this in place if you wanted to you could then do:
myState = try await fetch(query, from: .cache)
myState = try await fetch(query, from: .remote)
This basically is the example I proposed above, and I agree CachePolicy should probably just be reduced down to those 2 options (maybe named .network to match Apollo, and maybe a 3rd .networkDontCache? /shrug). I'm honestly a bit shocked that I found this while doing a code review of some of my colleague's async helpers; he came here after writing unit tests and seeing .returnCacheDataAndFetch not work as expected without using a stream/publisher. I dug into it... I can't find any other docs that mention it, but I was just shocked to see a completion handler being called twice. We were using some Futures-based code in some places, and clearly this wasn't expected and was causing some unknown crashes for us. There's an implicit assumption that something named completionHandler is only ever going to be called once with a result value or error. Twice causes threading crashes! Imagine how many people are leaving DispatchGroups on the first call! (I know we were... and for now we're not using that cache policy.)
In all honesty, I would recommend that be prioritized before 2.0 even. At the very least, clearly documenting everywhere the current behavior that .returnCacheDataAndFetch has and noting it is not ideal, and probably also ensuring it's not the default anywhere.
I'm new to Apollo-iOS, and the multi-callback behavior got me to this thread, and I concur completely with @apocolipse's synopsis and reasoning above (for whatever little that matters, ha).
Especially with the dispatch group enter/leave semantics, which is a very common pattern. Having groups left more than expected is a huge issue, especially when it arises from multiple invocations of a completionHandler which is presumed to fire once, and only once, as is convention pretty much everywhere else on the planet (not being shady, just stating my experience with the pattern as I've seen it in graphics/video/multimedia solutions).
To clarify I have no relationship to the Apollo project beyond being a user of the library and trying to offer up some code in this issue to help those who might have stumbled here looking to integrate the current release into structured concurrency codebases. I did not author the existing API.
The above solution was a quick sketch over the existing API, not a well thought out future API. The interesting point I think is that the AsyncStream return value makes it very explicit that the current API can in some situations return multiple values over time. IMO whilst this might not be a nice API to use it does make the current behaviour clear and in that way it exposes the weirdness in the current API design.
I have to disagree with calling it "not a nice API design", it's VERY bad practice to call completion handlers more than once, period.
I get the sentiment, and I do agree: in the callback world this is unexpected and error-prone. I'm not a fan of the API, but I was trying to be gentle with my words ;) there are real people here, and presumably the authors/maintainers had their reasons for going that route at the time. I think we should be respectful of that.
I actually think we probably all agree with more than this thread makes it seem:
The main area it sounds like we disagree on is whether delivering watch results as an AsyncStream is a good or bad API. Coming from a reactive background it felt quite natural to me to have changing values over time modelled as a stream. This is also the approach that GRDB uses for observing DB changes, which I think is probably the closest I have found to the watch behaviour. I'm not sure you're going to change my mind on this one.
The last remaining piece is the returnCacheDataAndFetch behaviour/API. Honestly, here you have given me some things to think about, so thanks for that. Maybe it shouldn't exist at all, but maybe it's a common enough pattern that it should be handled. In my mind this is the bit that needs the most thought API-wise.
Hopefully this thread has given the project maintainers some valuable feedback on how a future API could look and the issues people are facing with the existing API, but for my own sanity I am going to step away from it now. ❤️
I get the sentiment, and I do agree: in the callback world this is unexpected and error-prone. I'm not a fan of the API, but I was trying to be gentle with my words ;) there are real people here, and presumably the authors/maintainers had their reasons for going that route at the time. I think we should be respectful of that.
Thank you for calling this out @andykent. All of us on the Apollo iOS team greatly appreciate discussions/input from the community and helping out in areas that aren't our immediate focus. This is what makes the open source community so great!
The networking API predates any of us currently working on Apollo iOS. I don't say that to disparage the prior work nor to pass the blame; it is what we have for now though. Our current team is aware of the limitations and that is why you'll see version 2.0 on our public roadmap with the goal to deliver a well designed, reliable, and modern networking API. That is no small task, so we ask for some patience from the small team that we are.
Oh word on all of that. To be clear, I meant no disrespect/shade to be thrown. Apollo is awesome, and it's enabled me to get a ton done I wouldn't have been able to on my own. So mad props, as the kids say. There's bound to be hiccups, and everyone is looking forward to improvements. None of this is easy!
Hi guys, bringing this up because I have been using your wrapper recently and getting the crash SWIFT TASK CONTINUATION MISUSE: actualFetch(query:cachePolicy:queue:) leaked its continuation! Has anyone faced the same issue? And how did you solve it? I tried using NSLock, but it seems the crash still remains.
Here is my wrapper
private func actualFetch<Query: GraphQLQuery>(query: Query, cachePolicy: CachePolicy, queue: DispatchQueue) async throws -> Query.Data {
    let lock = NSLock()
    return try await withCheckedThrowingContinuation({ continuation in
        var nillableContinuation: CheckedContinuation<Query.Data, Error>? = continuation
        apollo.fetch(query: query, cachePolicy: cachePolicy, queue: queue) { [weak self] result in
            lock.lock()
            defer {
                lock.unlock()
            }
            switch result {
            case .success(let data):
                let error = data.errors?.first
                let code = error?.extensions?["code"] as? String
                if let error = error {
                    nillableContinuation?.resume(throwing: error)
                } else if let data = data.data {
                    nillableContinuation?.resume(returning: data)
                } else {
                    let errorUn = NSError(domain: "Can't get data at this time", code: 403)
                    nillableContinuation?.resume(throwing: errorUn)
                }
                nillableContinuation = nil
            case .failure(let error):
                nillableContinuation?.resume(throwing: error)
                nillableContinuation = nil
            }
        }
    })
}
Are you using cache policy .returnCacheDataAndFetch? Continuations can only return once; .returnCacheDataAndFetch returns twice. You will have to use an async stream instead if you use that cache policy.
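As a rough illustration of that advice, here is a hedged sketch (not Apollo's API) of bridging a handler that may fire more than once into an AsyncThrowingStream. `MultiCallbackClient` is a hypothetical stand-in that simulates .returnCacheDataAndFetch by invoking its handler twice; the `isFinal` flag is also an assumption, standing in for Apollo's notion of a final result.

```swift
import Foundation

// Hypothetical stand-in for a callback-based client whose handler fires twice
// under a cache-and-network policy. Not the Apollo iOS API.
struct MultiCallbackClient {
    func fetch(handler: @escaping (Result<String, Error>, _ isFinal: Bool) -> Void) {
        handler(.success("cached"), false)   // first callback: data from the cache
        handler(.success("network"), true)   // second callback: data from the network
    }
}

func fetchStream(client: MultiCallbackClient) -> AsyncThrowingStream<String, Error> {
    AsyncThrowingStream { continuation in
        client.fetch { result, isFinal in
            switch result {
            case .success(let value):
                // Unlike a continuation, a stream may yield any number of times.
                continuation.yield(value)
                if isFinal { continuation.finish() }
            case .failure(let error):
                continuation.finish(throwing: error)
            }
        }
    }
}
```

Consumed with `for try await value in fetchStream(client: client)`, the loop sees the cached value first and then the network value, making the double delivery explicit rather than a crash.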
No, I avoid using .returnCacheDataAndFetch by replacing it with .returnCacheDataElseFetch when calling the actual fetch function.
Sorry, I spaced out when reading your initial error message. At a glance, I think the issue is that you are capturing the continuation; there is probably some check in place that makes sure you don't do that.
Also, you don't need to use NSLock.
If you MUST capture the continuation, you might want to try using withUncheckedThrowingContinuation instead, but be warned: when using unchecked continuations, you don't get a lot of the safety checks.
Try this instead:
private func actualFetch<Query: GraphQLQuery>(query: Query, cachePolicy: CachePolicy, queue: DispatchQueue) async throws -> Query.Data {
    return try await withCheckedThrowingContinuation({ continuation in
        apollo.fetch(query: query, cachePolicy: cachePolicy, queue: queue) { [weak self] result in
            switch result {
            case .success(let data):
                let error = data.errors?.first
                let code = error?.extensions?["code"] as? String
                if let error = error {
                    continuation.resume(throwing: error)
                } else if let data = data.data {
                    continuation.resume(returning: data)
                } else {
                    let errorUn = NSError(domain: "Can't get data at this time", code: 403)
                    continuation.resume(throwing: errorUn)
                }
            case .failure(let error):
                continuation.resume(throwing: error)
            }
        }
    })
}
This is actually my original approach, and it has the same issue.
What I'm suspecting is that my apollo has a check inside where it does not call the completion, which would cause the leak, because we must always resume the continuation at least once.
Here's the code that may cause the problem: if the request is cancelled, it does not call the completion.
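Guaranteeing "resumed at least once, never twice" can be sketched with a small one-shot wrapper. This is a hedged illustration under assumed names (`OneShotResume` is not an Apollo or standard-library type): the completion handler and a cancellation path can both call `resume`, and only the first call wins, so the continuation is neither leaked nor double-resumed.

```swift
import Foundation

// Hedged sketch: a guard that resumes a CheckedContinuation exactly once.
// Any call after the first is silently ignored. Name is illustrative.
final class OneShotResume<T>: @unchecked Sendable {
    private let lock = NSLock()
    private var continuation: CheckedContinuation<T, Error>?

    init(_ continuation: CheckedContinuation<T, Error>) {
        self.continuation = continuation
    }

    func resume(with result: Result<T, Error>) {
        lock.lock()
        let taken = continuation   // take ownership under the lock...
        continuation = nil
        lock.unlock()
        taken?.resume(with: result)   // ...so only the first caller resumes
    }
}
```

A fetch wrapper could resume through this from its completion handler, while a cancellation path (e.g. the `onCancel` closure of `withTaskCancellationHandler`) calls `resume(with: .failure(CancellationError()))`, so a cancelled request that never fires its completion no longer leaks the continuation.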
We actually came to the conclusion that we should just not use .returnCacheDataAndFetch at all.
I agree/came to a similar conclusion. They are fundamentally different things (reading from disk and from the network) and should be treated differently IMO. I would have zero issues if .returnCacheDataAndFetch was dropped as an option altogether, since it can be replicated easily for existing users (in a POLA-compliant way) with two calls.
calling the callback multiple times is not a nice API design
Calling a completion handler multiple times is asking for trouble; almost no iOS developer I've ever worked with is going to expect that to happen.
especially when it arises from multiple invocations of a completionHandler which is presumed to fire once, and only once, as is the convention pretty much everywhere else on the planet (not being shady, just stating my experience with the pattern as I've seen it in graphics/video/multimedia solutions).
I don't think this is overly-snarky. If a "completion" handler is called more than once then the first call was, by definition, not "complete".
@apocolipse @RamblinWreck77 Can you guys please give me some hints as to why you think calling the completion handler multiple times in this instance is so wrong? Firstly, the closure parameter is called resultHandler and not completionHandler. The way I reason about this is that the closure is a substitute for a delegate; the delegate method could be called many times, so why is the closure not supposed to be called many times? Any input is appreciated.
I think the issue (at least, as I understand it) is that depending on the cache policy parameter, the function behaves differently, with different result call counts or completions.
It's also not clear from the method signature that the result or completion handler may be called more than once.
After thinking about this a bit, I think the issue is that it breaks conventions. It isn't wrong per se, it's just opinionated in a way that differs from how the rest of the platform tooling (i.e. URLSession etc.) treats results/completion, so all of this is a surprise.
Secondly, this pattern can complicate some threading behaviours, as pointed out elsewhere. When drawing to the main queue from a result, legacy code might use a dispatch async or NSBlockOperation, which can leverage locking or semaphore logic to do the right thing.
That breaks idempotent result/completion logic in some cases, due to locks or semaphores being over-released just by changing a cache policy.
This sounds silly, but it might help legibility at the cost of verbosity (a trade I'm always willing to make, for the record):
Provide two callback locations, a cached result and a new network result.
This makes the semantics clear, they can be nil, it forces the caller to reason about the locking/semaphore concerns, and it also makes it very clear that some of the enumerators will result in multiple results being delivered.
It's not perfect, nor pretty, but it's clear, which I'll take over elegant and confusing any day of the week.
Also, sorry, I know you didn't ask me :D
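The "two callback locations" idea above could be sketched roughly as follows. Everything here is a hypothetical illustration under assumed names (`FetchPolicy`, `TwoCallbackClient`), not the Apollo iOS API: the split signature makes it explicit at the call site that a cached result and a network result are delivered separately, each at most once.

```swift
import Foundation

// Illustrative stand-ins, not the Apollo iOS API.
enum FetchPolicy {
    case cacheOnly, networkOnly, cacheAndNetwork
}

struct TwoCallbackClient {
    // Separate handlers for the cached and network results; each fires at most once.
    func fetch(
        policy: FetchPolicy,
        cachedResult: ((String?) -> Void)? = nil,
        networkResult: ((Result<String, Error>) -> Void)? = nil
    ) {
        if policy != .networkOnly {
            cachedResult?("cached")               // at most one cache delivery
        }
        if policy != .cacheOnly {
            networkResult?(.success("network"))   // at most one network delivery
        }
    }
}
```

A caller who only passes networkResult can never be surprised by a second invocation, and a caller who passes both has opted into the double delivery explicitly.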
Firstly the closure parameter is called resultHandler and not completionHandler
This may be true; however, you can't ignore that:
1) Default style/formatting is such that the name will not be visible. Ex:
.perform(arguments) { result in
}
Almost no one is going to see your anonymous "resultHandler", and the few that do would then have to realize that the name implies different behavior than the default, and wasn't just a naming preference by the devs.
the delegate method could be called many times
Well, that's a big problem too. I deal with this a lot in CoreBluetooth, and it is maddening: iOS will sometimes call the didConnect delegate method twice for the same peripheral (same UUID) and break the flow. That was a fun one to track down...
If you want to build something that can provide multiple results it needs to be structured in a way that is obvious to even a junior dev that that is what will happen.
I think the issue (at least, as I understand it) is that depending on the cache policy parameter, the function behaves differently, with different result call counts or completions.
Quoting @vade, consistency is key as well. Something that performs one way on all your (fresh) test devices and then suddenly behaves differently in the wild (say, when the cache is filled) is asking for trouble.
Can the Apollo team please post an official workaround in the docs? I would say this functionality is fundamental.
Hey all! Thanks everyone for the discussions in this thread! I've been wanting to use Swift concurrency in a project.
From what I've gathered on this thread, we must avoid using .returnCacheDataAndFetch when we wrap the service call.
Anyone having issues with wrapping the apollo service in a withCheckedThrowingContinuation?
I think the approach of @andykent, where in places that need the cache we make back-to-back calls, first to the cache only and then fetching from the server, sounds fine 🤔
You can't use withCheckedThrowingContinuation if you need an async function to return twice; continuations are meant to return once.
If you wanted to make a Swift concurrency API that returns twice, use an AsyncThrowingStream instead.
In our codebase we explicitly run the query twice. Once to fetch the cache, once to fetch from the server.
Something like this:
var response = await graphQLClient.fetch(query: query, cachePolicy: .returnCacheDataDontFetch)
// update ui with cached data
response = await graphQLClient.fetch(query: query, cachePolicy: .fetchIgnoringCacheData)
// update ui with data from server
Hey Brent! Thanks for the reply; and yes, that's what I read: that you can't use .returnCacheDataAndFetch because it calls the continuation twice and crashes the app.
Was wondering if using withCheckedThrowingContinuation with .fetchIgnoringCacheData or .returnCacheDataDontFetch was working fine 🤔
Feature request
The ApolloClient implementation could support Swift structured concurrency (async/await) instead of only allowing users to use completion handlers.
Motivation
This would align the Apollo iOS client with new Swift concurrency practices (support for throwing, @MainActor, etc.) and does not add much complexity to the API.
Proposed solution
I am using the extension below to implement fetchAsync and performAsync. Similar convenience methods could be added for other relevant methods. These extensions work for me, but they might not be 100% correct and there might be room for improvement. As a workaround, anyone can add these extensions to their own codebase to enable the async/await approach in their code.
Outstanding Questions
Not sure if this implementation covers all the behaviours of ApolloClient and if they are 100% correct. Also, naming could be better aligned with Apollo practices (maybe just fetch instead of fetchAsync).