See also https://github.com/documentcloud/backbone/pull/1325 and https://github.com/amccloud/backbone-safesync. Several people had the same problem, apparently ;)
The safesync adapter only allows aborting the latest request; it's a nice hardcoded solution. The approach we took is live in production and has been quite stable so far. I would gladly work on a sync method for Chaplin that uses this approach, since models and collections should already be adding SyncMachine to handle more complex state changes and state transitions.
Could this perhaps be part of SyncMachine itself? Most syncing is async I/O, which can be modeled using a Deferred, and every jqXHR object is a Deferred/promise.
Kind of related to #91
+1 for a canonical solution. Would love to see your approach.
:+1: this would be a great addition!
We solved this problem in our app the following way:
edit (paulmillr): changed this a bit
    _pendingRequests: null

    # Remove a finished request from the pending list
    # (despite the name, this does not abort it).
    abortPendingRequest: (xhr) ->
      @_pendingRequests.splice (_(@_pendingRequests).indexOf xhr), 1

    # Abort everything that is still in flight.
    abortPendingRequests: ->
      return unless @_pendingRequests
      xhr?.abort() for xhr in @_pendingRequests
      @_pendingRequests = null

    # Drop-in replacement for $.ajax that tracks the request and
    # skips the callbacks once the object has been disposed.
    ajax: (options) ->
      xhr = null
      wrap = (option) =>
        previous = options[option]
        return unless previous?
        options[option] = (args...) =>
          @abortPendingRequest xhr
          unless @disposed
            previous args...
          return
      wrap 'success'
      wrap 'error'
      xhr = $.ajax options
      (@_pendingRequests ?= []).push xhr
      xhr

    dispose: ->
      return if @disposed
      # Abort before calling super, which freezes the object.
      @abortPendingRequests()
      super
This can certainly be solved in a more general manner; I just thought I would post it here for inspiration :-)
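For illustration, a minimal usage sketch, assuming the methods above are defined on (or mixed into) a collection; the class name, URL, and fetch method here are hypothetical:

    # Hypothetical usage of the helper above: any request issued through
    # @ajax (instead of $.ajax directly) is tracked and aborted on dispose.
    class Likes extends Chaplin.Collection
      fetchLikes: ->
        @ajax
          url: '/likes'
          success: (data) => @reset data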
Having read up on this, it sounds like SyncMachine is going to be included in Models and Collections by default again, after being removed a little while ago. Or?
Would be very thankful if anyone could provide me with examples of how to take advantage of initDeferred on models and collections!
+1
Let's get this done.
The first order of business is to store the requests from a sync call and ensure we call abort on disposal.
The second part has a few options: subsequent requests. How should we deal with these?
Thoughts? I'm leaning towards exposing the second option and by default "Aborting the previous".
Agree with aborting previous calls on disposal.
abort yep
I'm leaning towards exposing the second option and by default "Aborting the previous".
Having read Jeremy's last response over on BB1325, I am inclined to agree that this is the correct way to proceed, @mehcode, et al.
Chaplin is all about sensible defaults; thus, subsequent requests should abort pending ones by default. However, Chaplin must go further by allowing modification of individual jqXHRs as needed.
(Edited: corrected accidental word transposition that changed last sentence's intent, i.e. it is incumbent on Chaplin to provide extensibility.)
There are all sorts of different needs here.
Example: a user saves a form and the model save request is slow; the user navigates away in the app, and the view and model are disposed because another controller is invoked. This request should not be aborted.
Example: we have a search form where keypress events perform a "live search", fetching the model with different params each time. Every new fetch here should abort the existing one. Disposal should also abort.
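For the second example, here is a rough sketch of the abort-the-previous-fetch pattern, assuming the view keeps the jqXHR that Backbone's fetch returns (the class and property names are made up):

    # Hypothetical live-search view: each keystroke aborts the in-flight
    # fetch so only the latest query's response is applied.
    class SearchView extends Chaplin.View
      events:
        'keyup input[name=q]': 'search'

      search: (event) ->
        @currentFetch?.abort()
        @currentFetch = @collection.fetch
          data: {q: event.target.value}

      dispose: ->
        return if @disposed
        # Disposal should also abort.
        @currentFetch?.abort()
        super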
@andriijas I don't understand how example one is relevant to Chaplin. If the user navigates away, that's their problem. It's like navigating away from a login form before it finishes logging a user in - the expectation of the user would be that they are not logged in because they didn't let the form finish.
It seems very complicated to develop an app where the user can perform an action that instantiates a new controller, but a model from the previous controller still persists because it didn't finish.
Also, what happens if the model from example one fails? What if it is tied to a callback or event that fires on failure, and those handlers touch the DOM, but the new controller changed the view and those pieces of the DOM no longer exist?
If you do want to give the user the ability to save but move on, that should be handled within the specific $.fail() or $.error() itself. Otherwise, how does the app know which requests to abort and which ones not to?
Perhaps we introduce strategies.
    class Model extends Chaplin.Model
      # Bad name, but bear with me (ideas on a better one are welcome).
      strategy: null    # Do nothing
      strategy: 'abort' # Abort in all circumstances (the default)
      strategy: 'stack' # Allow N requests, but keep a stack of them and abort them on disposal

We could go even further and introduce hooks, etc. But I believe a few strategies plus null (where the user defines whatever they want) should be enough to cover all cases.
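To make the idea concrete, here is a rough sketch of how a strategy-aware sync might dispatch; Chaplin does not ship this, and the internal property names are made up:

    # Sketch of a strategy-aware sync: 'abort' keeps only the latest
    # request, 'stack' tracks all of them so dispose can abort the lot,
    # and null falls through to plain Backbone behaviour.
    class Model extends Chaplin.Model
      strategy: 'abort'

      sync: (method, model, options) ->
        switch @strategy
          when 'abort'
            @_currentRequest?.abort()
            @_currentRequest = super
          when 'stack'
            xhr = super
            (@_pendingRequests ?= []).push xhr
            xhr
          else
            super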
@opyh Why is this issue closed?
Sorry @chrisabrams @paulmillr, I've just clicked the wrong button. My bad!
I would +1 @mehcode's idea, as it is extensible and allows new strategies without breaking existing ones.
Aborting requests on disposal per default is a sensible idea though.
@chrisabrams: Having models that stay alive while a UI screen / view controller is changed may be complicated, but is vital for many types of applications. I think we would all throw our mail clients away if they stopped sending a mail when beginning to read another one. :)
If you do want to give the user the ability to save but move on, that should be handled within the specific $.fail() or $.error() itself. Otherwise how does the app know which requests to abort and which ones not to?
The more I think about it... the more I'm wondering if including a smart strategy as well would be a good idea. This smart strategy would be something like this:
    sync:
      read:   abort previous request
      create: stack requests
      update: queue requests
      delete: queue requests

    dispose:
      read: abort
      else: keep in memory... do something magical about removing the event handlers from the request
The only way to prevent interesting problems with event handlers is to wrap the promise that $.ajax returns and wrap all callbacks in something that we can no-op on dispose (at least, this is the only way I can think of).
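As a sketch of that wrapping idea (the safePromise helper is hypothetical, not part of Chaplin or jQuery):

    # Hypothetical helper: wrap a jqXHR so that callbacks registered through
    # the wrapper become no-ops once the owning object has been disposed.
    safePromise = (owner, xhr) ->
      # Only run the callback while the owner is still alive.
      guard = (callback) ->
        (args...) ->
          callback args... unless owner.disposed

      wrapper =
        done: (callback) ->
          xhr.done guard(callback)
          wrapper
        fail: (callback) ->
          xhr.fail guard(callback)
          wrapper
        always: (callback) ->
          xhr.always guard(callback)
          wrapper
        abort: ->
          xhr.abort()

      wrapper

    # Usage sketch:
    # request = safePromise this, $.ajax(url: '/items')
    # request.done (data) => @reset data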
I'm just thinking that the case that @andriijas mentions might come up more frequently than we think. It'd be nice to have a way to handle it.
:+1: smart ftw
Smart indeed! And if you find a way to prevent events after disposal, I won't need to do return if @disposed in my callbacks anymore :)
The smart strategy definitely looks like a winner! Further, I believe we're all in agreement that strategies are the way to go here: provide a couple out-of-the-box Chaplin ones and allow the developer to create custom ones as their application requirements dictate.
Also, @andriijas, et al., regarding the use case where the user is attempting to navigate away from something that is in a "dirty" state (meaning its changes haven't yet been persisted): I agree that Chaplin should provide a hook for this. Perhaps it does already and I'm just missing it. From my Google Web Toolkit days (shudders), I used this approach.
@opyh good point.
@mehcode @lukateake :+1:
In production, we found it important to save the jQuery Ajax Deferreds and reject them once the model/collection is disposed. Otherwise some handlers will be fired on disposed objects, which may throw exceptions and which foils our whole event handler cleanup and memory management.
Chaplin doesn’t provide a solution for that yet. @rendez is working on a solution for moviepilot.com which stores the Ajax Deferreds and rejects them on disposal.
[Edit, from other issue:]
Deferreds and SyncMachines might integrate better with fetch/save/sync. If you're using SyncMachine, you have to call @beginSync() and @finishSync() manually. Same goes for @resolve() when using Deferreds. We need to investigate whether it makes sense to do that automatically by providing a Model#sync which adds a success handler to the options. The handler would call the original handler and then @finishSync() or @resolve().
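A sketch of what that automatic wiring might look like, assuming SyncMachine is mixed into the model and initDeferred has been called (so @resolve exists); the exact hook points are of course still open:

    # Sketch: a Model#sync that drives SyncMachine/Deferred state
    # automatically instead of requiring manual beginSync/finishSync calls.
    class Model extends Chaplin.Model
      _.extend @prototype, Chaplin.SyncMachine

      sync: (method, model, options = {}) ->
        @beginSync()

        originalSuccess = options.success
        options.success = (args...) =>
          # Call the original handler first, then update the sync state.
          originalSuccess? args...
          @finishSync()
          @resolve?()   # only present if initDeferred was called

        originalError = options.error
        options.error = (args...) =>
          originalError? args...
          @abortSync()

        super method, model, options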