PaulUithol / Backbone-relational

Get and set relations (one-to-one, one-to-many, many-to-one) for Backbone models
http://backbonerelational.org
MIT License
2.34k stars · 330 forks

getAsync() fails when retrieving too many models using set URL #580

Open champ opened 6 years ago

champ commented 6 years ago

Issue

When a model has a HasMany relation and the related models are fetched asynchronously via `getAsync()`, the generated set URL (if your collection and backend support one) can exceed the maximum length allowed by typical web servers (Apache defaults to roughly 8k).
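To make the failure mode concrete, here is a small standalone sketch (not library code) that builds a tastypie-style set URL from a list of ids; the base URL and id values are made up for illustration, but they show how the URL length grows linearly with the number of related models:

```javascript
// Sketch (not Backbone-Relational source): a tastypie-style "set" URL
// joins ids into a single path segment, so its length grows linearly
// with the number of ids and can blow past Apache's ~8k default limit.
function buildSetUrl(baseUrl, ids) {
  return baseUrl + 'set/' + ids.join(';') + '/';
}

// ~1500 six-digit ids, as a large HasMany relation might produce
const ids = [];
for (let i = 1; i <= 1500; i++) ids.push(100000 + i);

const url = buildSetUrl('/api/v1/book/', ids);
console.log(url.length); // well above 8192 for this many ids
```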

Solution

A simple approach to solve this issue would be to add a `chunkSize` option to `getAsync()` that splits the requested models across multiple set requests (here), similar to the existing fallback behavior of generating one request per model.
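A rough sketch of what that option could do, with all names (`chunk`, `buildSetUrls`, the URL shape) being illustrative rather than actual Backbone-Relational API:

```javascript
// Hypothetical sketch of the proposed `chunkSize` behavior: instead of
// one set URL covering every id, split the ids into groups of at most
// `chunkSize` and build one set URL per group.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

function buildSetUrls(baseUrl, ids, chunkSize) {
  return chunk(ids, chunkSize).map(function (group) {
    return baseUrl + 'set/' + group.join(';') + '/';
  });
}

// 5 ids with chunkSize 2 -> 3 requests instead of 1
const urls = buildSetUrls('/api/v1/book/', [1, 2, 3, 4, 5], 2);
console.log(urls);
// -> ['/api/v1/book/set/1;2/', '/api/v1/book/set/3;4/', '/api/v1/book/set/5/']
```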

bpatram commented 6 years ago

@champ I'm not sure if this is something that should be this library's responsibility. Also, I'm a bit confused as to what is being described here; can you go into more detail or provide an example use case that is affected by this problem? (you can fork this jsfiddle).

Are you talking about request body limitations or URL length limitations?

If this is related to URLs being too long, then you can define a custom `url` on your models and see if there is a better approach to how you write URLs. Unless I am misunderstanding your application setup, this is not something that can be solved by breaking it into multiple requests (as your solution suggests).

champ commented 6 years ago

@bpatram Thanks for the quick reply.

The issue is that the URLs generated behind the scenes by `getAsync()` can become too long, causing an error at the proxy or web server layer.

As you probably know, `getAsync()` has two strategies to fetch models:

• Single request - if the collection can generate a URL representing the models, determined by checking whether the collection generates a different URL when passed models than when passed none (the base URL)
• One request per model - fallback strategy
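The strategy choice described above can be sketched roughly as follows; this is a simplified illustration, not the actual Backbone-Relational implementation, and the tastypie-style collection is a made-up stand-in:

```javascript
// Simplified sketch of the strategy selection (not library source):
// use a single request if the collection produces a different URL
// when given a subset of models than its base URL.
function chooseStrategy(collection, models) {
  const baseUrl = collection.url();
  const setUrl = collection.url(models);
  return setUrl !== baseUrl ? 'single-request' : 'request-per-model';
}

// A tastypie-style collection whose url() accepts an optional model list
const books = {
  url: function (models) {
    if (!models || !models.length) return '/api/v1/book/';
    const ids = models.map(function (m) { return m.id; });
    return '/api/v1/book/set/' + ids.join(';') + '/';
  }
};

const strategy = chooseStrategy(books, [{ id: 1 }, { id: 2 }]);
console.log(strategy); // 'single-request'
```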

The reported issue concerns the first strategy, since it uses a single URL whose length depends on the number of models to fetch. This is how Backbone-tastypie works, for example (linked in the docs). In typical cases this is not a problem, but with hundreds of models to fetch, the URL can exceed the limits enforced by web servers, proxies, or even browsers.

The proposed solution is to make `getAsync()` split the models to fetch into chunks and generate one request per chunk instead of a single request.

Since this is a mostly backend-related issue, it is difficult to demonstrate in a jsfiddle without a live API backend.

bpatram commented 6 years ago

@champ I understand what you are attempting to do here and I think this does in fact make sense as a feature in this library.

In your proposal does chunkSize refer to URL string length or number of models attempting to be fetched?

Here is a simple jsfiddle which I am assuming mimics your setup at a smaller scale: http://jsfiddle.net/xnq7xbyz/1/

champ commented 6 years ago

@bpatram The jsfiddle mimics our setup - we use Backbone-tastypie, which generates the set URLs in a similar way.

The idea with `chunkSize` was to indicate the maximum number of models to fetch in each request; i.e. if we have 150 models to fetch and call `getAsync()` with the option `chunkSize: 100`, it would generate 2 requests: the first with a URL built from the first 100 models, and a second with a URL for the remaining 50.
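As a worked example of those semantics, 150 ids with `chunkSize: 100` would yield exactly two set URLs; the helper name and URL shape below are illustrative only:

```javascript
// Worked example of the proposed semantics: 150 ids, chunkSize 100
// -> two requests (one with 100 ids, one with the remaining 50).
function setUrlsInChunks(baseUrl, ids, chunkSize) {
  const urls = [];
  for (let i = 0; i < ids.length; i += chunkSize) {
    urls.push(baseUrl + 'set/' + ids.slice(i, i + chunkSize).join(';') + '/');
  }
  return urls;
}

const ids = Array.from({ length: 150 }, function (_, i) { return i + 1; });
const urls = setUrlsInChunks('/api/v1/book/', ids, 100);
console.log(urls.length); // 2
```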

Although `chunkSize` is quite a common term for this kind of logic in other libraries, maybe `maxModelsPerRequest` would be a more explicit name for such an option. Any thoughts?