Currently we push all resources to Transifex at once, which causes issues when there are too many resources or when the articles themselves contain too much content.
This happens because browsers only allow a limited number of concurrent HTTP connections (https://developers.google.com/web/tools/chrome-devtools/network-performance/reference#waterfall), queuing the remaining requests until one of those slots becomes available again. If the in-flight calls take too long to return a response (e.g. because parsing is slow or there are simply too many of them), the queued calls time out.
To solve this, we now push resources one at a time, uploading the next one only after the previous one has completed. This makes uploading articles slower, but it ensures that none of them time out because too many requests are being held back by the browser itself.
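A minimal sketch of the idea, assuming hypothetical `Resource` and `pushResource` names (the actual types and upload call in the codebase differ):

```ts
interface Resource {
  slug: string;
  content: string;
}

// Placeholder for the real upload call to the Transifex API.
async function pushResource(resource: Resource): Promise<void> {
  // e.g. POST the resource content to the Transifex endpoint for `resource.slug`
}

// Before: all uploads fired at once, competing for the browser's
// limited pool of concurrent HTTP connections.
async function pushAllAtOnce(resources: Resource[]): Promise<void> {
  await Promise.all(resources.map((resource) => pushResource(resource)));
}

// After: uploads run sequentially; the next request only starts once
// the previous one has resolved, so no request sits queued long enough
// to hit the browser's timeout.
async function pushOneAtATime(resources: Resource[]): Promise<void> {
  for (const resource of resources) {
    await pushResource(resource);
  }
}
```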