hermanfma opened 6 years ago
Hello,
Do you have more details on your install? There is obviously another problem, as it is so slow. How did you install iceScrum? How do you start it? Is it the same server as the R6?
Regards,
Hi,
The loading speed is very slow because, unfortunately, we currently have a very bad internet connection, even by Brazilian standards (usually 150 KB/s upload; on the day I recorded the video it was much worse).
Our installation uses the official Docker container on Arch Linux. iceScrum is started by a systemd service that launches its Docker instance, roughly as sketched below.
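For reference, the service looks roughly like this (a sketch: the unit name, data path, and port mapping are our local choices; icescrum/icescrum is, as far as we know, the official image on Docker Hub):

```
[Unit]
Description=iceScrum (Docker)
Requires=docker.service
After=docker.service

[Service]
Restart=always
# Remove any stale container before starting (the "-" ignores failures).
ExecStartPre=-/usr/bin/docker rm -f icescrum
# Data path and port mapping below are our local choices, shown for illustration.
ExecStart=/usr/bin/docker run --name icescrum \
    -v /srv/icescrum:/root \
    -p 8080:8080 \
    icescrum/icescrum
ExecStop=/usr/bin/docker stop icescrum

[Install]
WantedBy=multi-user.target
```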
About the server: yes, it's the same server we used before upgrading iceScrum to v7, with the same bad network connection, and we never had this problem with v6.
In fact, we have other services running on this server, such as a MediaWiki (also running in a Docker instance) using the same setup. On that same day, I was able to load a 12 MB wiki page without errors (it took around 3 minutes =/).
After a lot of testing, we noticed that the problem can also happen on very fast connections (we were testing on the same LAN as the server), although it's really hard to reproduce under those conditions.
It seems to me that the problem is somewhere in the HTTP server: either it applies a timeout or some other rule that stops sending the assets to clients, or it can't handle interruptions / broken pipes properly.
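To illustrate, this is roughly how we try to reproduce it from a remote machine (a sketch; the host and asset path are placeholders, not our real URL):

```
# Throttle the download to ~50 KB/s to mimic our connection and see
# whether the server cuts the transfer partway through.
curl --limit-rate 50k -o /dev/null \
     -w 'http=%{http_code} bytes=%{size_download} time=%{time_total}s\n' \
     https://our-server.example/icescrum/assets/application.js
```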
In the meantime, we are trying (very hard) to get a better internet connection (600 KB/s upload, hopefully stable), and maybe that will be enough to solve our issue.
Thanks.
Hi,
Thanks for your detailed answer. We are very sorry that you are experiencing this issue.
We recently reduced the size of the assets through compression in order to help users who experience such bandwidth limitations. Static asset compression came in iceScrum 7.25, so if you use an earlier version, upgrading could help (https://www.icescrum.com/documentation/upgrade-guide/).
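For example, you can check whether your server actually serves compressed assets (a sketch; the URL is a placeholder for your instance and the asset name will differ):

```
# Request the main JS bundle with gzip accepted and dump only the
# response headers; Content-Encoding: gzip means compression is active.
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' \
     https://your-server.example/icescrum/assets/application.js \
     | grep -iE 'content-(encoding|length)'
```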
I don't have statistics on the asset size differences between R6 and v7, but the v7 assets are very likely significantly larger, which would explain the worsened experience. The reason is that v7 adopts a modern software architecture that puts much more logic on the front end, which requires bigger libraries and more code.
As you noted, assets are cached. As far as I know, the cache should be kept in your browser as long as you use the same browser, user account, and iceScrum version. However, you apparently have trouble even filling the cache in the first place because of timeouts. How long does it take for the timeout to occur?
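If you get the chance, measuring this from the command line would help us (a sketch; again, the URL is a placeholder for your own instance):

```
# 1) Check how long browsers are told to keep the asset:
curl -s -o /dev/null -D - \
     https://your-server.example/icescrum/assets/application.js \
     | grep -iE 'cache-control|etag|expires'

# 2) Time a full download to see when and where the transfer is cut:
curl -o /dev/null \
     -w 'bytes=%{size_download} time=%{time_total}s\n' \
     https://your-server.example/icescrum/assets/application.js
```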
Since you host your own server, internet bandwidth could be taken out of the equation if most of your users are on the same local network: you could install the iceScrum server on the local network and be limited only by local network bandwidth. The downside would be that accessing iceScrum from the outside would then suffer the limits of your internet connection's upload bandwidth...
In any case, we sincerely hope that you will manage to get a better internet connection.
Hi.
I'm reopening (by creating a new issue) issue #34 (Icescrum loading failures on remote access), which was closed some days ago.
Sorry for not responding, but we were busy the entire month with management tasks. In the meantime, we updated iceScrum to v7.26 to see if it would solve the issue, with no success.
We uploaded a video showing remote access attempts to our page, for reference: https://www.youtube.com/watch?v=Hxhg3oDYKwI
At the beginning of the video, a force reload that clears the cache is performed, followed by normal reloads. The application file (application-xxxx.js) and other files fail to download, and the errors are reported in the browser console.
Normally, once the page opens after several attempts and the files are cached, it works without this problem. But after a while the cache probably gets deleted and we have to go through this process of reloading the page again, involving several more reload attempts.
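To quantify the failures, we run something like the following from a remote machine (a sketch; the URL is a placeholder for our real address, which is redacted below):

```
# Try to fetch the main bundle 10 times and log how far each attempt got;
# failed attempts show up as short byte counts or non-200 status codes.
for i in $(seq 1 10); do
    curl -s -o /dev/null \
         -w "attempt=$i http=%{http_code} bytes=%{size_download} time=%{time_total}s\n" \
         https://our-server.example/icescrum/assets/application.js
done
```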
Here is the link to our iceScrum page, if you need it: REDACTED
Thanks.