In general, these changes reduce the problems seen when trying to load large files or jobs. In particular, I've found this to be an issue on the latest Chrome running on Windows 10, whereas the latest Firefox handles it fine. Essentially, Chrome aborts the /load POST call before it ever reaches the backend if the amount of data being sent is too large. Firefox doesn't seem to have this problem at all; Edge/IE does, though at a slightly higher size threshold than Chrome. I've attacked this problem in a few ways:
Getting rid of whitespace: The default JSON generator for DBA jobs includes newlines and tabs to make the output more human-readable. Outside of .dba file export, that whitespace is unnecessary. Simply removing it decreased the data size by over 60% in many cases.
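For illustration, here's a minimal sketch of the difference, assuming the job JSON is produced with Python's json module (the actual DBA job generator may differ):

```python
import json

# Stand-in for a DBA job structure; real jobs are much larger.
job = {"name": "example job", "passes": [{"power": 80, "speed": 1200}] * 3}

pretty = json.dumps(job, indent=4)                 # human-readable, kept for .dba export
compact = json.dumps(job, separators=(",", ":"))   # no newlines/tabs, used when sending to /load

print(len(pretty), len(compact))  # the compact form is substantially smaller
```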
Data send method: Originally, all data was sent in a POST call as a URL-encoded form body. Switching to a file upload (multipart) POST makes Chrome much happier with larger data sets, since it enforces a maximum limit on total URL-encoded length.
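As a rough illustration of the two send methods using Python's requests library (the real client does this from JavaScript, and the URL and field names here are hypothetical):

```python
import requests

payload = '{"name":"example job"}'  # compact job JSON from the previous step

# Old route: the job JSON travels in a URL-encoded form body.
requests.post("http://localhost:8000/load", data={"job": payload})

# New route: the same JSON is attached as a multipart file upload,
# which browsers tolerate much better for large bodies.
requests.post("http://localhost:8000/load",
              files={"job": ("job.json", payload, "application/json")})
```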
Compression: While bottle doesn't directly support gzip compression, it was easy enough to add. This is the only additional JavaScript import I introduced (pako.deflate), as it allows simple client-side gzip compression of the mostly text data. On average, I found this reduced the total data size to only about 15% of the original, the exception being jobs with embedded images, since those are generally already compressed and can't be shrunk further. Server-side, the backend automatically detects whether the data was sent compressed, loads the uploaded file resource, decompresses it (Python's standard library includes gzip), and continues loading the job as it always has.
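The server-side detection can be as simple as checking for the gzip magic bytes before falling back to plain text; a minimal sketch of that idea (not the exact code in this change):

```python
import gzip

GZIP_MAGIC = b"\x1f\x8b"

def maybe_decompress(raw: bytes) -> str:
    """Return the job JSON as text, decompressing first if the
    client sent it gzip-compressed."""
    if raw[:2] == GZIP_MAGIC:
        raw = gzip.decompress(raw)
    return raw.decode("utf-8")
```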
Note: These updates maintain backwards compatibility with the original API. I've added a new user config option, enable_gzip, which, if set to false, reverts the application to the old behavior. The file upload method and gzip compression are simply a secondary route by which the data can reach the backend, but when enabled they should allow roughly 10x larger files to be loaded. Previously, some of the library jobs couldn't be loaded in Chrome at all; this update fixes that.
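Putting it together, here's a sketch of how a /load handler under bottle could accept both routes (the field name "job" and the exact structure are assumptions for illustration, not necessarily what this change uses):

```python
import gzip
from bottle import post, request

@post("/load")
def load_job():
    upload = request.files.get("job")           # new route: multipart file upload
    if upload is not None:
        raw = upload.file.read()
        if raw[:2] == b"\x1f\x8b":              # gzip-compressed by the client
            raw = gzip.decompress(raw)
        job_json = raw.decode("utf-8")
    else:
        job_json = request.forms.get("job")     # old route: URL-encoded form field
    # ...continue loading the job exactly as before...
    return "ok"
```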