shaman-apprentice opened this issue 1 year ago
We need the correct file size to handle the visualization of big maps and to avoid crashes of CC due to performance issues. Hence, it should be fixed.
In more detail: we try to scale down the resolution of the rendered city map according to the map size, to prevent rendered maps with a resolution of, for instance, 50,000 x 50,000 pixels.
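For illustration, a minimal sketch of such a cap; the maximum value and the function name below are assumptions, not the actual CodeCharta code:

```typescript
// Hypothetical sketch: cap the rendered resolution of the city map.
// MAX_RESOLUTION and the function name are illustrative assumptions.
const MAX_RESOLUTION = 8192

function getResolutionScaleFactor(mapWidthPx: number, mapHeightPx: number): number {
    const largestDimension = Math.max(mapWidthPx, mapHeightPx)
    // Scale down only when the map would exceed the cap; never scale up.
    return largestDimension > MAX_RESOLUTION ? MAX_RESOLUTION / largestDimension : 1
}
```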
@ce-bo we probably do not need the file size for that, as it does not actually represent the rendered size of the map: long metric names and lots of metrics with long values would not be an issue, while fewer metrics with short names might be an issue if the map is big, even with an identical file size. The implementation of @RomanenkoVladimir takes an absolute value for the calculated size and makes sure the map does not become any bigger; that should suffice in this case. Is the file size maybe also used for the hash value? We could in fact add that to reduce computation time (no need to calculate a hash if the file sizes are not identical).
I was not aware that this will change with the implementation of @RomanenkoVladimir, nice! So we do not need it anymore. Your suggestion to check the file size before calculating the map hash is fine, though. @RomanenkoVladimir, did you already remove the file size calculation in #3077?
I am not sure we can remove that; my issue only addresses the TreeMap, but what about StreetView maps?
AFAIK, we do not use the scaling for StreetMaps.
But we call the mapResolutionScaling in the StreetView algorithm, and flatly removing it leads to crashes in Firefox.
I think we need to investigate this once #3077 is merged
Seems like this is unblocked now? 🎉
Indeed. I would like to check if we can just remove it, or fix it so it can be used as a pre-condition for calculating the md5 hashes (map checksum). I increased the priority accordingly.
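A minimal sketch of that pre-condition, assuming the npm md5 package and hypothetical field and function names; the actual checksum handling in CodeCharta may differ:

```typescript
import md5 from "md5" // assumption: the npm "md5" package, not necessarily what CodeCharta uses

interface LoadedMap {
    fileSize: number
    checksum: string
}

// Only compute the comparatively expensive md5 checksum when the
// file sizes already match; different sizes imply different content.
function hasMapChanged(previous: LoadedMap, newContent: string): boolean {
    if (newContent.length !== previous.fileSize) {
        return true
    }
    return md5(newContent) !== previous.checksum
}
```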
Bug
Expected Behavior
Given a gzipped file in the URL, when loading the page, then the calculated file size is correct.
Given a cc.json file in the URL, when loading the page, then the calculated file size is correct.
Actual Behavior
The calculated file size is always 13 for gzipped files and 15 for non-gzipped files.
Steps to Reproduce the Problem
npm run dev
Technical analysis
The calculated file size for gzipped files comes from `response.body.toString().length`. `response.body` is a Blob in that case, therefore `response.body.toString()` is always the string `"[object Blob]"` and its length is 13.
The calculated file size for not gzipped files comes from `response.body.toString().length`. `response.body` is an object in that case, therefore `response.body.toString()` is always the string `"[object Object]"` and its length is 15.
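The two lengths can be verified directly; below is also a minimal sketch of how the real payload size could be measured instead. The helper is hypothetical, not the CodeCharta implementation:

```typescript
// Why the observed values are 13 and 15: toString() on a Blob or a plain
// object only yields the type tag, never the content.
console.log("[object Blob]".length)   // 13
console.log("[object Object]".length) // 15

// Hypothetical helper: measure the real size of the response body.
function getFileSize(body: Blob | object): number {
    if (body instanceof Blob) {
        // A Blob already knows its byte size (the gzipped case).
        return body.size
    }
    // For an already parsed cc.json object, measure the serialized length.
    return JSON.stringify(body).length
}
```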
Note that the previous urlExtractor.spec.ts mocked the HTTP GET request incorrectly (as below), which is why some tests had a different file size. In reality, the initial response is always a Blob or an object. Also note that this parsing is only called for the initial map loading; manually uploading a map does not go through the urlExtractor.
Open question
Do we need the calculation at all, when it currently yields a "random small hardcoded number" and no one has noticed?
What is actually the correct calculation? Should it be the length of the (unzipped) text content of the file? It seems to be used somehow for the map resolution scaling (see getMapResolutionScaleFactor within codeMapHelper.ts).
Specifications