CesiumGS / cesium

An open-source JavaScript library for world-class 3D globes and maps :earth_americas:
https://cesium.com/cesiumjs/
Apache License 2.0

Force tileset to never exceed maximum memory usage #6226

Closed lilleyse closed 1 year ago

lilleyse commented 6 years ago

For a Cesium3DTileset, the maximumMemoryUsage property controls the size of the tileset's cache but does not actually limit how many tiles are loaded. As the doc states:

> If tiles sized more than maximumMemoryUsage are needed to meet the desired screen space error, determined by Cesium3DTileset#maximumScreenSpaceError, for the current view, then the memory usage of the tiles loaded will exceed maximumMemoryUsage. For example, if the maximum is 256 MB, but 300 MB of tiles are needed to meet the screen space error, then 300 MB of tiles may be loaded. When these tiles go out of view, they will be unloaded.

The name is a bit misleading; the property should really be called cacheMemoryUsage.
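
For reference, a minimal sketch of how the property is set with the constructor-style API from the CesiumJS versions this thread refers to (the URL and values are placeholders, and viewer is assumed to be an existing Cesium.Viewer):

```js
// Sketch only: maximumMemoryUsage sizes the tile cache (in MB); it is not a
// hard cap, so tiles needed for the current view can still push past it.
const tileset = viewer.scene.primitives.add(
  new Cesium.Cesium3DTileset({
    url: "http://example.com/tileset/tileset.json", // placeholder URL
    maximumScreenSpaceError: 16,
    maximumMemoryUsage: 256,
  })
);
```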

Really, maximumMemoryUsage should force the tileset to never consume more than that amount of memory. This could be done by sorting all tiles by screen space error and only loading those with the greatest error until the memory limit is hit. We could also prioritize requests based on SSE so we don't end up requesting tiles that won't fit. Unfortunately we don't know a tile's size until it is downloaded, but we can save that value for later use if the tile is unloaded.
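
A rough sketch of what that selection policy could look like (the function and field names here are illustrative, not existing CesiumJS API):

```js
// Illustrative sketch, not CesiumJS code: pick visible tiles in order of
// decreasing screen space error and stop adding tiles once the memory budget
// is reached. Sizes are only known after download, so previously seen sizes
// are remembered in knownSizes and unseen tiles use a placeholder estimate.
const AVERAGE_TILE_BYTES = 512 * 1024; // placeholder guess for unseen tiles

function selectTilesWithinBudget(visibleTiles, maximumBytes, knownSizes) {
  const sorted = [...visibleTiles].sort(
    (a, b) => b.screenSpaceError - a.screenSpaceError
  );
  const selected = [];
  let usedBytes = 0;
  for (const tile of sorted) {
    const estimatedBytes = knownSizes.get(tile) ?? AVERAGE_TILE_BYTES;
    if (usedBytes + estimatedBytes > maximumBytes) {
      continue; // would exceed the budget; skip instead of requesting
    }
    usedBytes += estimatedBytes;
    selected.push(tile);
  }
  return selected;
}
```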

We'll also have to deal with the issue of request thrashing when the maximum memory usage is nearly reached. It may help to reserve some of the memory as a cache.

Going beyond that, we should have a global maximumMemoryUsage that is shared by all tilesets loaded in the scene.
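
One way to picture the shared budget, with part of it reserved as cache headroom to soften the thrashing mentioned above (the constants and helper are illustrative and do not exist in CesiumJS; it assumes each tileset reports its usage via something like Cesium3DTileset#totalMemoryUsageInBytes):

```js
// Illustrative only: a scene-wide byte budget shared by every tileset, with a
// fraction held back as cache headroom to reduce request thrashing near the
// limit. The constants and helper below do not exist in CesiumJS.
const GLOBAL_MAXIMUM_BYTES = 512 * 1024 * 1024; // shared by all tilesets
const CACHE_RESERVE_FRACTION = 0.1; // slack kept free to soften evict/reload cycles

function remainingBudgetBytes(tilesets) {
  const usedBytes = tilesets.reduce(
    (sum, tileset) => sum + tileset.totalMemoryUsageInBytes,
    0
  );
  const hardLimitBytes = GLOBAL_MAXIMUM_BYTES * (1 - CACHE_RESERVE_FRACTION);
  return Math.max(0, hardLimitBytes - usedBytes);
}
```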

PolanZ commented 3 years ago

Why not include the memory size of each tile in the tileset JSON?

lilleyse commented 3 years ago

@PolanZ that's a good idea and could be something that's included in a future extension to 3D Tiles, e.g. 3DTILES_tile_metadata. Just note that the extension is still a very early draft.
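
Purely for illustration, a hypothetical sketch of how a client could use such a per-tile size hint; the extension payload and the contentByteLength field below are invented, not part of any draft:

```js
// Hypothetical sketch: if the tileset JSON carried a per-tile content size
// (the extension payload and contentByteLength field are invented here), the
// client could skip requests that will not fit in the remaining budget instead
// of finding out the size only after download.
function canAffordRequest(tileJson, remainingBytes) {
  const sizeHint =
    tileJson.extensions?.["3DTILES_tile_metadata"]?.contentByteLength;
  // No hint: fall back to requesting and measuring afterwards.
  return sizeHint === undefined || sizeHint <= remainingBytes;
}
```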

asir6 commented 3 years ago

@lilleyse May I know if there's any update on this? We have several large models, and the page crashes after loading for a period of time. It would be ideal to have an option to limit global memory usage.

heylying commented 3 years ago

@asir6 Hi! I'd like to know how big your model is. I'm trying to load a large 3D Tiles model of more than 200 GB, and the page keeps running out of memory and crashing. Have you run into the same situation?

iiixxxiii commented 3 years ago

@heylying Have you found a solution? I have a 40 GB file to load.

heylying commented 3 years ago

@iiixxxiii On the one hand, we are trying to optimize the model files. On the other hand, I upgraded our old Electron version. At the same time I use --max-old-space-size=xxx to increase the memory limit for Node.js. Now it can at least load the model successfully.
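
For reference, one common way to pass that flag from an Electron main process is shown below; it must run before the app's "ready" event, and the 8192 MB value is only an example:

```js
// Electron main process: raise V8's old-space limit for the app. This must be
// called before the "ready" event, and 8192 MB is only an example value.
const { app } = require("electron");
app.commandLine.appendSwitch("js-flags", "--max-old-space-size=8192");
```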

apgk commented 2 years ago

> @iiixxxiii On the one hand, we are trying to optimize the model files. On the other hand, I upgraded our old Electron version. At the same time I use --max-old-space-size=xxx to increase the memory limit for Node.js. Now it can at least load the model successfully.

Any recent solutions to this? First, my setup:

  1. Hardware: Huawei display with an OPS PC (i7-8800, GTX 1050 Ti 4 GB, 16 GB RAM).
  2. Model: a single campus (3D Tiles, roughly 12 GB of files in total), Chrome 100, Cesium 1.89, with maximumScreenSpaceError 32 and maximumMemoryUsage 128. Problem: after loading, GPU memory usage sits around 3.6 GB; after leaving it idle for a while and then moving the camera, it crashes.

UniquePanda commented 1 year ago

Hi, I also just ran into this "trap".
Especially with point clouds it's hard to optimize every tileset in a way that prevents the browser from crashing when big datasets are loaded. So any solution that would enforce a real memory limit would be very helpful, although obviously the render quality will suffer in cases where visible tiles can't be loaded or have to be removed again. Still way better than a crashing browser tab. :D

Did anyone ever work on this? I'd be happy to have a look at it, "hack" something that works for us, and then maybe provide a more general solution or at least a starting point for someone else.

ggetz commented 1 year ago

Hi @UniquePanda,

No one is currently looking into this to my knowledge. We'd be happy to discuss a solution, or review a PR if you get to that point. Thanks!