AcademySoftwareFoundation / OpenImageIO

Reading, writing, and processing images in a wide variety of file formats, using a format-agnostic API, aimed at VFX applications.
https://openimageio.readthedocs.org

Extend ImageCache to handle deep files #1236

Open · lgritz opened this issue 8 years ago

lgritz commented 8 years ago

A few people have suggested that it would be helpful to extend ImageCache to properly manage "deep" files.

lgritz commented 8 years ago

@johnhaddon @HughMacdonald Adding you guys to the conversation.

I'm curious what you guys think about the following problem:

Deep files are often huge! In fact, a single deep file is commonly as large as the entire ImageCache "max_memory_MB" budget that lets us render a big movie frame using hundreds of ordinary textures. So a common issue, especially if the system encounters untiled deep files, is that as soon as a large deep image's pixels are requested, everything else gets booted from the cache, and ping-ponging between a couple of deep images can lead to terrible thrashing.
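To make that concrete, here's a minimal sketch of the setup in question (the filenames are hypothetical; the calls are the ordinary ImageCache API, in the pre-2.5 style where create() returns a raw pointer):

```cpp
#include <OpenImageIO/imagecache.h>
using namespace OIIO;

int main ()
{
    ImageCache *ic = ImageCache::create ();    // shared cache
    ic->attribute ("max_memory_MB", 500.0f);   // ample for ordinary textures

    // Hundreds of small tiled textures coexist happily in 500 MB...
    float texel[4];    // assumes the texture has <= 4 channels
    ic->get_pixels (ustring ("smalltex_0001.tx"), 0 /*subimage*/,
                    0 /*miplevel*/, 10, 11, 10, 11, 0, 1,
                    TypeDesc::FLOAT, texel);

    // ...but one request against a multi-GB deep file can evict everything
    // else, and alternating between two such files thrashes the cache.
    ImageCache::destroy (ic);
}
```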

I can think of a few possible approaches:

  1. You get what you deserve -- if you will be using deep files with your IC, it's wise to specify at least a few GB of cache, or you will get bad perf.
  2. Upon encountering a deep file, bump up the cache size if it's "too small" (whatever that means) to some minimum that will avoid thrashing, even if the app requested a smaller max_memory_MB. This is akin to how you currently CAN'T request a cache smaller than 10 MB (it gets clamped); perhaps the clamp should automatically be much higher if and only if it notices that there are deep files in play?
  3. Separate accounting limits for deep and flat files, e.g. max_memory_MB=500, max_deep_memory_MB=2000, so they don't compete against each other (see the sketch just below).
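For illustration, if we went with (3), the application-facing change could be as small as one new attribute. A sketch only: max_deep_memory_MB is merely the name proposed above, not an existing OIIO attribute (max_memory_MB is real):

```cpp
#include <OpenImageIO/imagecache.h>
using namespace OIIO;

ImageCache *ic = ImageCache::create ();
ic->attribute ("max_memory_MB", 500.0f);        // budget for flat images (real attribute)
ic->attribute ("max_deep_memory_MB", 2000.0f);  // hypothetical separate deep budget
```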

Other suggestions?

I'm kinda leaning toward (2), but very open-minded about what would be the right solution.

HughMacdonald commented 8 years ago

I think I prefer (2) as well. If you're already enforcing a minimum cache size, then using the same logic for deep data, just with a larger minimum, sounds reasonable. I wouldn't suggest going too high with it. On the whole, though, I think it should be the user/developer's responsibility to set the cache to a reasonable size, à la (1).

MrKepzie commented 8 years ago

From my point of view, 2) may lead to the application being unaware that the ImageCache is suddenly starving it of memory. This does not apply only to deep files but more generally to images with radically different content: e.g. the cache may contain 640x480 tiled images, and then at some point the user tries to load 4K untiled EXR files (or deep files, or any other huge files).

There's no perfect solution to this, but what absolutely must be respected BY OIIO is the memory limit imposed by the application; otherwise the application can hit swap relatively easily, which may lead to terrible performance.
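For what it's worth, an application can already watch whether the cache is honoring its budget by polling the runtime statistics. A sketch, assuming the stat:cache_memory_used query (one of the ImageCache statistics attributes, reported in bytes):

```cpp
#include <OpenImageIO/imagecache.h>
#include <cstdint>
using namespace OIIO;

// Returns true if the cache is currently over the budget the application
// asked for. "stat:cache_memory_used" reports bytes.
bool over_budget (ImageCache *ic)
{
    int64_t used = 0;
    ic->getattribute ("stat:cache_memory_used", TypeDesc::INT64, &used);
    float budget_mb = 0.0f;
    ic->getattribute ("max_memory_MB", TypeDesc::FLOAT, &budget_mb);
    return used > int64_t (budget_mb) * 1024 * 1024;
}
```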

To respect the memory limit, there are two options upon encountering a huge file:

1) Throw everything else out and make sure this file is cached
2) Do not cache this file at all

In my experience, scenario 1) is likely to suit any application where the user is actively editing a "current image" (needing specific tiles or layers on demand, etc.): in this case the application cannot know what actions the user is going to take, and must provide fast results, i.e. read the bare minimum from the file and cache it.

Scenario 2), on the other hand, would suit any kind of playback situation, where we cannot fit two consecutive images in the cache anyway, and where the application, when computing a frame, already knows from its own settings all the layers/tiles that will be requested from that particular image.
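A sketch of how an application could approximate scenario 2) on its own today: detect deep files up front and read them directly through ImageInput, keeping them out of the cache entirely (this uses the modern 2.x API, where open() returns a unique_ptr and the deep read takes explicit subimage/miplevel arguments):

```cpp
#include <OpenImageIO/imageio.h>
#include <OpenImageIO/deepdata.h>
#include <string>
using namespace OIIO;

// Deep files never enter the ImageCache; they are read directly instead.
bool read_deep_uncached (const std::string &filename, DeepData &dd)
{
    auto in = ImageInput::open (filename);
    if (! in)
        return false;
    bool ok = false;
    if (in->spec().deep)    // deep file: bypass the cache
        ok = in->read_native_deep_image (0 /*subimage*/, 0 /*miplevel*/, dd);
    in->close ();
    return ok;
}
```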

HughMacdonald commented 8 years ago

Hi Larry et al.,

Has there been any further thought on this? I can certainly understand where @MrKepzie was coming from, and I'd feel that @lgritz's option 1) is also a very reasonable approach - leave it up to the application (an app like Gaffer might have a much higher max memory limit than a tool that is generally expecting small tiles)