shubhamtagra closed this issue 3 years ago
Another example, which might be more common: due to incoming task cancellations, CachingInputStream may fail to update BookKeeper even though it downloaded the data. This could be fixed by ensuring updateCacheAndStats is scheduled in CachingInputStream regardless of interrupts, but the proposed approach of moving the file-weight update into the `getCacheStatus` api would solve this case too.
Would one potential solution be to reject further loads into the cache above a given threshold (e.g. 90% of available disk space)?
Even if we add that new threshold, the same delayed-accounting condition can occur and cause this same problem.
It seems like to be completely accurate you'd need to evict at the same time you add to the cache.
Right @JamesRTaylor. The proposal tries to do that by performing evictions before the data is added, because "evict at the same time you add to the cache" cannot be guaranteed given the asynchronous nature of cache addition and metadata updates.
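The evict-before-add idea can be sketched as follows. This is a minimal illustration, not actual Rubix code: `DiskCache`, `reserve`, and `evictUntilFits` are hypothetical names, and real eviction would also delete the file from disk.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: reserve space and evict BEFORE the data lands, so accounted
// usage can never exceed capacity even though the actual write and the
// metadata update happen asynchronously later.
public class DiskCache {
    private final long capacityBytes;
    private long usedBytes = 0;
    // Access-order LinkedHashMap gives LRU iteration order for eviction.
    private final Map<String, Long> fileWeights =
            new LinkedHashMap<>(16, 0.75f, true);

    public DiskCache(long capacityBytes) {
        this.capacityBytes = capacityBytes;
    }

    // Called before a download starts: evicts until the new weight fits,
    // then charges it up front. Returns false if it can never fit.
    public synchronized boolean reserve(String fileKey, long weight) {
        if (weight > capacityBytes) {
            return false;
        }
        evictUntilFits(weight);
        fileWeights.merge(fileKey, weight, Long::sum);
        usedBytes += weight;
        return true;
    }

    private void evictUntilFits(long weight) {
        Iterator<Map.Entry<String, Long>> it = fileWeights.entrySet().iterator();
        while (usedBytes + weight > capacityBytes && it.hasNext()) {
            Map.Entry<String, Long> oldest = it.next();
            usedBytes -= oldest.getValue();
            it.remove(); // a real implementation would delete the cached file here
        }
    }

    public synchronized long usedBytes() {
        return usedBytes;
    }
}
```

Because the charge happens before the bytes are written, accounted usage is an over-estimate during the download window, which is the conservative direction we want.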
There is another option that gets us improved accounting (though not fully accurate) at a very low dev cost:
All downloading of data in async warmup happens in FileDownloadRequestChain. In that process:
What this gets us is that, without any intrusive changes, the cache is guaranteed never to go more than ~1GB beyond the configured threshold (10 download threads × 100MB of delayed accounting per thread).
This approach works only for the async-warmup case, because in sync warmup the metadata update goes over thrift, which would be too costly at the request rates we expect. Given that async warmup is the default and we want to deprecate sync warmup over time, this should be an acceptable solution.
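The bounded-delay accounting described above can be sketched like this. `ChunkedDownloader` and the accounting callback are stand-ins for illustration, not the actual FileDownloadRequestChain or BookKeeper APIs; the point is only that reporting every `chunkBytes` bounds unaccounted data to (number of download threads × chunk size).

```java
import java.util.function.BiConsumer;

// Sketch: report cached bytes to the accounting service every chunkBytes
// instead of once at the end of the request chain. With T download
// threads, unaccounted-for data is bounded by T * chunkBytes.
public class ChunkedDownloader {
    private final long chunkBytes;
    private final BiConsumer<String, Long> accountingUpdate;

    public ChunkedDownloader(long chunkBytes,
                             BiConsumer<String, Long> accountingUpdate) {
        this.chunkBytes = chunkBytes;
        this.accountingUpdate = accountingUpdate;
    }

    // Simulates downloading totalBytes for a file, invoking the
    // accounting callback whenever another chunk's worth has landed
    // (and once more for any final partial chunk).
    public void download(String fileKey, long totalBytes) {
        long downloaded = 0;
        long unreported = 0;
        while (downloaded < totalBytes) {
            long step = Math.min(chunkBytes, totalBytes - downloaded); // one read
            downloaded += step;
            unreported += step;
            if (unreported >= chunkBytes || downloaded == totalBytes) {
                accountingUpdate.accept(fileKey, unreported);
                unreported = 0;
            }
        }
    }
}
```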
Today, the disk space used by a file (and hence by the full cache) is calculated when the client calls the `setAllCached` api to inform BookKeeper that new data was cached. In this api handler, we update the disk usage of the file.

Consider a scenario where we are already at 100% utilization of the space configured for Rubix. Now a new file is read, say 100MB in one inputStream by the client. RemoteRRC or FileDownloadRRC would cache that 100MB first and then inform BookKeeper via `setAllCached` about the new data. BookKeeper will update the cache entry of this file with the new weight=100MB and then realize that evictions are needed to accommodate this data. It will then do the necessary evictions, but for some time Rubix's disk usage will exceed the configured percentage.

We can fix this by updating the weight of the file in the `getCacheStatus` api itself. When this api returns `LOCAL` blockStatus, we know the client will eventually bring those blocks in, so we can use this info to pro-actively update the weight of the file.

There might, however, be a mis-behaving client that calls `getCacheStatus` but never downloads the data, e.g. a task cancellation arrives before the download could start. In that case we would be over-estimating the disk usage. One way to fix this is to have a scheduled thread that updates file weights periodically based on actual usage on disk: real-time tracking stays conservative and will never cross the configured threshold, and the scheduled service corrects the over-estimations.
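The two-part scheme above (eager charging in `getCacheStatus` plus periodic reconciliation against the disk) can be sketched as follows. All names here are illustrative, not the actual BookKeeper implementation, and the disk-size lookup is injected as a function for testability.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.ToLongFunction;

// Sketch: charge a file's weight eagerly when getCacheStatus reports
// blocks as LOCAL (a conservative over-estimate), then periodically
// replace the estimates with actual on-disk sizes to correct for
// clients that never completed their downloads.
public class WeightTracker {
    private final Map<String, Long> weights = new ConcurrentHashMap<>();
    private final ToLongFunction<String> onDiskSize; // reads real usage

    public WeightTracker(ToLongFunction<String> onDiskSize) {
        this.onDiskSize = onDiskSize;
    }

    // Called from the getCacheStatus handler, before the client
    // actually downloads the blocks it was told are LOCAL.
    public void chargeEagerly(String fileKey, long bytes) {
        weights.merge(fileKey, bytes, Long::sum);
    }

    // Periodic reconciliation: overwrite each estimate with the file's
    // actual size on disk.
    public void reconcile() {
        weights.replaceAll((file, estimate) -> onDiskSize.applyAsLong(file));
    }

    public long totalWeight() {
        return weights.values().stream().mapToLong(Long::longValue).sum();
    }

    // Runs reconcile() on a fixed schedule; caller owns shutdown.
    public ScheduledExecutorService startReconciler(long periodSeconds) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(this::reconcile, periodSeconds, periodSeconds, TimeUnit.SECONDS);
        return ses;
    }
}
```

The design choice here is that errors are only ever in the safe direction between reconciliation runs: eager charging can over-count, but the cache never silently exceeds its configured threshold.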