-
The filesystem block cache should be reclaimed when memory is low and hence should not count towards used memory.
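A minimal sketch of how this accounting could be done on Linux, reading `/proc/meminfo` directly and treating `MemTotal - MemAvailable` as used memory, since `MemAvailable` already discounts reclaimable page cache and buffers (the field names and parsing below assume a Linux host and are not tied to any particular monitoring library):

```go
// Report "used" memory without counting the reclaimable filesystem block cache,
// by using MemAvailable rather than MemFree from /proc/meminfo (Linux only).
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readMeminfo parses /proc/meminfo into a map of field name -> value in kB.
func readMeminfo() (map[string]uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return nil, err
	}
	defer f.Close()

	fields := make(map[string]uint64)
	s := bufio.NewScanner(f)
	for s.Scan() {
		parts := strings.Fields(s.Text()) // e.g. "MemAvailable:   1234 kB"
		if len(parts) < 2 {
			continue
		}
		v, err := strconv.ParseUint(parts[1], 10, 64)
		if err != nil {
			continue
		}
		fields[strings.TrimSuffix(parts[0], ":")] = v
	}
	return fields, s.Err()
}

func main() {
	m, err := readMeminfo()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	total := m["MemTotal"]
	// MemAvailable already accounts for reclaimable page cache and buffers,
	// so this "used" figure does not count the filesystem block cache.
	used := total - m["MemAvailable"]
	fmt.Printf("used (excluding reclaimable cache): %d kB of %d kB\n", used, total)
}
```

Reporting `MemTotal - MemFree` instead would count the block cache as used, which is exactly the behaviour the snippet argues against.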
-
```go
// https://github.com/envoyproxy/go-control-plane/blob/main/pkg/cache/v3/simple.go
type snapshotCache struct {
	watchCount      int64
	deltaWatchCount int64
	log             log.Logger
	ads             bool
	// …
}
```
-
**Is your feature request related to a problem?**
- With the introduction of a Lucene-compatible loading layer within `NativeMemoryLoadStrategy`, the `IndexLoadStrategy.load()` takes care of loading t…
-
Hello,
I'm trying to implement an in-memory cache for the following function. The goal is to have a cache for the same types of fromdate and todate queries that are called with the functi…
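A minimal sketch of the kind of cache being described, assuming Go; `Result` and the `fetch` callback are hypothetical stand-ins for the real query function and its return type:

```go
// In-memory cache keyed by the (from, to) date range, guarded by a mutex so
// concurrent lookups are safe. fetch is the expensive query being cached.
package main

import (
	"fmt"
	"sync"
	"time"
)

type Result struct {
	Rows int // placeholder for whatever the real query returns
}

type rangeKey struct {
	From, To time.Time
}

type rangeCache struct {
	mu      sync.Mutex
	entries map[rangeKey]Result
	fetch   func(from, to time.Time) Result
}

func newRangeCache(fetch func(from, to time.Time) Result) *rangeCache {
	return &rangeCache{entries: make(map[rangeKey]Result), fetch: fetch}
}

// Get returns the cached result for an exact from/to pair, or runs the
// underlying query once and stores the answer for later calls.
func (c *rangeCache) Get(from, to time.Time) Result {
	key := rangeKey{From: from, To: to}
	c.mu.Lock()
	defer c.mu.Unlock()
	if r, ok := c.entries[key]; ok {
		return r
	}
	r := c.fetch(from, to)
	c.entries[key] = r
	return r
}

func main() {
	cache := newRangeCache(func(from, to time.Time) Result {
		fmt.Println("running query for", from, "->", to)
		return Result{Rows: 42}
	})
	from := time.Date(2024, 1, 1, 0, 0, 0, 0, time.UTC)
	to := time.Date(2024, 2, 1, 0, 0, 0, 0, time.UTC)
	cache.Get(from, to) // runs the query
	cache.Get(from, to) // served from memory
}
```

Note that the map key compares `time.Time` values with `==`, so two times that are `Equal` but carry different monotonic readings or locations would miss the cache; keying on formatted strings is a common workaround. A real implementation would also want an eviction policy (TTL or size cap) so the map does not grow without bound.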
-
## Describe the issue
We're using `S3AsyncClient` to process customer data from S3. As such, we create a shared instance of `NettyNioAsyncHttpClient` for our S3 clients. The `NettyNioAsyncHttpClient` ho…
-
Is it possible to also cache it in memory? I know that EGOCache caches to disk, but I'm using a UITableView and would like to cache in memory as well, to keep scrolling the UITableView smooth. Any …
-
### Background and motivation
We have two popular and well-known memory cache libraries available in .NET: `System.Runtime.Caching` and `Microsoft.Extensions.Caching.Memory`. As already des…
-
### Description
When using `OrthoPolynomialBase` children that implement `_fcache`, such as `Hermite2D` or [`Chebyshev2D`](https://github.com/astropy/astropy/blob/04dfd245ac7815e20908ff7d079f5ad98dd0…
-
PyTorch/XLA currently has no means to clear cached memory, i.e. something similar to `torch.cuda.empty_cache()`. This is relevant for benchmarking a model on both PyTorch CUDA and XLA:CUDA.
For com…
-
Hey All,
I have Vitality running on a 1-socket/4-CPU/8 GB Ubuntu VM. All files from GoesProc are written to a high-speed NAS, and Vitality also mounts the same file system on the NAS for processing images.…