Closed antonfirsov closed 6 years ago
About processing large images: there's a common issue in .NET, which is that very large memory chunks are not moved during a GC, which can result in out-of-memory exceptions even when there's enough memory available.
A solution I found for this in the past was to use a low-level bitmap primitive that split images into chunks of 64x64 pixels, all wrapped with a generic IBitmap.
@vpenades the reason the CLR does not move objects larger than ~85 KB is the negative performance impact. Objects that big are stored in the LOH. Splitting the images into movable chunks disables that optimization and will negatively influence performance.
A large image still reserves the same amount of memory if split into chunks. Using the power of streams and the filesystem seems to be the better solution here.
@Toxantron Not everybody is looking for performance; some years ago I was tasked with processing images of about 20,000 x 15,000 pixels, which allocated around 1 GB of RAM in a single chunk. We also discovered that when compiling for x86, even if theoretically you can allocate 4 GB, the .NET Framework runtime was actually limited to 2 GB only, essentially preventing the load of a single image of that size.
In practice, it was nearly impossible to load such an image, because any time any object fragmented the LOH, it was impossible to allocate more than 800 or 900 MB in a single chunk, and we ended up with out-of-memory exceptions.
We looked at every single image library available at the time (5+ years ago), and we found that all of them allocated a single chunk.
We ended up developing our own custom image library which split images into chunks of less than 64 KB, to allow the runtime to defragment memory, at the cost of performance.
It is true that time has passed, memory is not as scarce as it was 5 years ago, and x64 is commonplace these days... but don't assume people are going to use image libraries the same way, or for the same reasons, as you use them. For example, there are lots of areas related to scientific analysis, or design artists working with large form factors, that involve extremely large images.
@vpenades Did you take a look at ImageMagick back then? It can handle extremely large images.
p.s. I am one of the authors.
@vpenades An MMF-based memory manager could achieve the same tradeoff (running slower, but using less memory) in a simpler way.
BTW there is a new property, GCSettings.LargeObjectCompactionMode, since 4.5.1 to address the LOH fragmentation issue.
@vpenades thank you for the clarification.
@antonfirsov I found that value as well. It comes with performance impacts, but at least we can leave it to the GC then and not implement our own version.
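For reference, a minimal sketch of using that property (a real API since .NET Framework 4.5.1; the compaction itself happens during the next blocking full collection):

```csharp
using System;
using System.Runtime;

class LohCompactionDemo
{
    static void Main()
    {
        // Request a one-time compaction of the Large Object Heap.
        // The setting automatically resets to Default once a blocking
        // Gen2 collection has performed the compaction.
        GCSettings.LargeObjectCompactionMode =
            GCLargeObjectHeapCompactionMode.CompactOnce;
        GC.Collect();

        // Typically back to Default here, after the blocking collection.
        Console.WriteLine(GCSettings.LargeObjectCompactionMode);
    }
}
```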
@dlemstra I think we tried, but if I recall correctly, we had std:boost compiling issues with other third party libraries.
@antonfirsov that's great to know the GC also has the compact large objects feature, I wish I had it a while back.
@Toxantron 😄
Some feedback on this: we currently use ImageSharp for all our image resizing needs in a cloud application, and the optimization of speed over memory usage seems to run counter to the idea of horizontal scaling, because of how it holds onto memory. We'd like to provision many containers with a 2 GB memory limit, but currently if we did that, we'd end up with all those containers using 2 GB all of the time. Currently, we run only a few containers limited to 4 GB and they frequently get OOM-killed by Docker, even though I already limited image resizing concurrency. This high memory usage seems to be mostly caused by a few outliers in image sizes: most images we process are in the 500 KB range, but some are 10-20 MB, and one of those consumes 500+ MB of memory just to decode, which then gets cached indefinitely. Do that concurrently and memory quickly climbs to 2-4 GB and never gets released.
From this issue, I can tell you're working on making it configurable and pluggable, but it sounds like a lot of very fundamental work and somewhat long-term. Is there any chance you would accept a more short-term quick fix? Correct me if I'm wrong here, but from what I can tell, most of our memory usage comes from the caching of Block8x8, which is done by the Jpeg decoder, and the memory is provided by the PixelDataPool. If you were to add properties to Configuration that allowed you to set the maxArrayLength / maxArraysPerBucket that get passed to ArrayPool.Create, you could turn off/tweak array pooling for a common scenario.
Something like:
Configuration.Default.Memory.PixelDataPool.MaxArrayLength = 0;
Configuration.Default.Memory.PixelDataPool.MaxArrays = 0;
I understand that this would expose some internal implementation details, but I think (I haven't tried it yet, might later) it would also solve some real issues.
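For context, the two knobs under discussion map directly onto the real ArrayPool&lt;T&gt;.Create overload; a minimal sketch with arbitrary values:

```csharp
using System.Buffers;

class PoolDemo
{
    static void Main()
    {
        // maxArrayLength: requests above this size bypass the pool and
        // become ordinary GC allocations. maxArraysPerBucket bounds how
        // many arrays of each size class the pool retains.
        ArrayPool<byte> pool = ArrayPool<byte>.Create(
            maxArrayLength: 100_000,
            maxArraysPerBucket: 10);

        byte[] buffer = pool.Rent(80_000); // may return a longer array
        try
        {
            // ... use buffer ...
        }
        finally
        {
            pool.Return(buffer);
        }
    }
}
```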
@JulianRooze I plan to realize my roadmap in February, so it will be part of our 1.0 release or maybe even some beta. Is this too late for you?
The only thing I can propose as a quick fix is to add a temporary configuration API to control the behavior of PixelDataPool for beta-3:

[Obsolete("This is a temporary API, use it at your own risk!")]
public static class PoolConfiguration
{
    [Obsolete("This is a temporary API, use it at your own risk!")]
    public static int MaximumPooledBufferSizeInBytes { get; set; }
}
@JimBobSquarePants what do you think?
Question: I thought the behaviour of ArrayPool was to keep the returned arrays as weak references, so if not used, or if the GC needs memory, these arrays would be reclaimed by the GC at some point.
But from what I'm reading, does this mean the ArrayPool keeps the arrays forever?
@vpenades
[...] if not used, or the GC needs memory, these arrays would be reclaimed by the GC at some point
The GC runs quite frequently, which means that in this case the "pooled" arrays would be GC'd almost instantly in most cases. This would go against the whole concept of pooling.
@JimBobSquarePants another (less dirty) proposal: by lowering the value of MaximumExpectedImageSize, we could ensure that those large outlier images do not eat up the memory of users with limited environments, while still having the pooling mechanism for more common smaller images.
@antonfirsov So, from what you're saying, I understand the only proper way to use ImageSharp is in a short-lived command line application that starts, does some image processing, and exits soon after?
What about these use cases:
- We wanted to use this technique as a trick to reduce application size in Android/iOS apps. But, given the limited memory of these devices, it's simply not acceptable to leave an ArrayPool sitting there for the rest of the application's lifecycle... In Android/iOS, 100 MB can be the difference between staying alive or getting killed by the OS.
- It's not like I neglect performance optimizations; I really appreciate the efforts you're all making to optimize and improve ImageSharp's performance... but a memory object that, once it's no longer used, can never be reclaimed by the GC is, by all means, a memory leak.
Found this: ArrayPool's author is not fond of ArrayPool.Shared for this very same reason.
@antonfirsov Thanks for the quick reply! No that's not late at all, much sooner than I expected it to be completed 👍
And in the meantime the PoolConfiguration class would be a very welcome addition.
Limiting MaximumExpectedImageSize would also help, but the effect would be smaller, as Block8x8 buffers seem to make up the majority of cached buffers (at least in our use case of decompress JPEG > resize > encode as JPEG).
This is the cached memory after resizing a large (20 MB, 8000x5000) JPEG concurrently (5 at a time) in a loop:
@antonfirsov @vpenades @JulianRooze
This is all great input, thanks for contributing!
Regarding PoolConfiguration: I don't think adding a temporary API would be a wise idea. I'd much rather we focused on getting the design right with the help of the community.
If we continue our design discussion here and also ensure that the PR is delivered as WIP so that interested parties can chip in as it's developed then I'm sure we can deliver something powerful.
Regarding Block8x8: we'll definitely have to alter our behaviour within PixelDataPool<T> to act a bit smarter. We currently create a block of 50 arrays at an arbitrary length (matching the default in ArrayPool.Shared) per T. @JulianRooze You've highlighted the issues there well.
Is 50 too much in this instance? Definitely. Per jpeg we max out at 4 Components. I would probably limit this pool to 4 x Processor.Count to handle parallelism.
Whatever we go with, let's ensure individual implementations are provided enough information to make good decisions and work granularly.
I see this stuff as a major priority so am happy to delay beta 3 until we have it in place.
@vpenades I did not know that about the shared pool. We don't use it for Buffer<T> and co, but we might elsewhere still.
@JimBobSquarePants @antonfirsov Just consider that one of the scenarios in which we wanted to use ImageSharp was an Android app that is required to do some image processing at startup and then carry on with other stuff.
On 1 GB RAM Android devices, apps throw OutOfMemory when they get close to 700 MB of usage. So after ImageSharp processing is completed at startup, we need the GC to free every single byte used, even small byte arrays for small images... on Android/iOS, every byte counts.
My suggestions:
@vpenades you are totally right about the issues you pointed out! I share all your concerns and fixing up our internal memory management is my top priority as soon as I get back to work on ImageSharp in February!
I just want to point out that having expensive resources retained by a pool for a longer period is just normal pooling behavior by definition. (Just have a look at other pooling mechanisms in the .NET framework: thread pools, connection pools, etc.) So I disagree with your suggestions: having temporary "pools" is not pooling. It would hurt performance for the majority of our server users. (The vast majority of our user base!)
The big issue is that our pooling mechanism is not configurable at the moment, which makes the library perform poorly in many scenarios, like yours. As I said, this is a top concern for me! I believe that solving this by providing generalized memory management is a very worthy strategy in the long term, because it will enable cool features, great flexibility + integration with new Microsoft API-s & other libraries.
Re ArrayPool<T>.Shared: for me it was clear from the beginning that it's a design mistake :P We are not using it AFAIK. We concentrated all our "Memory as a Resource" logic into the PixelDataPool and Buffer<T> classes, so it would be easier to refactor + customize this behavior library-wide.
@JulianRooze it's strange to see the Jpeg decoder still eating up that much. Are you using beta-2?
Btw. the logic in CalculateMaxArrayLength is totally stupid. Gonna replace it in a lightweight PR in a way that will also help on the Jpeg decoder + Block8x8 issue.
@antonfirsov yes, we're running beta-2. It's a rather extreme image though, at 8000x5000.
@antonfirsov I'm hoping some of the new SIMD APIs coming will allow us to run the DCT without having to use singles. We could drop the whole thing by 75% then.
Gonna replace it in a lightweight PR in a way that will also help on the jpeg decoder + Block8x8 issue
@antonfirsov When do you expect to do this?
In my case the problem is the same - we run a web service in a Docker container with very limited memory resources, so I would highly appreciate quick changes. We're mostly using the png and jpeg decoders.
For now, is there any way for me to make some kind of PoolConfiguration (as you proposed earlier) and make a private build?
The way I had envisioned the updated IMemoryManager-based API working would be as so.
We would add a new interface IMemoryManager and expose a public property on Configuration called MemoryManager. Also add IMemoryManager MemoryManager { get; set; } to IImageEncoder for overriding the memory manager on a per-encoder basis.
All places that currently call Buffer<T> or Buffer2D<T> should be replaced with calls that back onto the MemoryManager sourced from either the encoder or the image's configuration.
public interface IMemoryManager
{
    IBuffer<T> Allocate<T>(MemoryUsageHint hint, int size);
}

public enum MemoryUsageHint
{
    // Very short term buffer, things like a temp buffer for
    // passing around a single row of an image.
    Temporary,

    // Lifetime of an entire ImageProcessor action; we should only be
    // requesting a few of these per image-processor run, but they will
    // usually be a larger size, most likely the size of the image buffer.
    Process,

    // This memory will live for the lifetime of the image
    // itself (unless switched out during resize).
    Image,
}

// This is basically an interface wrapper around some owned memory.
// (Dispose comes from IDisposable.)
public interface IBuffer<T> : IDisposable
{
    // This is used to efficiently switch the backing data
    // of one buffer with another; we do this in some ImageProcessors.
    // Some backing providers could do a pixel copy, others could
    // just return false for "not supported", whereas others would
    // actually switch out the backing array of each.
    bool SwitchBuffer(IBuffer<T> buffer);

    int Size { get; }

    Span<T> Span();
}
We would then have extension methods targeting IMemoryManager that expose Buffer2D<T> Rent2DBuffer<T>(this IMemoryManager mng, MemoryUsageHint hint, int width, int height) and probably some simpler IBuffer<T> TemporaryBuffer<T>(this IMemoryManager mng, int size), IBuffer<T> ProcessBuffer<T>(this IMemoryManager mng, int size), IBuffer<T> ImageBuffer<T>(this IMemoryManager mng, int size) to simplify internal usage.
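A possible shape for those helpers, assuming the IMemoryManager / IBuffer&lt;T&gt; / MemoryUsageHint sketch from this thread (proposal names, not a shipped API; spelling normalized to "Temporary"):

```csharp
// Thin wrappers so call sites don't have to spell out the hint.
// IMemoryManager, IBuffer<T>, and MemoryUsageHint are the proposed
// interfaces from this discussion.
public static class MemoryManagerExtensions
{
    public static IBuffer<T> TemporaryBuffer<T>(this IMemoryManager mng, int size)
        => mng.Allocate<T>(MemoryUsageHint.Temporary, size);

    public static IBuffer<T> ProcessBuffer<T>(this IMemoryManager mng, int size)
        => mng.Allocate<T>(MemoryUsageHint.Process, size);

    public static IBuffer<T> ImageBuffer<T>(this IMemoryManager mng, int size)
        => mng.Allocate<T>(MemoryUsageHint.Image, size);
}
```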
Our initial implementation for this should probably do the following:
- MemoryUsageHint == Temporary: use an array pool if size < a tempThreshold, else use a new array.
- MemoryUsageHint == Process: use an array pool if size < a procThreshold, else use a memory mapped file (or just a simple array for now).
- MemoryUsageHint == Image: use a new array if size < an imgThreshold, else use a memory mapped file (or just a simple array for now).
The array pool can actually be backed by a single byte[] pool; the IBuffer<T> can use Span.UnsafeCast<T>() (or whatever the API is) to convert the backing byte array into a Span<TPixel>, allowing us to use a single array pool for all the memory across all the pixel types.
For the few places where it's actually not possible to get an IMemoryManager to (Regions spring to mind), we can probably just get away with using a separate, less used, array pool that's entirely divorced from the memory manager until the API can be amended to get access to it.
As we would be required to expose IBuffer<T> on the public API, we should probably add a new constructor to Image
Nice. @tocsoft, the thing that jumps out at me is the SwitchBuffer API, though -- given that an IBuffer<T> can come from any number of sources, does it make sense for an MMF-backed IBuffer to switch backing buffers with an ArrayPool-backed IBuffer?
It depends on the actual implementation of the memory manager, but most likely no, it wouldn't; if the actual backing stores are different/incompatible, it would most likely do a Span -> Span pixel copy instead.
Basically, if the 2 IBuffer<T> instances were sourced from a single IMemoryManager, then I would expect them to be able to switch out some internal store reference; but if they were sourced from different IMemoryManager instances, then they would probably do the pixel copy instead, or fail/return false if the copy can't happen because the left-hand buffer doesn't have the internal capacity to handle the size of the right-hand buffer.
@tocsoft @rytmis There are a few non-trivial limitations making it impossible to back everything with byte[] + a single ArrayPool<byte> + an unsafe cast:
- Extract a T[] array from IBuffer<T>: several API-s need it (SIMD, streams). Newer, Span<T> and Memory<T> based API-s are on the way, but not yet ready + unsupported in current .NET Framework versions. (Most likely it's gonna be NETStandard 2.x.)
- byte[] -> T[] conversion? Maybe this?
byte[] a = new byte[42];
T[] b = Unsafe.As<T[]>(a);
Tried it, doesn't work! :( It would make SIMD and stream API-s fail. (We actually had a SIMD + 32-bit VM related issue earlier because of this trick.)
- "Extract, or mimic the extraction": with unmanaged memory buffers we need to copy the bytes into an array + work + copy it back to the Span<T>. I have an idea to do this in a uniform, safe way, but it's really tricky; gonna explain it later.
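For what it's worth, the reinterpretation that does work is at the span level rather than the array level. MemoryMarshal.Cast (from System.Runtime.InteropServices in the System.Memory surface, which may postdate this discussion) reinterprets the bytes without lying about the array object's type, which is exactly why array-consuming APIs still can't use it:

```csharp
using System;
using System.Runtime.InteropServices;

class CastDemo
{
    static void Main()
    {
        byte[] bytes = new byte[16];

        // Reinterpret the 16 bytes as 4 uints, at the span level only.
        Span<uint> uints = MemoryMarshal.Cast<byte, uint>(bytes);
        uints[0] = 0x01020304;

        // The underlying object is still a byte[]; any API that needs a
        // real uint[] (older SIMD/stream overloads) is out of luck.
        Console.WriteLine(bytes[0]); // 4 or 1, depending on endianness
    }
}
```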
Actually... we can probably make this work! Or at least, if it's true that all our decoders read their data into temporary byte[] arrays, and we don't need that IBuffer<T> <-> Stream interop, we can get rid of all array usages and go with Span<T> everywhere!
@tocsoft @rytmis If you also think it's possible, then never mind my previous comment :)
I haven't worked with Span<T> yet, but doesn't it have the limitation of being a stack-only type? Meaning, the compiler won't let you have an IBuffer<T> interface that stores a Span<T>, because the interface's backing object would live on the heap while Span<T> could be a wrapper around stack-allocated bytes. You can only return it from methods or hold it inside value types that are also stack-only. But I could be wrong here; that's just what I remember reading.
@JulianRooze I think you're right... I have the feeling that Span<T> is intended to be used internally by advanced developers to improve performance, but not as a type to be passed around as part of a public API.
@vpenades I think Memory<T> is supposed to serve that purpose:
https://github.com/dotnet/corefxlab/blob/master/docs/specs/memory.md
@JulianRooze @vpenades Yeah, Memory<T> rocks!
Span<T> should always be constructed on the fly when returned from properties/methods; this way you can ensure it's kept on the stack. We have to be very careful to never store it as a member + never capture it in a lambda.
The official guide on perf-centric API-s:
- Span<T> on synchronous API surfaces
- Memory<T> on asynchronous ones
The problem is that the second one is not yet available in the official System.Memory beta package. This is exactly the reason why we removed all Span<T> API-s from our public API surface until it's released.
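The "construct on the fly" rule can be illustrated like this (class and member names are purely illustrative):

```csharp
using System;

public sealed class PixelRow
{
    private readonly uint[] data = new uint[256];

    // Fine: the Span<uint> is created at the call site and lives only
    // on the caller's stack frame.
    public Span<uint> Span => data;

    // Would NOT compile: ref structs like Span<T> cannot be fields of
    // a class, because they must never end up on the heap.
    // private Span<uint> cached;
}
```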
@denisivan0v, take a look at #431 -- if you set Configuration.Default.MemoryManager = new NullMemoryManager();, all allocations should be regular GC heap allocs with no pooling. Note that currently this PR comes with no warranty attached. ;)
[edit]
Most allocations, that is. There's still some direct ArrayPool usage left even after I killed off PixelDataPool.
I've been trying to understand how ArrayPools are being used throughout ImageSharp... and I would be glad to get some more insights on it...
For example, I've seen they're extensively used in OrigHuffmanTree. They're statically created, so once you create a single OrigHuffmanTree, the pools exist and there's no way of disposing them, even if you're not going to load any JPEG for the rest of the application's lifetime.
I understand they exist for performance reasons, and the fact that they're statically created gives performance benefits when loading multiple jpegs, one after another.
So, what about a compromise solution?
I mean, at CreateHuffmanTrees(), we could create non-static array pools, and when constructing the tree, we could pass the instances of the pools to each OrigHuffmanTree object.
So loading a single jpeg would benefit from ArrayPools, and when decoding is finished, the GC would eventually reclaim the ArrayPools along with the OrigHuffmanTree objects.
It's true that multiple jpeg loads would recreate the array pools, so some performance would be lost here... that's why I called this a compromise between performance and memory management.
@vpenades I think you're really missing the concept of pooling. The lifecycle of a pool should be bound to the session or application lifecycle. Per-request object reuse: I wouldn't call that pooling.
Don't get me wrong, we have several issues with our current implementation which are gonna be fixed with #431. I'm quite sure that in a server application you don't need stuff like temporary "pooling" or disabling pools. Fine-tuning the parameters should be a knife sharp enough to allow optimizing your service throughput.
I'm unsure, however, what the deal is on Xamarin. It's an entirely different runtime. But with #431 you will be able to implement your idea and use temporary pools for individual resizing requests! :)
@antonfirsov I'm not working with servers, actually. Right now, my two main use cases with ImageSharp are these:
SmartDevice game development: use ImageSharp at application startup to procedurally generate some textures (by combining, blending, resizing and all sorts of transforms), while presenting a progress bar to the user. Then, run the game for hours. With ImageSharp's current behaviour, the pools would remain unused, keeping valuable memory from the rest of the app.
A scriptable processing pipeline, not only for images but for other stuff too... so if the user creates a script that first processes some very big images, and after that the script continues with audio or video processing, then all the memory previously used by ImageSharp cannot be reclaimed.
My feeling is that ImageSharp's memory management is designed as if you're going to use ImageSharp, and only ImageSharp, continuously for the application's entire lifetime. If that's the case, then it's fantastic, but it will underperform in all other cases in which it's used sparsely or in combination with other tasks.
I think we need something like a MemoryManager.ReleaseAllRetainedResources() method for this :)
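A trim method like that is easy to sketch on top of an ArrayPool-style manager (everything here is hypothetical; the real API shape was still being designed at this point in the thread):

```csharp
using System.Buffers;

// Hypothetical: a manager that can drop all retained arrays on demand
// by replacing its pool, leaving the old arrays to the GC.
public sealed class TrimmableMemoryManager
{
    private ArrayPool<byte> pool = ArrayPool<byte>.Create();

    public byte[] Rent(int size) => pool.Rent(size);

    public void Return(byte[] array) => pool.Return(array);

    // After this call the previously pooled arrays are unreferenced
    // (except those still rented out) and become ordinary garbage,
    // so the GC can reclaim the memory.
    public void ReleaseAllRetainedResources()
        => pool = ArrayPool<byte>.Create();
}
```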
@tocsoft do you have any concrete suggestions on implementing your MemoryUsageHint proposal with ArrayPoolMemoryManager? I can't see any differences in the actual lifecycle for the Temporary/Process/Image cases.
Not really in terms of specifics; I think I'm trying to future-proof the API somewhat (for when we decide to introduce memory mapped files as an alternative backing store) and let anyone who wants to tweak how memory is allocated and/or retained (i.e. custom managers) have the most information about how the memory will be used, to help them decide where the buffer should be sourced from.
What drove me to suggest the hints was just thinking about some of the lifetimes (and thus sizes) of the buffers we currently request during a single ImageProcessor call.
We will have a single long-term retained buffer for the image pixel buffer. These can easily be backed by memory mapped files etc. as they are much longer lived.
We then have short-lived buffers that will store a copy/variant of the entire pixel buffer for the duration of a processor call (maybe 1 or 2 per call).
Then we have all the smaller items (usually no bigger than the image width); we request a lot more of these to do line-level processing and pixel blending and passing into shapes for edge detection etc. We request these a lot more often (many concurrently) than the others, but only for very short lifetimes (the duration of a single line processed). These, for example, should never be backed by memory mapped files, as I can imagine they would be much slower, and possibly
I see, thanks!
Good question whether this model is future-proof enough, though, because: in the case of Memory Mapped Files it might be more efficient to read/write blocks into a temporary buffer than to use Span<T>-s pointing right into the MMF. I don't know for sure, though, because I'm not yet familiar with MMF-s.
This statement is not true IMO:
We will have a single long term retained buffer for all the image pixel buffer.
In most use cases (eg. thumbnail generation in stateless services), Image<T> is just as temporary as the Block8x8 buffers in the Jpeg decoder. And Block8x8 buffers are almost as large as the resulting Rgba32 buffer. It's hard to distinguish them.
I've also been thinking on the Buffer<T>.Array topic.
It seems that we need it only in special cases, to interop with Stream API-s, which always use byte[] across their full API surface. In these cases, the MemoryManager must return a buffer backed by a managed byte array, regardless of the MemoryManager implementation.
I think we need a type-safe solution for this, distinguishing temporary byte buffers from the rest of the memory buffers:
interface IBuffer<T> : IDisposable
{
    ...
}

interface IManagedByteBuffer : IBuffer<byte>
{
    byte[] Array { get; }
}

interface IMemoryManager
{
    IBuffer<T> Allocate<T>(MemoryUsageHint hint, int size);
    IManagedByteBuffer AllocateManagedByteBuffer(int size); // always temporary!
}
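A minimal managed-array implementation of the sketched interface (again, these are the proposed interfaces from this thread, not a shipped API) shows why Stream interop forces a real byte[]:

```csharp
using System;

// Assumes the IBuffer<T>/IManagedByteBuffer sketch above.
sealed class ManagedByteBuffer : IManagedByteBuffer
{
    private readonly byte[] array;

    public ManagedByteBuffer(int size) => array = new byte[size];

    // The whole point of the interface: a real managed array for
    // byte[]-only consumers such as pre-Span Stream overloads.
    public byte[] Array => array;

    public int Size => array.Length;
    public Span<byte> Span() => array;
    public bool SwitchBuffer(IBuffer<byte> other) => false; // not supported here
    public void Dispose() { }
}

// Usage sketch: pre-Span Stream overloads only accept byte[] + offset +
// count, so a decoder needs the Array property, not just a Span:
// stream.Read(buffer.Array, 0, buffer.Size);
```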
Okay, there's yet another thing I wanted to do that might bomb the ArrayPools. I wanted to experiment with (software based) anisotropic filtering; for that I need to create several mipmaps of an image at different sizes. In fact, for a 256x256 image, where normal mipmapping needs 8 more variations down to size 1x1, with anisotropic filtering you need 64 variations!
@vpenades I would be more than happy to have a look at your experiments. Could be a good basis for benchmarks. Is it possible to pack your code into a standalone demo console app?
@antonfirsov Some of it is located here but it's missing documentation, so I don't think you can use it straight away.
Long story short, ÜberFactory is one of my pet projects, it's like a content processor you build by connecting nodes hierarchically, here's a screenshot:
It's a general-purpose content processor, it's plugin based, and ImageSharp is used there as a plugin. I've put huge effort into making plugins as easy as possible to create, so theoretically you can create your own plugins for audio processing, 3D model conversion, etc. You can batch several tasks in a single project and run it with a command line app.
Looks cool! :) Reminds me of Project Gemini a bit, but your focus is quite different. The best thing is that it builds without issues for me! :) Let me know if you can share some code + reproduction steps which could be used to stress ImageSharp with mipmap generation.
@antonfirsov I've added a small step-by-step guide here. Alternatively, you can check out the latest version and, after building, load Epsylon.UberFactory.Editor.Tests\ImageSharp Plugin Tests.uberfactory
If it's UberFactory related, feel free to open issues or comment there! 😄
An update on our situation:
I created a custom NuGet package of ImageSharp where I introduced the PoolConfiguration that @antonfirsov suggested as a workaround (but ultimately decided not to pursue) and configured it as such:
PoolConfiguration.MaximumPooledBufferSize = 100_000;
PoolConfiguration.MaxArrayCount = 10;
The values were arbitrarily chosen, but this seems to work well for us; idle memory use is never over 500 MB. The containers still regularly get killed by Docker for exceeding 4 GB, but that now has more to do with the CoreCLR being reluctant to run the GC than with ImageSharp memory pooling. When I manually run a GC on a container with high memory usage (I added a query parameter to force a GC or a LOH compact 😅), it drops right back to a few hundred MB. If I were to disable memory pooling entirely, then I'm sure it would fall back to 10 MB or so.
The package can be found here:
https://www.myget.org/feed/newblack-public/package/nuget/SixLabors.ImageSharp
@JulianRooze I think merging #436 should lead to very similar results, but in a future-proof way, with no temporary APIs.
~Is your package on your own MyGet feed, or is it a public package on nuget.org? If it's on nuget.org, can you please remove it, and move it to MyGet? We are strongly against unofficial packages, because we have no control over them.~
The official solution is on the way; the official half-solution is going to be merged today!
~Your input about this topic was really valuable and appreciated, it really helped me a lot to figure things out. I would be really happy, if we could find a solution inside the box, which is good for all parties! :)~
@JulianRooze sorry, stupid me, haven't noticed your link, need more ☕️. Nevermind my previous comment!
Problem
- ArrayPool behavior does not fit user needs (see #222). Users should be able to configure + reset + disable pools.
- It should be possible to back Image<T> by memory resources other than managed memory (eg. native buffers).
Solution
Implement a pluggable MemoryManager, make the default pooling memory manager configurable.
Update: feature/memory-manager is the WIP branch for this refactor. It has #431 merged. If anyone wishes to contribute, please file your PR-s against that branch. The tasks below are being updated according to the progress on that branch.
Tasks for beta-3
- Refactor all ArrayPool and PixelDataPool usages to use Buffer and Buffer2D instead.
- Buffer2D<T> should compose Buffer<T> (as IBuffer<T>) instead of inheriting it.
- Move the construction of Buffer<T> instances to a generic MemoryManager class. Refactor all new Buffer<T> and new Buffer2D<T> instantiations to call a factory method on MemoryManager instead.
- ArrayPoolMemoryManager
- DefaultPixelBlenders<TPixel> should use a MemoryManager taken from the outside.
- Replace the PixelArea<TPixel> class with BufferArea<T> + extension methods working on BufferArea<TPixel> and/or IBuffer2D<TPixel> using PixelOperations<TPixel>.
- Replace PixelAccessor<TPixel> usages with Buffer2D<T>. ~Drop the IBuffer2D<T> interface.~ - the full removal could be done later.
- MemoryManager should construct IBuffer<T> and IManagedByteBuffer instead of Buffer<T>; codecs should use IManagedByteBuffer.Array.
Probably 1.0
- MemoryManager to codecs
Most likely post-1.0:
- MemoryManager to Mutate() and Clone() (---> #650)
- System.Memory API-s: Memory<T> and OwnedMemory<T>. Find a way to integrate these classes into ImageSharp.Memory.
- Image.Wrap<T>(Memory<T> memory) or similar
- MMFMemoryManager which uses Memory Mapped Files to reduce the pressure on system memory when processing large images.