antonfirsov closed this issue 4 years ago.
Yes, please! I wanted to split the Sobel edge detection processor into separate X and Y processors, but found that all the required classes and their dependencies are marked internal.
Yeah, we’ve been keeping them internal while we work on the API. Is there something missing from the current edge detection processors? We could work together to fix that.
In particular, I wanted to be able to use the Sobel processor on X and Y independently. My initial plan was to create two subclasses of EdgeDetectorProcessor: SobelXProcessor and SobelYProcessor.
Perhaps we should make that operation an Enum?
As in with values X, Y, and XY? That would make sense, I think.
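As a sketch, the enum-based design might look something like this (a hypothetical illustration; `EdgeDetectionDirection` and `SobelProcessor` here are made-up names for discussion, not ImageSharp's actual API):

```csharp
// Hypothetical sketch of the enum idea; names are illustrative only.
public enum EdgeDetectionDirection
{
    X,   // apply only the horizontal Sobel kernel
    Y,   // apply only the vertical Sobel kernel
    XY   // combine both gradients (the current behaviour)
}

// A single Sobel processor could then take the direction as a parameter
// instead of needing separate SobelXProcessor / SobelYProcessor subclasses:
//
//     var processor = new SobelProcessor(EdgeDetectionDirection.X);
```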
I'll have to take a look to see whether we can use separability without breaking anything. Feel free to have a dig around yourself.
I would also consider opening up the pixel blender API, at least the method that lets you process a single pixel. Maybe it's not even necessary to expose the whole PixelBlender:
```csharp
delegate TPixel PixelFunction<TPixel>(TPixel backdrop, TPixel source, float opacity);

PixelFunction<TPixel> <somewhere>.GetPixelBlendingFunction(PixelAlphaComposition composition, PixelColorBlending blending);
```

Where `<somewhere>` would be `TPixel` itself. This would allow something like:

```csharp
var pixelFunc = Rgba32.GetPixelBlendingFunction(SrcAtop, Multiply);
```
@vpenades I'm strongly against per-pixel delegates, because it's a performance-killer.
@antonfirsov I'm aware of that, and I guess you want to avoid exposing them to prevent end users from implementing linear operations with naive loops.
But there are scenarios in which they can be useful, especially for random pixel access. Say I want to render a star field by drawing random pixels? There are lots of procedural algorithms that require non-linear access to pixels, and they cannot benefit from the pixel blenders.
The only alternative I could find is to add a `.Mutate(c => c.DrawPointList(...))`, which could be parallelized, but users would most probably call it one pixel at a time. Still, I'm sure it would be much faster than drawing 1-pixel-sized rectangles, which is what I'm using right now.
One of my pet projects was to port some of these examples to .NET with ImageSharp. All of those examples use random pixel access with blending.
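The star-field use case could be sketched with the delegate proposed above (hedged: `GetPixelBlendingFunction`, `PixelAlphaComposition`, and `PixelColorBlending` are the hypothetical API from this thread, not something ImageSharp actually ships; only the `image[x, y]` indexer is real):

```csharp
// Sketch: random pixel access with blending, assuming the proposed
// per-pixel delegate API existed. Purely illustrative.
var blend = Rgba32.GetPixelBlendingFunction(
    PixelAlphaComposition.SrcAtop,
    PixelColorBlending.Multiply);

var rng = new Random();
using var image = new Image<Rgba32>(512, 512);

for (int i = 0; i < 1000; i++)
{
    int x = rng.Next(image.Width);
    int y = rng.Next(image.Height);
    var star = new Rgba32(255, 255, 255, 200);

    // blend(backdrop, source, opacity) -> blended pixel
    image[x, y] = blend(image[x, y], star, 0.8f);
}
```

This is the kind of non-linear access pattern that cannot be expressed as a single pass over the pixel buffer.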
I opened this issue, but now I'm confused about our goal. Currently it is reasonable to define 3 levels of "API openness":

1. `IImageProcessor` classes are public. We have a clear plan for this in #907.
2. `Buffer2D<T>` public. Nice goal, but no free lunch here: those classes need an extensive review, and as soon as we touch #898, we may break everything.
3. Processor classes (`ImageProcessor<T>`-s) public. I think we should avoid this, even in the long term.

Which level do we want to reach for RC1 / 1.0?
I’d go with 1, and maybe a sprinkle of 2 if we need it. With 1 we can call multiple processors from within a new one, which allows building complex new ones. Anything else can be negotiated as something to add to the source.
I'm closing this since steps 1 and 2 have been completed.
Currently all processors are internal. We agreed that we should open up their API for 1.0.0-RC1, after an extensive review.