image-rs / imageproc

Image processing operations
MIT License
758 stars 149 forks

Gradient enum and Kernel trait V2 #601

Closed ripytide closed 6 months ago

ripytide commented 6 months ago

This is a version 2 PR of #596

This time I've added a Kernel trait to allow heap- or stack-allocated kernel usage via OwnedKernel and BorrowedKernel respectively. I've also rolled the 1D kernel function back to taking &[K].

Summary:

ripytide commented 6 months ago

I'm in no rush to get this merged before the next release. The breaking changes are allowable since we're in the pre-1.0 stage, and justified given that gradients() and filter() are much more flexible than before and GradientKernel is, in my opinion, more ergonomic.

cospectrum commented 6 months ago
impl Kernel<'static, i32> {
    pub fn vertical_sobel() -> Self {
        Self::new(&crate::gradients::VERTICAL_SOBEL, 3, 3)
    }
}
cospectrum commented 6 months ago

Creating traits and additional Kernel structs is too much for a small issue like https://github.com/image-rs/imageproc/issues/520. I don't think the owner will be convinced, but I have suggested a possible solution.

ripytide commented 6 months ago

The point is that consistency is important, and having some important methods free-standing and others associated with types is inconsistent. This PR fixes that. The traits are simply much more flexible, as they allow heap- or stack-allocated kernels, which is necessary for dynamically sized kernels such as the GradientKernel, which is 3x3 or 2x2.

cospectrum commented 6 months ago

which is necessary for dynamically sized kernels such as for the GradientKernel which is 3x3 or 2x2

No, it's not

#[derive(Debug, Copy, Clone)]
pub enum Gradient {
    /// The 3x3 Sobel kernel
    /// See <https://en.wikipedia.org/wiki/Sobel_operator>
    Sobel,
    /// The 3x3 Scharr kernel
    Scharr,
    /// The 3x3 Prewitt kernel
    Prewitt,
    /// The 2x2 Roberts kernel
    /// See <https://en.wikipedia.org/wiki/Roberts_cross>
    Roberts,
}

impl Gradient {
    pub fn vertical_kernel(&self) -> Kernel<'static, i32> {
        match self {
            Gradient::Sobel => Kernel::new(&VERTICAL_SOBEL, 3, 3),
            Gradient::Scharr => Kernel::new(&VERTICAL_SCHARR, 3, 3),
            Gradient::Prewitt => Kernel::new(&VERTICAL_PREWITT, 3, 3),
            Gradient::Roberts => Kernel::new(&VERTICAL_ROBERTS, 2, 2),
        }
    }
}
ripytide commented 6 months ago

Ok, it's not necessary for the GradientKernel enum, whose sizes are known at compile time, but it is necessary for run-time dynamically sized kernels, for example a CLI script that took the kernel size as a command-line argument and then did a convolution with that kernel size.

How would you implement this?

pub fn guass_kernel(radius: u32) -> BorrowedKernel<'static, i32> {
    // impossible without leaking memory
}
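For contrast, a minimal sketch (hypothetical types, not imageproc's actual API) shows why a heap-owning kernel sidesteps the lifetime problem for runtime-sized kernels:

```rust
// Hypothetical sketch: a Vec-backed kernel that owns its data.
// Returning this requires no 'static borrow and no leaked memory.
struct OwnedKernel<K> {
    data: Vec<K>,
    width: u32,
    height: u32,
}

/// Build a (2*radius + 1)^2 box kernel whose size is only known at
/// run time; trivial when the kernel owns its buffer.
fn box_kernel(radius: u32) -> OwnedKernel<i32> {
    let side = 2 * radius + 1;
    OwnedKernel {
        data: vec![1; (side * side) as usize],
        width: side,
        height: side,
    }
}

fn main() {
    let k = box_kernel(2);
    assert_eq!(k.width, 5);
    assert_eq!(k.height, 5);
    assert_eq!(k.data.len(), 25);
}
```

The same function returning a borrowed kernel would need the slice to outlive the call, which a local Vec cannot do without leaking.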
cospectrum commented 6 months ago

I would just return Vec<i32> or Image<Luma<T>>. Maybe you should create an issue and ask @theotherphil about some changes to the Kernel API? This way you won't waste your time writing code that won't be accepted.

ripytide commented 6 months ago

Vec<i32> is missing dimensions, and Image cannot be used with any of the filter functions; both are inferior to OwnedKernel plus the Kernel trait.

A PR is easier to discuss changes over than an issue as it shows the full extent of a change required.

cospectrum commented 6 months ago

Maybe we can add from_image and that's it.

impl<'a, T> Kernel<'a, T>
where T: Primitive
{
    /// Create kernel from image
    pub fn from_image(image: &'a Image<Luma<T>>) -> Self {
        Self::new(image.as_ref(), image.width(), image.height())
    }
}
ripytide commented 6 months ago

But that still doesn't allow the implementation of guass_kernel().

pub fn guass_kernel(radius: u32) -> BorrowedKernel<'static, i32> {
    let image = (0..radius).cartesian_product(0..radius);
    BorrowedKernel::from_image(&image) // borrow checker issue
}
cospectrum commented 6 months ago

So, the question is whether we want struct OwnedKernel or not.

ripytide commented 6 months ago

Why not add it @cospectrum? Owned structs can also be stored in other structs/enums more easily as they don't have leaky lifetimes.

cospectrum commented 6 months ago

I don't have any strong arguments for or against. Your example makes sense, but there is one point: there is not yet a single place in the library itself where we could use this OwnedKernel. That is, it would be an export-only struct.

theotherphil commented 6 months ago

Thanks for your work on exploring options to improve filter ergonomics.

Unfortunately these changes add too much user facing complexity for the benefits they bring.

Could the discoverability issue of Kernel::filter be fixed by just making this associated function freestanding?

ripytide commented 6 months ago

Could the complexity of using a Kernel trait be mitigated by adding examples to methods taking kernel arguments? I disagree that the ergonomics and flexibility are not worth the complexity.

Dynamically sized kernels seems to me to be a fairly reasonable usecase, for example, opencv has the ksize parameter (https://docs.opencv.org/4.x/d4/d86/group__imgproc__filter.html#gaabe8c836e97159a9193fb0b11ac52cf1) which would be made much easier to implement using the OwnedKernel.
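To illustrate the runtime-sized use case, here is a sketch (function name and shape are hypothetical, not from either library) of computing 1D Gaussian weights from a radius supplied at run time, as an OpenCV-style ksize parameter would require:

```rust
// Hypothetical sketch: runtime-sized, normalized 1D Gaussian weights.
// The length (2*radius + 1) is only known at run time, so the result
// must be heap-allocated rather than a borrowed 'static slice.
fn gaussian_weights(radius: u32, sigma: f32) -> Vec<f32> {
    let weights: Vec<f32> = (-(radius as i32)..=radius as i32)
        .map(|x| (-((x * x) as f32) / (2.0 * sigma * sigma)).exp())
        .collect();
    let sum: f32 = weights.iter().sum();
    // Normalize so the kernel preserves overall image brightness.
    weights.into_iter().map(|w| w / sum).collect()
}

fn main() {
    let w = gaussian_weights(3, 1.0);
    assert_eq!(w.len(), 7);
    // Weights sum to 1 within float tolerance.
    assert!((w.iter().sum::<f32>() - 1.0).abs() < 1e-6);
}
```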

cospectrum commented 6 months ago

I don't really understand what the problem is with Kernel::filter. The 2 people in the issue saw the filter3x3 function and wanted filter_nxm(image: &GrayImage, n: usize, m: usize, kernel: &[T]); they didn't want OwnedKernel or a Kernel trait. And the trait will cause much more trouble for the user than just a structure with a method, btw. clamped_filter is what they need.

ripytide commented 6 months ago

Wait a minute, @cospectrum

No, it's not


#[derive(Debug, Copy, Clone)]
pub enum Gradient {
    /// The 3x3 Sobel kernel
    /// See <https://en.wikipedia.org/wiki/Sobel_operator>
    Sobel,
    /// The 3x3 Scharr kernel
    Scharr,
    /// The 3x3 Prewitt kernel
    Prewitt,
    /// The 2x2 Roberts kernel
    /// See <https://en.wikipedia.org/wiki/Roberts_cross>
    Roberts,
}

impl Gradient {
    pub fn vertical_kernel(&self) -> Kernel<'static, i32> {
        match self {
            Gradient::Sobel => Kernel::new(&VERTICAL_SOBEL, 3, 3),
            Gradient::Scharr => Kernel::new(&VERTICAL_SCHARR, 3, 3),
            Gradient::Prewitt => Kernel::new(&VERTICAL_PREWITT, 3, 3),
            Gradient::Roberts => Kernel::new(&VERTICAL_ROBERTS, 2, 2),
        }
    }
}

This doesn't work for any type T, only i32, whereas my implementation works for any `T: From<i8>`.

ripytide commented 6 months ago

You would have to write a constant for every single type which would be super annoying to use.
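A sketch of the generic alternative (constant name and helper are illustrative, assuming the `T: From<i8>` bound mentioned above) shows how one i8 constant can serve every wider type:

```rust
// Hypothetical sketch: a single i8 constant converted to any T: From<i8>,
// avoiding a separate static array per element type.
const VERTICAL_SOBEL_I8: [i8; 9] = [-1, -2, -1, 0, 0, 0, 1, 2, 1];

fn vertical_sobel<T: From<i8>>() -> Vec<T> {
    VERTICAL_SOBEL_I8.iter().map(|&v| T::from(v)).collect()
}

fn main() {
    // The same constant yields i32 and f32 kernels.
    let as_i32: Vec<i32> = vertical_sobel();
    assert_eq!(as_i32[1], -2);
    let as_f32: Vec<f32> = vertical_sobel();
    assert_eq!(as_f32[7], 2.0);
}
```

Since the generic type is fixed at each call site, monomorphization turns these conversions into compile-time-known widening casts.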

ripytide commented 6 months ago

I've now removed the GradientKernel enum and moved all the Kernel constructors to associated methods of OwnedKernel.

cospectrum commented 6 months ago

Wait a minute, @cospectrum

No, it's not

#[derive(Debug, Copy, Clone)]
pub enum Gradient {
    /// The 3x3 Sobel kernel
    /// See <https://en.wikipedia.org/wiki/Sobel_operator>
    Sobel,
    /// The 3x3 Scharr kernel
    Scharr,
    /// The 3x3 Prewitt kernel
    Prewitt,
    /// The 2x2 Roberts kernel
    /// See <https://en.wikipedia.org/wiki/Roberts_cross>
    Roberts,
}

impl Gradient {
    pub fn vertical_kernel(&self) -> Kernel<'static, i32> {
        match self {
            Gradient::Sobel => Kernel::new(&VERTICAL_SOBEL, 3, 3),
            Gradient::Scharr => Kernel::new(&VERTICAL_SCHARR, 3, 3),
            Gradient::Prewitt => Kernel::new(&VERTICAL_PREWITT, 3, 3),
            Gradient::Roberts => Kernel::new(&VERTICAL_ROBERTS, 2, 2),
        }
    }
}

This doesn't work for any type T, only i32, whereas my implementation works for any T: From<i8>

But we don't need any type.

ripytide commented 6 months ago

Not yet, but what if users have different image pixel formats: u8, i8, u16, etc.? It would be very wasteful to always cast to i32. Would it not be better if the kernels could be created in any generic format from i8?

cospectrum commented 6 months ago

A dynamic cast from one type to another will still be a waste of resources. I would expect more from the library. For example, generating all the static arrays using macros.

ripytide commented 6 months ago

Despite the fact that kernel creation is incomparably tiny compared with image convolution? That seems like a heck of a performance/complexity tradeoff. Many of the casts can also be optimized away by the compiler, since the generic type is always known at compile time.

ripytide commented 6 months ago

I'm going to close this and work on a V3 that discards the contentious parts, like the Kernel trait, and focuses on the minimal set of changes that have been largely agreed on.