old-school-kid closed this issue 1 year ago
Hey @old-school-kid any chance I could get a little more context as to how people tend to use this in the CV domain? Particularly w/ deep models.
Thanks
Hi @LukeWood SLIC isn't used in any CV model for image augmentation, afaik. But I have seen a lot of work using it as a pre-processing layer in my domain (material science) and even in medical imaging. In material science it helps with grain boundary detection and with finding hairline fractures. Do you want me to put links to some papers here that use the preprocessing technique? TIA.
That would actually be great. I'm really interested to see how it's used in both material science and medical imaging!
Thanks @old-school-kid
Also, would you be interested in contributing this @old-school-kid ?
In Material Science
In Medical Imaging
In Object Detection (Old methods, pre-2017)
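To make the grain-boundary use concrete, here is a minimal toy sketch of what SLIC-style preprocessing boils down to: k-means over (row, col, intensity) features with a spatial compactness weight. This is my own NumPy illustration (function name and hyperparameters are arbitrary), not code from the papers above or from any library:

```python
import numpy as np

def slic_like(image, n_segments=16, compactness=0.1, n_iters=5, seed=0):
    """Toy SLIC-style clustering: k-means over (row, col, intensity)
    features, with `compactness` weighting the spatial terms.
    `image` is a 2-D grayscale array scaled to [0, 1]."""
    h, w = image.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # One feature vector per pixel: weighted spatial coords + intensity.
    feats = np.stack([rows.ravel() / h * compactness,
                      cols.ravel() / w * compactness,
                      image.ravel()], axis=1)
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_segments, replace=False)]
    for _ in range(n_iters):
        # Assign every pixel to its nearest cluster center.
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster emptied.
        for k in range(n_segments):
            mask = labels == k
            if mask.any():
                centers[k] = feats[mask].mean(axis=0)
    return labels.reshape(h, w)
```

A real SLIC restricts each center's search to a local 2S×2S window instead of comparing every pixel against every center; the skimage implementation linked in this thread does that properly.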
> Also, would you be interested in contributing this @old-school-kid ?
Sure, would love to.
What is the final goal in your material science domain? Semantic segmentation? Or something different?
E.g. I am taking a look at https://github.com/Scientific-Computing-Lab-NRCN/MLography but I don't know if we still need some intermediate preprocessing like SLIC before the network learning stack.
@bhack Mostly Semantic segmentation, yes.
> E.g. I am taking a look at https://github.com/Scientific-Computing-Lab-NRCN/MLography but I don't know if we still need some intermediate preprocessing like SLIC before the network learning stack.
I went through the repo. They have directly fed the image to a U-Net, which is fine, but generally we go for noise reduction and clustering (to separate matrix and grains), and this has proven to achieve better results. This repo sheds more light on this.
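A sketch of that kind of pre-network pipeline (toy NumPy code; the box-filter kernel size and the two-cluster intensity assumption are my own illustrative choices, not taken from the repo):

```python
import numpy as np

def denoise_box(image, k=3):
    """Simple box-filter noise reduction on a 2-D array."""
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dr in range(k):
        for dc in range(k):
            out += padded[dr:dr + image.shape[0], dc:dc + image.shape[1]]
    return out / (k * k)

def separate_matrix_grains(image, n_iters=10):
    """Two-cluster 1-D k-means on pixel intensity. Returns a boolean
    mask (True = brighter phase, e.g. grains; False = matrix)."""
    c0, c1 = image.min(), image.max()
    for _ in range(n_iters):
        mask = np.abs(image - c1) < np.abs(image - c0)
        if mask.any() and (~mask).any():
            c0, c1 = image[~mask].mean(), image[mask].mean()
    return mask

# Typical order: denoise first, then cluster, then feed the mask (or
# superpixels) alongside the raw image into the segmentation network.
```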
Your repo is a little bit old and I am not an expert in the material science domain. But I remember that a few years ago, at ECCV 2018, there was a paper proposing a learning/differentiable approach (currently 118 citations):
https://varunjampani.github.io/ssn/
But even though it is differentiable, I don't know how popular learning this intermediate representation still is today.
@old-school-kid I'm fairly sure this is a good fit and is something that we'd be interested in hosting as long as the contribution is well written & maintainable.
So if you're interested please prepare a PR.
@bhack
> Your repo is a little bit old and I am not an expert in the material science domain. But I remember that a few years ago, at ECCV 2018, there was a paper proposing a learning/differentiable approach (currently 118 citations):
> https://varunjampani.github.io/ssn/
That was a nice article. Thank you for sharing! While that can be used as an intermediate representation in end-to-end learning, SLIC is just used as a pre-processing layer and is in no way fused with the learning step. But the article you shared looks promising too.
> But even though it is differentiable, I don't know how popular learning this intermediate representation still is today.
The papers I have shared above under Material Science are from 2019, 2020, and 2021. Moreover, apart from research, it is used in industry, as it can easily segment the matrix in microstructures.
> Moreover, apart from research, it is used in industry, as it can easily segment the matrix in microstructures.
I think that this part could be a little bit out of scope if it is not strictly functional to the learning step for this specific library.
But I still think that it is valid for a generic CV library (e.g. PIL, scikit-image, OpenCV, etc.).
This is in scope. The real question is whether or not the impact is high enough to justify maintaining the layer.
Medical image segmentation is an important domain with a lot of promise. SLIC is clearly showing promise there, and in other image segmentation areas.
This is especially in scope if the SLIC layer is differentiable. Then we can also use it as a preprocessing layer, or as it's used in https://varunjampani.github.io/ssn/.
The cost of maintaining a specialized layer like this is pretty low. So I am comfortable accepting it if @old-school-kid is willing to contribute a well refined implementation. If you do decide to work on it, feel free to add me as the reviewer on the PR 👍 .
In the last part of @old-school-kid's comment it was quite clear that he mentioned it is useful also as-is, without considering whether or not it is useful in the learning pipeline. For that specific part of the comment I think (but I could be wrong, as I don't fully understand the KerasCV policy) that we are not going to collect CV operations as-is if they are not functional to the learning step or to populating metrics/visualizations.
Also, more generally, what I claimed is that, just from a popularity/resources-ratio point of view, this intermediate representation, even when differentiable, isn't so popular anymore.
Instead, if we are evaluating the non-differentiable version, I think it could be useful to review it as a baseline in the context of modern proposed solutions in unsupervised/self-supervised/semi-supervised image segmentation papers (e.g. https://arxiv.org/abs/2007.09990 and other works).
Let's close this out as stale for now, until there's a strong use case. If we end up trying to tackle a segmentation competition and can't compete without this, we should reprioritize.
Image augmentation layer using SLIC.
Paper: https://ieeexplore.ieee.org/document/6205760 (cited by 7880)
Implementation in skimage: https://github.com/scikit-image/scikit-image/blob/v0.19.0/skimage/segmentation/slic_superpixels.py#L110-L385
Sharpen images by using an unsharp mask, or something better that I am unaware of.