I totally agree with your hypothesis that the images are too different for background subtraction to work; the slight variation in lighting will make it difficult. I tested the image provided here and found the best segmentation with the "a" channel from the LAB colorspace. I think the result is pretty good, but the yellow tags on the plant on the left are not getting filtered out because their shade of yellow is too close in intensity to the yellow-green of the plants. I investigated this with HSV, LAB, and each of the channels from CMYK; "y" from CMYK was the only channel that appeared to have a grayscale intensity difference between the tags and the plants. I hope this helps.
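As an aside, if you want to survey the candidate grayscale channels before committing to one, the colorspace visualization helper is handy. A minimal sketch (the filename is just a placeholder):

# Imports
from plantcv import plantcv as pcv
# Read the image (path is a placeholder)
img, path, filename = pcv.readimage(filename="plant_tray.png")
# Plot the available grayscale channels side by side to compare
# plant/background contrast in each one
colorspace_img = pcv.visualize.colorspaces(rgb_img=img)

Here is the pipeline I used: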
# Imports
from plantcv import plantcv as pcv
# Read image
img, path, filename = pcv.readimage(filename=args.image1)
# Convert to LAB and keep the "a" (green-magenta) channel
a_gray = pcv.rgb2gray_lab(rgb_img=img, channel="a")
# Threshold out most of the background
bin_mask = pcv.threshold.binary(gray_img=a_gray, threshold=120, object_type="dark")
# Clean up small "salt" noise in the background; this also speeds up the filtering step later
cleaned_mask = pcv.fill(bin_img=bin_mask, size=200)
# Rectangular region of interest to filter plants from remaining objects
rect_roi = pcv.roi.rectangle(img=img, x=500, y=850, w=1100, h=150)
# Filter on region of the image to remove color card
filtered_mask = pcv.roi.filter(mask=cleaned_mask, roi=rect_roi, roi_type='partial')
# Visualize the mask
masked_image = pcv.apply_mask(img=img, mask=filtered_mask, mask_color='white')
# Attempt to remove the yellow tag at the cost of some plant pixels:
# extract the "y" (yellow) channel from CMYK and threshold it
y_gray = pcv.rgb2gray_cmyk(rgb_img=img, channel="y")
tag_mask = pcv.threshold.binary(gray_img=y_gray, threshold=10, object_type="light")
# Keep only the pixels that pass both the "a" channel mask and the "y" channel mask
double_segmented_mask = pcv.logical_and(bin_img1=tag_mask, bin_img2=filtered_mask)
masked_image2 = pcv.apply_mask(img=img, mask=double_segmented_mask, mask_color='white')
"masked_img"
"masked_img2"
Thank you for that, I was seeing similar results with the "a" channel. In both of your examples, the leaves are separated from the plant, so I don't think the shape analysis functions will work, unless I do some morphological operations, right? Or is there a way to group close segments together?
It's pretty common for thin structures to get excluded, especially when there is an intense light source that overexposes the image and makes the structures look even thinner. You can sometimes reconnect separated leaves with morphological operations like dilation or pcv.closing, but the pcv.analyze.size function is fully capable of measuring disconnected objects. So this mask shouldn't affect measurements like plant height and width, but it would definitely impact the plant perimeter, for example.
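If you do want to try reconnecting the leaf fragments first, something like this could work (a rough sketch; the kernel size and iteration count are guesses you would need to tune):

# Close small gaps between leaf fragments using the default kernel
closed_mask = pcv.closing(gray_img=filtered_mask)
# Or grow the mask slightly so nearby fragments merge
dilated_mask = pcv.dilate(gray_img=filtered_mask, ksize=3, i=2)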
Since you have two plants, here is how I would analyze them even with the disconnected leaves:
# One circular ROI per plant (1 row x 2 columns, 900 px apart)
grid_rois = pcv.roi.multi(img=img, coord=(600, 800), radius=200, spacing=(900, 0), nrows=1, ncols=2)
# Give each plant its own label, based on which ROI its objects fall in
labeled_mask, num = pcv.create_labels(mask=filtered_mask, rois=grid_rois, roi_type="partial")
# Size/shape analysis per labeled plant (works even when a plant is in several pieces)
shape_image = pcv.analyze.size(img=img, labeled_mask=labeled_mask, n_labels=num)
# Color analysis per labeled plant in HSV
analysis_plot = pcv.analyze.color(rgb_img=img, labeled_mask=labeled_mask, n_labels=num, colorspaces='hsv')
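Once the analyze steps have run, the observations are collected in pcv.outputs and can be written out; a minimal sketch (the filename is just an example):

# Save all collected observations (size, color, etc.) to a JSON file
pcv.outputs.save_results(filename="plant_results.json", outformat="json")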
Describe the bug
I have tried the background subtraction function, and it doesn't seem to be removing enough of the background. Are there pre-processing steps I should be doing?
To Reproduce
Steps to reproduce the behavior and code:

from plantcv import plantcv as pcv
import cv2

plantImg, _, _ = pcv.readimage(img)
bgImg, _, _ = pcv.readimage(background)
fgmask = pcv.background_subtraction(foreground_image=plantImg, background_image=bgImg)
cv2.imshow("mask", fgmask)
cv2.waitKey()
masked_image = pcv.apply_mask(img=plantImg, mask=fgmask, mask_color='white')
Expected behavior
Just have the plants left after background removal.
Additional Information:
These two images were taken 15 minutes apart, which is about as close as we're going to get in a real-world scenario. I think the images are too different (pixel-wise) for background subtraction to work; is this correct? I tried color correction, which helped a little, but the results were still unusable. I have also tried doing the segmentation via the colorspaces, but I am not able to get accurate results that way either. Is the background not black enough for colorspace segmentation? Would some other color work better?
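One way to sanity-check whether a channel can separate the background at all is to look at its intensity histogram: if the plant and background pixel values overlap heavily, no threshold will work regardless of the background color. A minimal sketch (the file path and channel choice are just placeholders):

# Imports
from plantcv import plantcv as pcv
# Read the plant image (path is a placeholder)
img, path, filename = pcv.readimage(filename="plant_tray.png")
# Extract a candidate grayscale channel, e.g. "a" from LAB
a_gray = pcv.rgb2gray_lab(rgb_img=img, channel="a")
# Plot the intensity histogram; a clear valley between two peaks suggests a
# usable threshold, while heavy overlap means the channel won't separate well
hist_figure = pcv.visualize.histogram(img=a_gray, bins=256)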