PatD123 / Crop-Lane-Detect

Lane Detection using DBSCAN + IPM + A Bit of Temporal Smoothing

Question about coloured layers #3

Open apetheriotis opened 5 days ago

apetheriotis commented 5 days ago

Hi & thank you for open sourcing all the code - the result you ended up with is really nice, especially with the DBSCAN approach! I was going through the code and examples, trying to figure out what processing you performed to get such smooth layers in these images: https://github.com/PatD123/Crop-Lane-Detect/tree/main/agri_images/0123

I am doing something similar on my side, but I can't get the same smoothness you achieved! image

Any help/pointers would be much appreciated!

PatD123 commented 4 days ago

Hey @apetheriotis

I appreciate it, thanks!

I know it shouldn't depend on the image used, but can you tell me which image you used to get this, and after which line you're logging the image? Most of the image processing techniques I use are pulled from here and here. You may have looked at these already, but OpenCV provides very good image processing docs that help smooth images and remove the kind of noise that appears in the images in that folder. I apply these on lines 35 - 68 of agri_warp.py (they can be mirrored to live_warp.py as well). My next statement might let you down a bit, though: my warp files are fantastic on some images, decent on others, and not great on the rest (at the end of the day, you can keep experimenting using the links). For example, on my side, 0046.jpg comes out decent, but some others aren't so good.

apetheriotis commented 4 days ago

Thanks for the pointers! I've amended your code slightly and I'm adding the overlays using the following:

import logging

import cv2
import numpy as np

def preprocess_and_segment(image):
    logging.info("Preprocessing and segmenting the image.")

    # Blur in BGR first, then convert to HSV
    blurred = cv2.GaussianBlur(image, (9, 9), 7)
    hsv_image = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

    # Define color ranges for sky, plants, and soil
    lower_sky = np.array([90, 50, 50])
    upper_sky = np.array([130, 255, 255])
    lower_plants = np.array([35, 50, 50])
    upper_plants = np.array([85, 255, 255])
    lower_soil = np.array([10, 50, 50])
    upper_soil = np.array([20, 255, 255])

    # Create masks for each region
    mask_sky = cv2.inRange(hsv_image, lower_sky, upper_sky)
    mask_plants = cv2.inRange(hsv_image, lower_plants, upper_plants)
    mask_soil = cv2.inRange(hsv_image, lower_soil, upper_soil)

    # Create a segmented image with sky (blue), plants (green), and soil (brown)
    segmented_image = np.zeros_like(image)
    segmented_image[mask_sky > 0] = [255, 0, 0]  # Sky (blue)
    segmented_image[mask_plants > 0] = [0, 255, 0]  # Plants (green)
    segmented_image[mask_soil > 0] = [128, 0, 128]  # Soil (drawn as purple)

    return segmented_image

I've hardcoded the colours, which works for my specific field/plants, but I think that depending on the lighting conditions my approach won't be good enough overall :)

I now have much better results and I am iterating on that.

image

Out of curiosity, did you use the above detection method on a real robot?

PatD123 commented 3 days ago

@apetheriotis these look great.

I honestly think hardcoding could work better. For example, the examples I provided are first semantically segmented into two different colors, and hardcoded masking like yours could work a lot better for Hough Lines.

I have not tested it on a robot. I wanted to use this for a research project, but the grad students I worked with wanted to jump on the deep learning hype train even though it's going to be very slow on a Jetson. If you're planning on testing it on a robot, do tell me, because I really want to see if the reference path distance from IPM works (if you're using that, of course).