agentmorris / MegaDetector

MegaDetector is an AI model that helps conservation folks spend less time doing boring things with camera trap images.
MIT License

Colab notebook: annotated images not available in [Visualization_Folder] #38

Closed agentmorris closed 1 year ago

agentmorris commented 1 year ago

Thanks for sharing the Colab notebook. As a newbie to this, it's very helpful to be able to test your script before investing time in deploying it at full scale on all the camera trap photos in our project (nest success and invasive predation in an endangered Caribbean seabird).

I would like to share the annotated images produced by the script with collaborators, but the Google Drive folder used as [Visualization_Folder] does not contain any files. Also, Colab's runtime keeps disconnecting and cannot make it through the annotation/display of 25 sample photos, hence the need to save the images that have already been processed. So my question may be very basic: how do I save (or access) the images that are annotated and displayed in the last step (cell below)?

Here are the steps of interest:

images_dir = '/content/drive/My Drive/ctrap'

# choose a location for the output JSON file
output_file_path = '/content/drive/My Drive/ctrap/detector_2020-09-28.json'

_Here we use the visualize_detector_output.py in the visualization folder of the Camera Traps repo to see the output of the MegaDetector visualized on our images. It will save images annotated with the results (original images will not be modified) to the [Visualization_Folder] you specify here._

visualization_dir = '/content/My Drive/ctrap/visualize_2020-09-28'  # pick a location for annotated images
!python visualize_detector_output.py "$output_file_path" "$visualization_dir" --confidence 0.8 --images_dir "$images_dir"
import os
from PIL import Image
for viz_file_name in os.listdir(visualization_dir):
  print(viz_file_name)
  im = Image.open(os.path.join(visualization_dir, viz_file_name))
  display(im)  # display() is an IPython function that comes with the notebook

I understand that the images are supposed to be saved in the [Visualization_Folder], but this folder is empty (see edited screenshots below; note that the visualize_2020-09-28 folder is open).


Issue cloned from Microsoft/CameraTraps, original issue posted by YvanSG on Sep 28, 2020.

agentmorris commented 1 year ago

Hi @YvanSG , thanks for trying it out! It looks like when you specified visualization_dir, there was no drive in the path, i.e. you wrote

visualization_dir = '/content/My Drive/ctrap/visualize_2020-09-28'

but from your screenshot it looks like the path should be

visualization_dir = '/content/drive/My Drive/ctrap/visualize_2020-09-28'

Can you check that?


(Comment originally posted by yangsiyu007)

agentmorris commented 1 year ago

Hi Siyu, Thanks for replying. Yes, that did it - of course. I feel dumb for missing it...

If the output path is incorrect, shouldn't an error message be sent when running the Python cell, though?


(Comment originally posted by YvanSG)

agentmorris commented 1 year ago

Great, and you'll become more sensitized to path issues over time :)

I guess it's not really an error from the perspective of the code - the images were written to the specified path without error; in the screenshot you can see another folder "My Drive" that was created as a sister directory to "drive" and I think previous executions saved images there.
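One way to catch this class of mistake early is a quick sanity check that the output path actually sits under the mounted Drive before running the script. A minimal sketch (the helper name is mine, not part of the repo; '/content/drive' is Colab's standard Drive mount point):

```python
import os

def is_under_drive(path, mount_point='/content/drive'):
    """Return True if path is inside the Colab Drive mount.

    Anything written outside the mount lands on the ephemeral VM disk
    and disappears when the runtime disconnects.
    """
    path = os.path.normpath(path)
    mount_point = os.path.normpath(mount_point)
    return path.startswith(mount_point + os.sep)
```

For example, `is_under_drive('/content/My Drive/ctrap')` is False, which is exactly the situation in the screenshot above.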


(Comment originally posted by yangsiyu007)

agentmorris commented 1 year ago

Hi,

I'm using the colab example to annotate images. Great job, thanks! Anyway, I have a little problem: I'm trying to save my annotated images, but I just found the same images both in my source and destination folder. I made a screenshot:

(screenshot: visualization_output)

What am I missing? @yangsiyu007 the output.json is fine and contains the inferred annotations.

Best from Ushuaia, Argentina!


(Comment originally posted by fedegonzal)

agentmorris commented 1 year ago

That looks correct... if I'm interpreting this screenshot correctly, it means that you have 1936 images in your "images" folder (images_dir), and for each one of them, now there's a copy in the "output" folder (visualization_dir) with bounding boxes rendered on the detections that are above a confidence value of 0.8.

Take a look at the images in the "output" folder; if you see boxes on animals/people/vehicles, I think everything worked correctly.

Hope that helps!


(Comment originally posted by agentmorris)

agentmorris commented 1 year ago

Hi, no :) I only have 31 pictures with detections over 0.8.

If I run this code, my output.json shows 31 results over 0.8:

import json

with open(WORKING_DIR + '/output.json') as f:
    data = json.load(f)

detections = data["images"]
confidence = 0.8
count = 0

# Count images whose top detection clears the threshold
for detection in detections:
    if detection["max_detection_conf"] >= confidence:
        count += 1

print(count)

(Comment originally posted by fedegonzal)

agentmorris commented 1 year ago

Yes, that's correct; this script renders all images, regardless of whether or not they have above-threshold detections. The threshold determines which bounding boxes will be rendered, not which images will be rendered.

For most uses of scripts like this, visualizing the non-detections is at least as important as visualizing the detections (usually more important!). A typical use case might be taking a folder with 1 million images in it, choosing 1000 of those images at random, and rendering bounding boxes on any detections in those 1000. The user would then quickly browse through those 1000 images to see what's detected, but - more importantly - what's being missed.
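That sampling step can be sketched as follows (a hypothetical helper, not one of the repo's scripts; the extension list is an assumption):

```python
import os
import random

def sample_images(images_dir, n, seed=0):
    """Pick up to n random image files from a folder tree."""
    exts = ('.jpg', '.jpeg', '.png')
    all_images = [os.path.join(root, f)
                  for root, _, files in os.walk(images_dir)
                  for f in files if f.lower().endswith(exts)]
    random.seed(seed)
    return random.sample(all_images, min(n, len(all_images)))
```

The sampled list can then be rendered and browsed to spot both detections and misses.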

If you want to use this script to render only images with detections above your threshold, the easiest way is probably to write out a new .json file that only has the images with above-threshold detections.
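A minimal sketch of that filtering step, assuming the MegaDetector output format shown earlier in the thread (an "images" list whose entries carry "max_detection_conf"); the toy data below stands in for a real output file:

```python
import json

confidence = 0.8

# Toy stand-in for a MegaDetector output file
data = {
    "images": [
        {"file": "a.jpg", "max_detection_conf": 0.95},
        {"file": "b.jpg", "max_detection_conf": 0.40},
        {"file": "c.jpg", "max_detection_conf": 0.81},
    ]
}

# Keep only images with an above-threshold detection
data["images"] = [im for im in data["images"]
                  if im.get("max_detection_conf", 0) >= confidence]

# Write the trimmed file; point visualize_detector_output.py at this instead
with open("output_filtered.json", "w") as f:
    json.dump(data, f, indent=1)

print(len(data["images"]))  # 2 images survive the 0.8 threshold
```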

Sorry for the confusion!


(Comment originally posted by agentmorris)

agentmorris commented 1 year ago

Ok, I understand. I made a simple modification to visualize_detector_output.py to optionally skip non-annotated images, with False as the default. I'll send a pull request to add it. Thank you!


(Comment originally posted by fedegonzal)

agentmorris commented 1 year ago

I made some minor modifications to the parameter name and description, and merged. Thanks for your contribution!


(Comment originally posted by agentmorris)

agentmorris commented 1 year ago

Great, thanks for your awesome work!



(Comment originally posted by fedegonzal)