Unlike the TensorFlow models, the PyTorch models store the label map, so you can just remap after loading the model by adding 1 to each class id:

detector = PTDetector(model_file, force_cpu)

# model.names maps the model's zero-based class IDs to class names;
# shift each ID by 1 so it lines up with MegaDetector's one-based label map.
for id in detector.model.names:
    DEFAULT_DETECTOR_LABEL_MAP[id+1] = detector.model.names[id]
However, I kind of feel that once you are customizing models you have stepped beyond the intent of the MegaDetector code base.
@persts is right that this is a bit beyond what the MegaDetector code base supports right now, but... fine-tuning for new classes is something we'd like to make easier, and we've never done it, so I will make you (or anyone else who is interested) a deal: if someone publishes a nice step-by-step tutorial with publicly available data (data from LILA is fine) showing someone how to fine-tune MDv5 for new classes, so that we could replicate and test it, I'm happy to update the places where classes are hard-coded to be more robust to variability in classes.
I'll close this issue for now, but if/when that tutorial exists, point us to it! Thanks.
Deal :)
OK, I've written up a tutorial here: https://www.kaggle.com/code/evmans/train-megadetector-tutorial. Have a look and let me know if it makes sense. It classifies two species and ignores other species. It would be great to be able to use the same MegaDetector code with these classes; then I will merge it into EcoAssist.
Very cool! I was able to run the whole tutorial, everything worked as expected. If you don't mind updating the text a little, I made a few edits in this copy:
https://www.kaggle.com/code/agentmorris/fine-tuning-megadetector/notebook
If you can retrofit those edits back into your copy, I'll link to your copy from the MegaDetector page.
And I've re-activated this issue, since a deal is a deal. :)
For things like separate_detections_into_folders.py, the class list should be loaded from the .json file, so no new inputs should be required. That will still be a bit of new code, but The Right Thing is to load directly from the .json file. FYI Timelapse also reads this format, and doesn't depend on any hard-coded class names.
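For illustration, a minimal sketch of what that would look like (assuming the standard batch output format; 'results.json' is just a placeholder file name):

import json

# Read a MegaDetector batch results file and pull the category map from it,
# instead of relying on the hard-coded default class list.
with open('results.json', 'r') as f:
    results = json.load(f)

# 'detection_categories' maps category IDs (as strings) to class names,
# e.g. {'1': 'animal', '2': 'person', '3': 'vehicle'} for stock MDv5.
category_map = results['detection_categories']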
For run_detector.py and run_detector_batch.py, I'll modify them to take a .json file with class names as input. I can use the detector from the tutorial to test with, but if you have something more complete than what's trained in the tutorial, and you can share the model file, the class list, and a sample of relevant images, that would be great; you can post here or email them to cameratraps@lila.science.
No rush; it will be a few weeks before I can get to this.
Thanks for bringing this up and for the great tutorial!
I can't access that notebook, is it private?
I will send you some test data. Thanks for doing this!
My bad, yes, the notebook was private. Fixed, try again?
Thanks for the edits. Mine should be updated.
Once MegaDetector is updated, I'll update the tutorial to use it for detection and splitting.
I did... a version of this. It's not perfect or elegant, but given that this is at the fringe of what's supported right now, I'm going to close this unless @ehallein indicates that this is still needed and my solution is falling short.
Specifically, there is a new --class_mapping_filename option to run_detector_batch.py. This argument should point to a .json file, with a dictionary mapping int-strings to strings, like this:
{ "0": "motley-crue", "1": "def-leppard", "2": "dokken", "3": "winger", "4": "poison", "5": "bon-jovi" }
This will do three things:
I have tested that the resulting output files are compatible with postprocess_batch_results, but I have not done anything with separate_detections_into_folders (although since this thread started, separate_detections_into_folders did get support for classification results, which is slightly different, but possibly equally useful for what you want to do), nor have I done a broader sweep over the repo for other dependencies on the default class list.
Hopefully this will get a little more baked over time; I'm using this feature regularly now, but only because I'm so used to these scripts that I use them for totally-non-camera-trap-related work, i.e. with trained YOLOv5 models that have nothing to do with MDv5.
Closing for now, let me know if this doesn't match the scenario for which you originally raised this issue.
Thanks!
I have been using transfer learning with MegaDetector to train detection of specific species of interest, e.g. quokka, raven, magpie, person, rather than the generic animal/person/vehicle classes.
Currently the class names are hard-coded in https://github.com/microsoft/CameraTraps/blob/main/detection/run_detector.py and https://github.com/microsoft/CameraTraps/blob/main/api/batch_processing/postprocessing/separate_detections_into_folders.py (and possibly others?), i.e.:

DEFAULT_DETECTOR_LABEL_MAP = {'1': 'animal', '2': 'person', '3': 'vehicle'}

and

friendly_folder_names = {'animal': 'animals', 'person': 'people', 'vehicle': 'vehicles'}
I have been changing the code manually as a workaround, but it would be good if there were a better way to do this, maybe by loading a class list file at runtime? I am happy to implement something if this is of interest. I have been helping add some changes to PetervanLunteren/EcoAssist, and being able to change the classes via some sort of config file within CameraTraps seems like the way to go.
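As a rough illustration of what I mean (nothing here exists in the repo yet; the helper name and config format are just hypothetical):

import json

# Hypothetical helper: read a user-supplied class list at runtime and build
# the two structures that are currently hard-coded, i.e. the detector label
# map and the friendly folder names used when separating detections.
def load_class_config(config_file):
    with open(config_file, 'r') as f:
        label_map = json.load(f)  # e.g. {'1': 'quokka', '2': 'raven', '3': 'magpie', '4': 'person'}
    # Naive pluralization just for illustration; a real config could carry
    # explicit folder names instead.
    friendly_folder_names = {name: name + 's' for name in label_map.values()}
    return label_map, friendly_folder_names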