Reebz closed this issue 5 years ago
Hi Mitch, thanks for your interest in our work. I'm in the process of uploading the original resolution dataset which is over 100GB. That's an interesting result! I'm curious to see if your transform techniques improve performance on images outside the dataset. I'll update this comment with a link to the uploaded raw dataset.
In recent work, I have deployed DeepWeeds-trained models in the field on a robotic platform for weed detection and spot-spraying. The performance of these models varies wildly and is very site-specific. For example, deploying a Chinee apple specific DeepWeeds classifier on Chinee apple that is underrepresented in the DeepWeeds dataset (due to environmental or health change) causes poorer performance, whereas deploying on Chinee apple that is well represented in the dataset works well. So perhaps, despite our great lengths to capture the variability of our target environment, the ~17,000 images fall short and more images are required for greater task generalisation.
Since this work, our robotic prototype has been fully realised. We are now able to collect images rapidly using the prototype and are collecting many, many more images. This has resulted in very strong in-field performance: https://www.youtube.com/watch?v=CjnxmKkw5nk.
I watched your YouTube video, really cool. I appreciate your transparency regarding real-world model performance. I hope I can bring some new ideas to the table with the experiments I'm able to run. I look forward to getting my hands on the 100GB dataset; let me know when it's ready. Cheers, Mitch
Thanks mate. Here's the raw image dataset: https://cloudstor.aarnet.edu.au/plus/s/R1j7pqouEe8wNiQ. Best of luck for your work.
Hi Alex and Team, thanks for your great work.
Would it be possible to obtain the original dataset of images?
I've replicated your process with ResNet-50 and PyTorch and found that a model can be trained and tested with high accuracy. However, I'm struggling with inference on images outside the dataset - results are generally much poorer (in particular, the confusion matrix shows considerable confusion between Lantana, Snake Weed, and Rubber Vine). I would like to experiment with different transform techniques, as I believe preserving the weed's aspect ratio, managing colour/contrast, etc. may help.
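One transform along these lines - not from the thread, just a sketch of the aspect-ratio-preserving idea - is letterboxing: scale the longer side to the target size and pad the remainder, rather than a distorting square resize. The `letterbox` helper below is hypothetical; a dependency-free nearest-neighbour resize stands in for what torchvision's `Resize` + `Pad` would normally do with proper interpolation.

```python
import numpy as np

def letterbox(img, size=224, fill=0):
    """Resize an HxWxC image so its longer side equals `size`,
    preserving aspect ratio, then pad the shorter side to square.
    Nearest-neighbour indexing keeps this sketch dependency-free;
    in a real pipeline torchvision would handle the interpolation."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = max(1, round(h * scale)), max(1, round(w * scale))
    # Nearest-neighbour index maps for rows and columns
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    # Centre the resized image on a square canvas of `fill` pixels
    out = np.full((size, size) + img.shape[2:], fill, dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out
```

The point of centring the padding is that the weed's shape reaches the network undistorted, which may matter for visually similar classes like Lantana and Snake Weed.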
Cheers, Mitch
EDIT: Attached some samples, confusion matrix, etc.: DeepWeeds_Ten_Samples_OutofdatasetInference_1Oct2019.pdf