You need to overlay the tracking results on the inference video using the Glitter2 dialog in the Annolid toolbar. Please check the video here for details: https://www.youtube.com/watch?v=QW8dhAVNsk0&list=PLYp4D9Y-8_dRXPOtfGu48W5ENtfKn-Owc&index=12
The first issue was that I forgot to write .mp4; the issue right now is that it's taking too long XD
You only need the tracking CSV file and the original inference video to create overlay masks for a video in the Annolid Glitter2 dialog. Detectron2 does not assign consistent colors for the same instance across frames.
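If it helps to see roughly what the dialog does, here is a minimal sketch of drawing tracking results from the CSV onto the original video with OpenCV, keyed on instance name so each animal keeps one color in every frame. The dialog itself draws the predicted masks; this sketch draws boxes only to illustrate the idea, and the column names (frame_number, instance_name, x1, y1, x2, y2) and file names are assumptions you would adjust to your own CSV.

```python
# Rough sketch: overlay tracking results (boxes here) on the inference video,
# assigning one fixed color per instance name so it stays consistent across frames.
# Column names and file paths are assumptions -- edit them to match your CSV.
import cv2
import pandas as pd

VIDEO_PATH = "Novel Object Test 1.mp4"   # the same video used for inference
CSV_PATH = "tracking_results.csv"        # hypothetical CSV name
OUT_PATH = "overlay.mp4"

df = pd.read_csv(CSV_PATH)

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter(OUT_PATH, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

palette = [(0, 255, 0), (0, 0, 255), (255, 0, 0), (0, 255, 255), (255, 0, 255)]
colors = {}  # instance name -> BGR color, assigned once and reused every frame

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    for _, row in df[df["frame_number"] == frame_idx].iterrows():
        name = str(row["instance_name"])
        color = colors.setdefault(name, palette[len(colors) % len(palette)])
        cv2.rectangle(frame, (int(row["x1"]), int(row["y1"])),
                      (int(row["x2"]), int(row["y2"])), color, 2)
        cv2.putText(frame, name, (int(row["x1"]), int(row["y1"]) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    writer.write(frame)
    frame_idx += 1

cap.release()
writer.release()
```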
Where do I get the zone_info.json file?
Please check the video here: https://www.youtube.com/watch?v=zX8cUImRI_s
That produces an exception and closes the program.
Could you copy and paste the error message here?
From what I understood from the video, I choose a video, I choose the CSV file (downloaded from the Colab notebook), and then choose a random .json file from the labeled image dataset, and this happens.
PS: if you have Discord and you're okay with it, please add me, and believe me, this whole thing would be over in 20 minutes 😂 just name the time and I'll be there (HatemJr#8984). I'm sorry if I'm asking a lot of questions every day, but that's only because this is the most relevant repo to my work and literally my final chance to get this right, specifically for this test (novel object).
You cannot use a random JSON file as your zone info file. It is optional, and you can leave it empty. If you want to draw masks for your non-moving objects, you need to find a frame that has no animals inside and label those objects with polygons, not key points.
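For reference, the zone file is just that labeled frame saved as a labelme-style JSON with polygon shapes. Below is a minimal sketch of what such a file might contain; the zone name, points, and image fields are placeholders, and in practice you would create it by labeling the animal-free frame in the GUI rather than writing it by hand.

```python
# Sketch of a zone_info.json with one polygon zone, in labelme-style format.
# The label, points, and image fields are placeholders; normally you would
# create this by labeling an animal-free frame with polygons in the GUI.
import json

zone_info = {
    "version": "5.0.1",
    "flags": {},
    "shapes": [
        {
            "label": "novel_object_zone",          # example zone name
            "points": [[120.0, 80.0], [300.0, 80.0],
                       [300.0, 260.0], [120.0, 260.0]],
            "group_id": None,
            "shape_type": "polygon",               # polygons, not key points
            "flags": {},
        }
    ],
    "imagePath": "empty_frame.png",
    "imageData": None,
    "imageHeight": 480,
    "imageWidth": 640,
}

with open("zone_info.json", "w") as f:
    json.dump(zone_info, f, indent=2)
```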
Sorry, I don't have Discord. You can just post your issues here and I will answer them. I am glad to hear that Annolid is useful for your project.
On a totally separate issue, my SSD seems to have lost like 10 GB in one day. Do installation setups on Google Colab do that?
I don't think Annolid and Colab take that much of your disk space. You can investigate it with some disk utility tools.
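For example, here is a quick sketch that lists which folders are taking the most space; the starting path is a placeholder for wherever you suspect the space went.

```python
# Quick check of which top-level folders under a path use the most space.
import os

root = os.path.expanduser("~")   # placeholder: start wherever you suspect the loss

def dir_size(path):
    total = 0
    for dirpath, _, filenames in os.walk(path, onerror=lambda e: None):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass
    return total

sizes = sorted(((dir_size(e.path), e.path) for e in os.scandir(root)
                if e.is_dir(follow_symlinks=False)), reverse=True)
for size, path in sizes[:10]:
    print(f"{size / 1e9:6.1f} GB  {path}")
```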
I have a request: can you train this for me? https://drive.google.com/drive/folders/11fgm2-uWirr1a-ohDnMUEvpn38VxZpvc?usp=sharing
1. As you can see, I finished the frame processing and downloaded the "result" video from the eval_output folder, and I cannot even play it.
2. I tried to use the Glitter2 format from the Annolid GUI and then got this error and the program crashed (I did not put a zone_info.json file in the selection and used a normal video and the CSV file you see in the first screenshot):
This error is caused by the wrong video. You need to provide the same video that was used to produce the CSV on Colab.
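One quick way to confirm the video and the CSV belong together is to compare frame counts. The frame_number column name below is an assumption; use whatever frame-index column your CSV actually has.

```python
# Sanity check: the CSV's largest frame index should fit inside the video.
import cv2
import pandas as pd

cap = cv2.VideoCapture("Novel Object Test 1.mp4")
video_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.release()

df = pd.read_csv("tracking_results.csv")        # hypothetical CSV name
csv_last_frame = int(df["frame_number"].max())  # assumed column name

print(f"video frames: {video_frames}, last frame in CSV: {csv_last_frame}")
if csv_last_frame >= video_frames:
    print("The CSV references frames this video does not have -- likely the wrong video.")
```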
You mean the Novel Object Test 1.mp4? Because I just did that and it still didn't work. And isn't the point of all of this to use the .pth file so it can be used on any video with the same environment?
The Glitter2 dialog overlays the predicted masks saved in the tracking results onto the video. It doesn't need the trained model.
Well, it still produced the same error in the black screenshot here:
Do I need to reinstall Annolid again?
Please check here https://stackoverflow.com/questions/61285913/unspecified-error-and-the-function-is-not-implemented-in-opencv.
Thanks 🥰 it worked. Is there a way to make it work on any video? Because I seriously cannot do this whole process for every single video 😂
You can upload a different video to Colab and then run inference on it with your trained model. If you are satisfied with the results, then you don't need to train a new model.
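Roughly, those inference cells boil down to something like the sketch below, assuming a Detectron2 Mask R-CNN config. The config file, class count, score threshold, and file names are placeholders you would take from your own notebook.

```python
# Sketch of running a trained Detectron2 model on a new video, frame by frame.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "model_final.pth"        # your trained weights
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2          # placeholder: your number of classes
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
predictor = DefaultPredictor(cfg)

cap = cv2.VideoCapture("another_video_same_setup.mp4")   # hypothetical name
results = []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    outputs = predictor(frame)               # frame is BGR, as the predictor expects
    instances = outputs["instances"].to("cpu")
    results.append((frame_idx, instances))
    frame_idx += 1
cap.release()
```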
Well, of course, but that took 41 minutes to process 9000+ frames. Is there a way I can make the Track Animal button in the GUI work with the model_final.pth file, since it's a trained model that should work on any video?
The Track Animal button does not support Mask R-CNN models. Colab has a better GPU than the regular consumer GPU on an end user's workstation, so it is faster on Colab. For near real-time inference, you can try YOLACT models with lower accuracy.
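For a rough sense of the speed gap: 9000+ frames in 41 minutes works out to about 9000 / (41 × 60) ≈ 3.7 frames per second even on Colab's GPU, while near real-time inference on a typical 30 fps video needs roughly ten times that throughput, which is why the faster but less accurate YOLACT models are the suggestion for near real-time use.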
So you mean that model_0000999.pth is lower in accuracy but faster? (My GPU is a GTX 1650 on a laptop.)
Yes, if model_0000999.pth is a trained YOLACT model. Google Colab's free GPUs are way better than a GTX 1650.
I ran every cell up to the "Only save the top 1 prediction for each frame for each class" section on a custom dataset, and everything worked without producing a single error. However, the result video did not appear in the eval_output directory or anywhere in my Drive.
Please help me 😞