One folder represents one animal (the folder names '0', '1', ... are the animal IDs). When generating datasets, LabGym creates a new folder whenever it detects a new animal (sometimes a false detection). So if you got many folders but only had one animal, the background subtraction did not do a good job. You can simply discard the data that is blank or a false detection and select the well-segmented examples for building the training dataset.
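If you have many clips to sort through, something like the sketch below could help flag the blank ones automatically. This is a hypothetical helper, not part of LabGym: the folder path, the `.avi` extension, and the brightness threshold are all assumptions you would adapt to your own output.

```python
# Hypothetical helper: flag animation clips that are mostly black so they
# can be excluded from the training dataset. Uses OpenCV; the brightness
# threshold and frame-sampling rate are arbitrary choices.
import os
import cv2
import numpy as np

def is_mostly_black(video_path, brightness_threshold=10, sample_every=5):
    """Return True if the sampled frames of the clip are nearly all black."""
    cap = cv2.VideoCapture(video_path)
    means = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            means.append(frame.mean())
        idx += 1
    cap.release()
    return bool(means) and np.mean(means) < brightness_threshold

# Example: list clips in folder '0' that look like false detections.
# 'generated_dataset/0' is a placeholder path for LabGym's output folder.
folder = "generated_dataset/0"
for name in os.listdir(folder):
    if name.endswith(".avi") and is_mostly_black(os.path.join(folder, name)):
        print("likely blank:", name)
```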
I would suggest running the Analyze Behavior unit on your video as a quick tracking test (you can select any built-in Categorizer and a few behavior parameters for quantification) to see whether the animal is tracked well in the annotated video. If it's tracked well, you don't need to worry about these false detections. But if it's not, you may want to make the background subtraction work better. Here are some suggestions:

1. LabGym does have requirements on the illumination of the video: it might not work well on videos with highly variable illumination, and the contrast between the animal and the background should not be too low.
2. If the illumination in your video is quite stable and the contrast is not too poor, consider selecting a more appropriate time window for extracting the background and estimating the animal size. An appropriate window for background extraction is typically a 20-60 sec period during which the animal is moving around (if the animal is immobile during the whole period, it will be treated as static background). An appropriate window for estimating animal size is a 10-30 sec period during which the animal is always in the field of view. Both windows can be longer, but a longer duration means a longer processing time. (A minimal sketch of the background-extraction idea follows below.)
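For intuition, here is a minimal sketch of the time-window idea: taking the per-pixel median across frames from a window in which the animal moves around lets the moving animal average out, leaving only the static background. This uses plain OpenCV and is not LabGym's actual implementation; the file name, window bounds, and frame step are placeholders.

```python
# Sketch: estimate a static background as the per-pixel median over a
# time window in which the animal is moving. Not LabGym's implementation.
import cv2
import numpy as np

def estimate_background(video_path, start_sec=0, end_sec=30, step=10):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    start, end = int(start_sec * fps), int(end_sec * fps)
    frames = []
    for i in range(end):
        ok, frame = cap.read()
        if not ok:
            break
        # keep every `step`-th frame inside the chosen window
        if i >= start and (i - start) % step == 0:
            frames.append(frame)
    cap.release()
    # median across the sampled frames removes the moving animal
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

# 'mouse_day.mp4' and the 60-120 sec window are placeholder values
background = estimate_background("mouse_day.mp4", start_sec=60, end_sec=120)
cv2.imwrite("background.png", background)
```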
Let me know if these help.
Thanks! Will give it a try. Based on your reply, I have a question about the illumination. I use videos of mice housed in light-dark cycles, so a single 24-hour video will have different illumination depending on whether it is day or night, as attached below.
For the training set I can always sort the videos into day and night separately (if required), but eventually I want to infer behaviors across multiple days of lights-ON and lights-OFF cycles. Do you think the difference in illumination between day and night will affect the inference? If so, any suggestions?
The current version of LabGym can handle videos with one illumination transition from dark to light (but unfortunately not from light to dark). So if the video has multiple ON/OFF transitions, the background subtraction might not work well. This is a problem and I will work out a solution. For now, is it possible for you to run behavior inference separately on the ON and OFF periods? You don't need to trim the video; just make multiple copies with different analysis start times in the filenames. By the way, I haven't tried analyzing videos longer than 4 hours, and it takes about 12 hours for LabGym to finish behavior inference on a 4-hour video on a MacBook without a GPU, just for your information.
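To find where the ON/OFF periods begin in a long recording (so you can put the corresponding analysis start times in the filenames), a rough sketch like the following might help. It samples the mean frame brightness at fixed intervals; the sampling interval, the brightness threshold of 50, and the file name are assumptions that would need tuning for real recordings.

```python
# Sketch: locate light ON/OFF transitions in a long video by sampling the
# mean frame brightness. The threshold and interval are placeholder values.
import cv2

def find_light_transitions(video_path, sample_sec=60, threshold=50):
    """Return (time_in_sec, 'ON'/'OFF') at every detected lighting change."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    step = int(fps * sample_sec)
    transitions, prev_state, i = [], None, 0
    while True:
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)  # seek to the next sample point
        ok, frame = cap.read()
        if not ok:
            break
        state = "ON" if frame.mean() > threshold else "OFF"
        if prev_state is not None and state != prev_state:
            transitions.append((i / fps, state))
        prev_state = state
        i += step
    cap.release()
    return transitions

# 'mouse_96h.mp4' is a placeholder for a long light-dark-cycle recording
print(find_light_transitions("mouse_96h.mp4"))
```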
I will give it a try! My videos are 96 h long, and I do have a GPU, but PyTorch on that machine is configured for DeepEthogram, which I have been using for a while now. I see some PyTorch-related errors when I try LabGym on the same desktop, even if I create a separate Python env. I will try to resolve that issue first before I proceed. Thanks yujiahu!
After I generate animations using 'Generate Datasets', I get two or more folders named '0', '1', and so on. What do the multiple folders mean?
Within these folders, some of the saved animations are as one would expect -
However, some animations are just black-screen videos with paired motion images that look like the ones below, even though the mouse did not leave the field of view. Does this mean that the background subtraction fails for certain frames? Do I just discard them from the training dataset?