DanieleRomeo opened this issue 1 year ago
Hi @DanieleRomeo! I don't know enough about the behaviors and tracking data, but there is nothing, theoretically, stopping you if your tracking is solid and the behavior is salient (e.g., a human can tell the behaviours apart).
We've done a few fish projects lately, albeit single fish. Some general points:
Out of the box, SimBA mainly calculates variables related to movements, shapes, and distances between animals and their body-parts to build the classifiers. So, out of the box, you will easily be able to capture behaviors like “jolts”, while some other behaviors may be trickier with the default variables. For more complex fish behaviors we’ve had to lean further on circular statistics, documented HERE, and use these methods as documented HERE, through the SimBA GUI, to get good models going.
So it’s good to include two body-parts per animal in the tracking that allow you to calculate circular statistics, e.g., the head and swim bladder, or whatever the equivalent is in your species, as in the image below (not just the perimeter of the fish). If such body-parts are included, we can compute more appropriate statistics to capture what the human annotator is looking at. The image is from the side, but I have worked with it from above as well.
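To make the idea concrete, here is a minimal sketch (plain Python, hypothetical coordinates) of the kind of circular statistic two body-parts enable: a heading angle from the swim-bladder-to-head vector, and a circular mean that averages angles correctly across the 0°/360° wrap-around:

```python
import math

def heading_angle(head_xy, bladder_xy):
    """Heading (radians, 0..2*pi) of the vector from swim bladder to head."""
    dx = head_xy[0] - bladder_xy[0]
    dy = head_xy[1] - bladder_xy[1]
    return math.atan2(dy, dx) % (2 * math.pi)

def circular_mean(angles):
    """Mean direction of angles in radians, robust to wrap-around."""
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c) % (2 * math.pi)

# A fish pointing along the positive x-axis:
print(heading_angle((10.0, 5.0), (4.0, 5.0)))  # 0.0
# Averaging headings of 350 and 10 degrees:
print(math.degrees(circular_mean([math.radians(350), math.radians(10)])))
```

A naive arithmetic mean of 350° and 10° would give 180° (pointing backwards); the circular mean correctly lands at ≈0° (equivalently ≈360°).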
A caveat is that building your own feature-extraction class in SimBA, and pick-and-mixing these statistics calculators, requires moderate Python skills, which probably keeps many people away (I haven’t yet thought of a user-friendly way to do it graphically). If that is the case, I am happy to help put together and explain the code.
Lastly, we do not have domain knowledge about fish behavior; the calculators have been written under guidance from people with such knowledge telling us what they need. If you find something missing that you want calculated, let us know.
Thanks!
Simon
Dear Simon,
Thank you so much for your reply!
I am trying to use SimBA and I am really glad to see it is super user-friendly! I have managed to import my BORIS annotations and my H5 file from DLC. As you mention in your comment, the default feature extraction is based on mice, so I was trying to adapt this script to zebrafish (https://github.com/sgoldenlab/simba/blob/072963b338eb4d59cf9cb8738361dc9180ebf72f/misc/fish_feature_extraction_092221.py#L15). Unfortunately, my Python skills are very basic, so I would like to ask if you think it is a good idea to adapt this script, and whether you could help me. In my tracking file I have 2 animals with 8 body-parts.
In this script, I changed the body-parts to the ones I used for my tracking, but when I try to run it in the "Extract Features" section, I get errors that there are no modules named "simba.rw_dfs" or "simba.drop_bp_cords".
Thank you so much for your help!
Daniele
Thanks @DanieleRomeo for pointing this out - it is not your Python skills. This feature extraction script was written some years ago and is incompatible with current SimBA; it calls some methods that have since been moved around (mainly to make SimBA easier for me to work with).
I would not recommend going back to an earlier version of SimBA that supports this script (you are likely to hit other errors, and no-one supports these prior versions).
There is a more recent fish feature extraction script HERE that should work.
Again, it is written for one animal, though. I'm happy to adapt it to run for two animals with your specific body-parts if you bump into issues - if you share a sample of your SimBA project (so I know what the body-parts are called etc.).
Thank you so much, I would be really glad to accept your help!
I can share the project folder with you; inside there is also the config.yaml file from DLC!
Thank you so much again for your time and kindness!
No problem @DanieleRomeo !
The script I sent was used to score these behaviours on the Y-axis in single fish:
I had a brief look at your project, and you have one behavior called "Interaction", which probably depends on the proximity of the two fish relative to each other? If so, we can add some measurements of the relationships between the animals.
Not for now, but in case I forget - for the next batch :) it can be good to crop the videos around the tank prior to running them through pose estimation. If the videos are cropped around the tank, then (i) you will save a lot of space, as the videos are smaller and it's quicker to create visualizations; (ii) the pose estimation is likely to contain fewer errors, as objects and odd things outside the tank can't be mistaken for body-parts; and (iii) we can easily compute where the animals are in the tank relative to the top/bottom/left/right walls, as those are also represented by the top/bottom/left/right edges of the image.
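As a small illustration of point (iii): if the image edges coincide with the tank walls, wall distances reduce to simple pixel arithmetic. A sketch, with made-up frame dimensions and coordinates:

```python
def wall_distances(x, y, frame_width, frame_height):
    """Distance (pixels) from a body-part to each tank wall, assuming the
    video is cropped so the image edges coincide with the tank walls."""
    return {
        "left": x,
        "right": frame_width - x,
        "top": y,
        "bottom": frame_height - y,
    }

# e.g. a fish body-part at (100, 400) in a 1280x720 frame:
print(wall_distances(100, 400, 1280, 720))
# {'left': 100, 'right': 1180, 'top': 400, 'bottom': 320}
```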
Yes, I am investigating two behaviours: the interaction, when the two fish (the cleaner, i.e. the elongated-shaped fish, and the client) are close to each other (the cleaner inspects closely or bites the skin of the client), and the cheating, when the cleaner bites the client and provokes a "jolt" (a rapid movement away from the cleaner).
Yes, I will definitely pre-process the next batch, thank you so much for the suggestion and for the clarification!
Great, thanks, that's very helpful - those are rather salient and should be easy to catch. Stay with me though, I won't be able to get to this immediately!
Sure, let me know if you need any other information! and thank you again for your time!
Sorry to bother again, I have just a quick question.
If I understood correctly, once I have my model I can process new videos. However, to do that, I'll always need a pose-estimation file for each video before processing it through SimBA (I am using DLC, so H5 files), right?
No problem @DanieleRomeo! Yes, you obviously won’t need to train new behavioral models. But the pose-estimation input data needs to be created in a dedicated tool like DLC; it can’t currently be done within SimBA.
I can see how this may be an issue with your data - you have a lot of it - and you may run out of space and time as the number of videos grows and you want to do this at scale. Some notes below on how to get around this! I have zero experience with these kinds of fish, but for what it is worth anyway:
The resolution and FPS are high. I see 59 images a second at more than 2k x 1.5k pixels. You can probably decrease both the fps and the resolution substantially, at least for these behaviors - as long as you can see the behavior by eye. E.g., if you can see the behaviors you are interested in at 25 fps by eye, then the DLC processing time will be more than halved; decreasing the resolution will also help.
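For what it's worth, one common way to lower fps and resolution in a single pass is ffmpeg. This sketch only builds the command (the filenames are placeholders; the `fps` and `scale` filters are standard ffmpeg):

```python
import subprocess

def downsample_cmd(src, dst, fps=25, scale=0.5):
    """Build an ffmpeg command that lowers frame rate and resolution.
    scale=0.5 halves the width; the -2 keeps the height even and
    preserves the aspect ratio."""
    vf = f"fps={fps},scale=iw*{scale}:-2"
    return ["ffmpeg", "-i", src, "-vf", vf, "-c:a", "copy", dst]

cmd = downsample_cmd("trial_01.mp4", "trial_01_small.mp4")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```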
Also I noticed glare:
You can probably get rid of some of it by cropping out the sides, or by positioning the camera so the sides of the tank are more obscured. Another, last-resort option is to teach the pose-estimation model what glare is, so that it learns to ignore it.
Sometimes combined with the glare and sometimes not, there seem to be some identity switches. It would be good to minimize these by improving the pose model in DLC.
I see a lot of missing body-parts. For LB_el, each body-part has no tracking data in about half the frames (around 20k frames), probably caused by the animals swimming in the Z direction. I interpolated all this missing data using the “Body-part: Nearest” option in SimBA during import. Meaning, if the animal is swimming exclusively in the Z direction, SimBA will think it is standing still rather than missing. Not ideal, but it will work; it could probably be limited by a shallower tank, but I'm not sure if that is viable.
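For clarity, "nearest" interpolation fills each gap with the value of the closest frame that does have tracking. A toy sketch (pure Python, not SimBA's actual implementation) of that behavior:

```python
def interpolate_nearest(values):
    """Fill None gaps with the value of the nearest observed frame
    (ties go to the earlier frame). A sketch of what a 'nearest'
    body-part interpolation does, not SimBA's actual code."""
    observed = [i for i, v in enumerate(values) if v is not None]
    out = []
    for i, v in enumerate(values):
        if v is not None:
            out.append(v)
        else:
            nearest = min(observed, key=lambda j: (abs(j - i), j))
            out.append(values[nearest])
    return out

# x-coordinates with a gap while the fish swims along the z-axis:
print(interpolate_nearest([12.0, None, None, None, 30.0]))
# [12.0, 12.0, 12.0, 30.0, 30.0]
```

Note how the gap is split: the animal appears to "stand still" at the old position, then jump to the new one, which is the caveat mentioned above.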
There are 8 body-parts tracked per animal. I don’t know all the behaviors you may want to score, but you probably don’t need that many, at least for the behaviors in the example project; some locations could potentially be inferred post-hoc (e.g., body-part A is always half-way in-between body-parts B and C, etc.). Generally it’s most important to track the outer boundaries of the animals. Also, the movements of body-parts can be collinear: if the animal can’t move one without moving the other, then having the movement information of one body-part gives you that of the other. Worth thinking about when you choose which body-parts to track.
There are sections at the beginning of the videos where a human is in the frame noting the trial start etc. I’d clip those off before doing pose estimation; we don’t want the models to try to use that when training a model. Ultimately, your behavior classifier will only be as good as the data going in, so it’s worth making sure the tracking is as good as it can be and pre-processing the data as early as possible.
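If it helps, clipping off the start can also be done with ffmpeg without re-encoding. This sketch only builds the command (filenames and the start time are placeholders); note that stream-copy cuts snap to the nearest keyframe:

```python
import subprocess

def clip_start_cmd(src, dst, start_seconds):
    """Build an ffmpeg command dropping everything before `start_seconds`
    (e.g. the experimenter noting the trial start). Stream-copies, so no
    re-encode; the cut lands on the nearest keyframe."""
    return ["ffmpeg", "-ss", str(start_seconds), "-i", src, "-c", "copy", dst]

print(" ".join(clip_start_cmd("trial_01.mp4", "trial_01_trimmed.mp4", 12)))
# subprocess.run(..., check=True) to actually run it
```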
Thank you so much for your comments and explanation, you are very clear, so appreciated!
I will cut, downscale and crop the videos to standardise everything and reduce the analysis time! I'll also create a new DLC model following your points, trying to improve the pose-estimation model and labelling more frames to avoid identity switches.
I really cannot thank you enough, hope this is not taking too much of your time!
Before creating a new DLC model, I am wondering: if I change the body-parts to the ones in the picture, to better capture the shape of the fish, will the script still work? Or is it specific to the body-parts I previously sent?
Yeah, that will work! I will send you some descriptions later today of what's computed - if you have the image where the body-part names are annotated, that would help, so I know what is what.
I wrote this snippet to compute the features below. You can use it in the GUI as you did with the earlier one that failed (after unzipping). We need to update the body-part names at the top when you get a new pose-model going.
two_fish_feature_extractor_040924.py.zip
It computes:
How the animals move on the X relative to the Y axis - description
Velocity and acceleration of each animal - description
The correlation between the two animals' velocities and accelerations - including how each animal's velocity is correlated with lagged versions of itself and of the other animal - description
The direction of the animals - description
Instantaneous angular velocities - description - and instantaneous rotations
Animal areas - description
Animal body-part distances - description
Most are computed in rolling time windows (min, max, median), so all in all about 150 measurements per frame. From the way you describe them, the behaviors seem mainly to be judged by the proximity of the animals and how much they move.
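To illustrate what the proximity features look like, here is a toy sketch (plain Python, made-up coordinates, not the code from the attached script) of an inter-animal distance plus trailing-window min/max/median:

```python
import math
import statistics

def inter_animal_distance(xy1, xy2):
    """Euclidean distance (pixels) between one body-part on each fish."""
    return math.dist(xy1, xy2)

def rolling_stats(series, window):
    """Min, max and median of `series` over a trailing window of frames."""
    out = []
    for i in range(len(series)):
        win = series[max(0, i - window + 1): i + 1]
        out.append((min(win), max(win), statistics.median(win)))
    return out

# distance between the two fishes' head body-parts, frame by frame:
dists = [inter_animal_distance(a, b) for a, b in [
    ((0, 0), (3, 4)),   # 5.0 px apart
    ((0, 0), (6, 8)),   # 10.0 px apart
    ((0, 0), (0, 2)),   # 2.0 px apart
]]
print(rolling_stats(dists, window=2))
# [(5.0, 5.0, 5.0), (5.0, 10.0, 7.5), (2.0, 10.0, 6.0)]
```

In the real feature set the same rolling treatment is applied across several window sizes and measurements, which is how you end up with ~150 features per frame.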
As I mentioned, I don’t know these animals and behaviors, and often it is important to have domain knowledge of what exactly separates a jolt from a near-jolt etc., so I am kind of guessing here, lol. If you see anything in the docs, or think of something that you know is a good proxy for your behaviors and annotations, we can add it to the code.
Great, thank you!
Yes, you are right: the interaction is when the two fish are close to each other, while the jolt is when the client moves rapidly away from the cleaner (so it should be a matter of position and acceleration). Therefore, with those parameters it should be possible to detect both behaviours.
The list of the new body parts is this:
So I can just change the parts:
MID_BODYPARTS = ['BodyMidUp_1', 'BodyMidUp_2']
MOUTH_BODYPARTS = ['HeadTerminalMouth_1', 'HeadTerminalMouth_2']
HEAD_MID = ['HeadBasisDown_1', 'HeadBasisDown_2']
Hello,
Sorry to bother again!
I am having trouble getting the script to work! When I try to run it I get this error; do you know how I can fix it?
I am using v. 1.87.4
Thank you again!
Exception in thread Thread-1:
Traceback (most recent call last):
File "/home/fishlab4/anaconda3/envs/Simba/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/home/fishlab4/anaconda3/envs/Simba/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/fishlab4/anaconda3/envs/Simba/lib/python3.6/site-packages/simba/SimBA.py", line 1456, in run_feature_extraction
custom_feature_extractor.run()
File "/home/fishlab4/anaconda3/envs/Simba/lib/python3.6/site-packages/simba/utils/custom_feature_extractor.py", line 220, in run
spec.loader.exec_module(user_module)
File "
No problem - there are two ways to fix it, either:
i) Update SimBA with pip install simba-uw-tf-dev --upgrade
, or
ii) Comment out lines 77-79 in the python code I sent:
I'd suggest trying the first option first, though.
Let me know how it goes.
Yes, it is working! I am using 1.90.2 and the script worked!
Thank you so much!
Great, just let me know if any other issues come up!
Hey,
Yes I am having another problem, sorry!
Once I have the model and I try to create a path plot in "Visualization", I get this error:
simba.utils.errors.InvalidInputError: SIMBA VALUE ERROR: input_style_attr requires (<class 'dict'>,), got <class 'NoneType'>
And if I try to modify the style settings, setting "milliseconds" and "Max Prior Lines" to 2000 to show only the path of the last 2 seconds, I get this other error:
Traceback (most recent call last):
File "/home/fishlab4/anaconda3/envs/Simba/lib/python3.6/tkinter/__init__.py", line 1705, in __call__
return self.func(*args)
File "/home/fishlab4/anaconda3/envs/Simba/lib/python3.6/site-packages/simba/ui/pop_ups/path_plot_pop_up.py", line 247, in
However, I can create a path-plot video only if I select "Entire Video", but then the created video tracks the path of the fish across the whole video without the lines disappearing, so they always overlap with the previous track.
Do you have any suggestion?
Thank you so much
Thanks @DanieleRomeo! You caught me... I was just working on optimizing these functions and fixing some bugs that cause the first images, when creating videos using multiprocessing, to look a little off. I will look into this one too and let you know when it's fixed - and if you could test it on your end after that, it would be super helpful.
@DanieleRomeo - if you do a pip install simba-uw-tf-dev --upgrade
and get version 1.90.4 - how do the path plots run on your end?
Ahah great! Yes it is working perfectly now, thank you so much again!
Hello,
We work with cleaner fish, studying cleaning behavior on coral reefs. In this interaction, the cleaner fish cleans another fish, called the "client", staying close to it and biting the surface of its body to eat parasites and dead skin.
My study mainly focuses on the "cheating behavior", when the cleaner fish eats the mucus of the client instead of the parasites. This provokes a clear reaction in the client, identified as the "jolt": a rapid movement of the client away from the cleaner fish.
Do you think SimBA could be a good tool to analyse this kind of interaction, considering the interaction time (time spent together, where the cleaner fish cleans the client), the dance (wide longitudinal movements of the cleaner fish to attract the attention of the client), the tactile stimulation (where the cleaner fish stays on the dorsal part of the client, providing a massage with its pelvic fins), and the cheating behavior (considering that so far we only have video from a frontal view)?
I would like to hear your thoughts before proceeding to try it!
Thank you so much for your time and for your precious help!
Daniele