Closed: enajx closed this issue 3 years ago
Hi Elias, all of the weights you linked to are from rats and use 20 landmarks as output. Where did you see that they have 22?
On Wed, Jul 7, 2021, 4:27 PM Elias Najarro wrote:

> Hi, are the weights of the networks trained on rats that are mentioned in the paper available? The provided models (https://github.com/spoonsso/dannce/tree/master/demo/markerless_mouse_1/DANNCE/weights) seem to be the models fine-tuned on mice (since the network output has 22 landmarks rather than 20).
I followed the Quickstart Demo and the `pred` variable in the generated file `predict_resuts/save_data_AVG0.mat` has dimensions 1000x3x22. Also, when I use the script `run_dannce_predict.sh` to generate the `predictions.mat` file from `save_data_AVG0.mat`, it contains a structure with 22 landmarks, not 20.
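For anyone checking their own output: the landmark count is the last axis of `pred`. A minimal sketch, using the shape reported above as a stand-in for actually loading the file with `scipy.io.loadmat`:

```python
# Stand-in for the shape of
#   pred = scipy.io.loadmat("predict_resuts/save_data_AVG0.mat")["pred"]
# The real array has shape (n_frames, 3, n_landmarks).
pred_shape = (1000, 3, 22)

n_frames, n_dims, n_landmarks = pred_shape
print(n_landmarks)  # 22 -> mouse-finetuned weights; 20 -> rat weights
```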
Ah, that's because those rat weights were fine-tuned with 22-landmark mouse data, which is what we used for mouse. This finetuning produces the mouse network weights in `DANNCE/train_results`, which are then used by the demo `dannce-predict`.
The rat weights can be used to generate 20-landmark predictions if you feed them rat data (see our Rat7M dataset on figshare). You can also have them generate 20-landmark outputs on the mouse videos (set the `dannce_predict_model` parameter to point to the rat weights), but I'm not sure how interpretable they will be.
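For concreteness, a prediction-config sketch; `dannce_predict_model` is the parameter named above, but the weights path is a hypothetical example, not a file shipped with the demo:

```yaml
# Prediction config sketch. `dannce_predict_model` is the parameter named
# in the comment above; the path below is an illustrative assumption.
dannce_predict_model: ./DANNCE/weights/rat.AVG.hdf5
```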
I see, that explains it.
We're trying to test it with our own rat data (videos from 3 synced cameras); which model exactly would be appropriate for this? The weight file names (`weights.12000-0.00014.hdf5`, `weights.1200-12.77642.hdf5`) are a bit cryptic, and I couldn't find a description of what they correspond to anywhere.
Could you also point me to the rat skeleton file? I could only find the mouse one.
Thank you for the help Timothy, it's really appreciated.
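A likely explanation for the weight file names (an assumption, not confirmed in the thread): they follow the Keras `ModelCheckpoint` convention of embedding the training epoch/step and the loss value in the filename. Under that reading, `weights.12000-0.00014.hdf5` is a checkpoint from step 12000 with loss 0.00014 (well converged), while `weights.1200-12.77642.hdf5` has a much higher loss:

```python
# Assumed checkpoint naming pattern: "weights.<epoch>-<loss>.hdf5",
# as produced by a Keras ModelCheckpoint filepath template.
pattern = "weights.{epoch}-{loss:.5f}.hdf5"

print(pattern.format(epoch=12000, loss=0.00014))  # weights.12000-0.00014.hdf5
print(pattern.format(epoch=1200, loss=12.77642))  # weights.1200-12.77642.hdf5
```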
Hi Tim,
Can we use the rat weights to train a network with fewer than 20 labels?
Thanks
@morbent Yes, you can! The rat pup data in the paper is an example of this.
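For reference, finetuning to a smaller landmark set is configured through the number of output channels. A config sketch, where the parameter names (`dannce_finetune_weights`, `n_channels_out`) are assumptions based on dannce's configuration conventions and may differ across versions:

```yaml
# Finetuning config sketch (parameter names are assumptions; check the
# dannce docs for your version). The weights path is a hypothetical example.
dannce_finetune_weights: ./DANNCE/weights/
n_channels_out: 5   # number of landmarks in the new dataset (e.g. rat pups)
```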
Thanks Diego,
When we tried to use it to fine-tune on an animal with 5 labels, it gave an error (similar to the one posted in issue #50).
@enajx
> Could you also point me to the rat skeleton file? I could only find the mouse one.
Label3D has a few skeleton files in `Label3D/skeletons`.
> We're trying to test it with our own rat data (videos from 3 synced cameras); which model exactly would be appropriate for this?
A general rule is that you want the model you're finetuning to have been trained on the same number of cameras and color channels. Even though you can come up with many workarounds to use models with data of different input shapes, we find that keeping the number of cameras and color channels the same gives the best results. The models currently in the repo include 6-cam RGB (AVG and MAX) and 6-cam MONO (AVG). You'll want a 3-camera RGB or 3-camera MONO model trained on Rat7M, depending on whether your data is RGB or mono. I have some 3-cam RGB models trained on Rat7M that we used to finetune the rat pup data. I could send them to you if your data is RGB; otherwise you'll need to wait for Tim for the mono versions.
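The matching rule above can be sketched as a small lookup (a hypothetical helper, not part of dannce; the available-model list is taken from this comment):

```python
# Pretrained Rat7M models currently in the repo, per the comment above,
# keyed by (n_cameras, n_color_channels). The table is illustrative.
PRETRAINED = {
    (6, 3): "6-cam RGB (AVG and MAX)",
    (6, 1): "6-cam MONO (AVG)",
}

def pick_model(n_cameras: int, n_channels: int):
    """Return a matching pretrained model description, or None if no
    weights with that camera/channel configuration are in the repo."""
    return PRETRAINED.get((n_cameras, n_channels))

print(pick_model(6, 3))  # 6-cam RGB (AVG and MAX)
print(pick_model(3, 3))  # None -> need the 3-cam RGB Rat7M weights
```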
@spoonsso I suspect this and potentially other issues (#50, #54) could be addressed by adding the collection of Rat7M-trained DANNCE and COM models to the Rat7M figshare, if they are not already there.
@diegoaldarondo
We are working with 3 RGB cameras, so if you could share the 3 cam model trained on rat7m that would be fantastic!
Thank you for pointing me to the skeleton files, I'll check them out.
@morbent for the shape errors you are experiencing, please see my recent comment in #50.
@spoonsso @diegoaldarondo
Could you please share the link to the 3 cam RGB model trained on rat7M you mentioned?
Hi @enajx. I've put links to these weights in https://github.com/spoonsso/dannce/issues/62#issuecomment-893714409.