Closed · antoniyaaboyanova closed this issue 1 year ago
Thanks for getting in touch! I'm sorry, I didn't understand what problem you were referring to. What's the issue? Being specific is great, as it allows us to check the code and run tests and hopefully identify a problem.
Hi,
Thank you for your reply. I think that CORnet-S might not have the right pretrained weights when I am calling the model. The reason I suspect this is that the RSA analysis I am doing with the features extracted from this model just doesn't make sense, while the results for all of the other models are aligned with their behavioural performance. Additionally, I apply an SVM to the extracted features, and as I go deeper into the layers the SVM's performance gets better; that is the case with all the other models I've used, except for CORnet-S. So I am trying to figure out why this is so. Moreover, when I look at the final performance of the models on my images, CORnet-S is one of the best-performing models, so I am very confused as to why decoding based on its features performs so poorly.
Please don't hesitate to ask me further questions if something is unclear.
Any insight on the matter would be greatly appreciated, and suggestions on what I could do differently would also be helpful. I want to use your library for all feature extraction, as it keeps things very consistent.
Best, Antoniya
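The RSA comparison described above (correlating a model RDM with a behavioural RDM) can be sketched roughly like this; the data, shapes, and names are synthetic stand-ins, not the actual stimuli or features:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical layer features: 20 stimuli x 128 units
features = rng.normal(size=(20, 128))

# Model RDM: pairwise correlation distance between stimulus feature vectors
model_rdm = pdist(features, metric="correlation")

# Hypothetical behavioural RDM over the same 20 stimuli (stand-in data)
behav_rdm = pdist(rng.normal(size=(20, 5)), metric="correlation")

# RSA score: Spearman correlation between the two RDMs' entries
rho, p_value = spearmanr(model_rdm, behav_rdm)
print(f"RSA (Spearman rho): {rho:.3f}")
```

A near-zero or negative rho for CORnet-S, alongside sensible values for the other models, would match the pattern described above.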
Hi @antoniyaaboyanova ,
I checked our code for the different CORnet versions, and all models, including CORnet-S, correctly pull their pretrained state dicts from the official AWS bucket and load the weights. This is more or less identical to the official CORnet repository. Model outputs also differ between pretrained and random weight initialization, and the resulting RDMs for some images I had lying around look significantly different.
For me, this suggests that on our side things should work as intended. Could you maybe post the code you used to do your feature extraction and analyses? Also, what version of thingsvision do you use?
Best, Johannes
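The check described above, that model outputs differ between pretrained and random weight initialization, makes for a quick diagnostic anyone can run. A minimal sketch of the logic with a toy linear model standing in for CORnet-S (in practice the outputs would come from two feature extractions, one with pretrained weights loaded and one without):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=(1, 64))  # one fake input, flattened

# Loading the same "state dict" twice must give bit-identical outputs...
pretrained_w = rng.normal(size=(64, 10))  # stands in for downloaded weights
out_a = x @ pretrained_w
out_b = x @ pretrained_w  # second model instance, same loaded weights

# ...whereas a fresh random initialization should give clearly different outputs.
random_w = np.random.default_rng(7).normal(size=(64, 10))
out_rand = x @ random_w

print(np.array_equal(out_a, out_b))  # identical: weights were loaded, not re-drawn
print(np.allclose(out_a, out_rand))  # not close: random init differs from loaded weights
```

If pretrained and random extractions give indistinguishable features for CORnet-S specifically, that would point at weight loading; if they clearly differ, the problem is more likely downstream in the analysis.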
Could this for some reason be related to #46, or is it a mistake on @antoniyaaboyanova's end? Could you check, @antoniyaaboyanova? Posting the code you've used, as @andropar suggested, would definitely be helpful, as would letting us know your thingsvision version.
Hi,
Thanks for your reply and help! Attached you will find my code for the extraction + SVM. The thingsvision version I am using is 2.2.18. I do apply PCA to the features once I extract them, for storage purposes. Could this be having a weird effect on the features?
Best, Antoniya
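On the PCA question raised above: PCA itself should not scramble the features, but if it is fit on the whole dataset before the train/test split, or if too few components are kept for some layers, decoding can suffer. A hedged sketch of a leakage-free PCA-then-SVM evaluation with scikit-learn (synthetic features and labels; the component count of 50 is an illustrative assumption, not a recommendation):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))   # hypothetical layer features (200 stimuli)
y = rng.integers(0, 2, size=200)  # hypothetical binary labels

# Fitting PCA inside the cross-validation pipeline keeps test data out of
# the projection; comparing scores with and without the PCA step shows
# whether the reduction itself is what hurts decoding.
pipe = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="linear"))
scores = cross_val_score(pipe, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

It may also be worth inspecting the fitted PCA's `explained_variance_ratio_` per layer: if CORnet-S features need far more components to reach the same retained variance, a fixed component count would penalize that model specifically.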
@antoniyaaboyanova, you'll have to post the code into the GitHub issue; I don't think you can attach files when answering by mail! 🙂
Thanks for pointing that out! Sorry about that. I think the best way to show you what I have is by providing you with google colab links:
For the feature extraction: https://colab.research.google.com/drive/1BpBW_GBpDxm84oua5k-F3HPRV2Bg47uZ?usp=sharing
For the SVM on the features: https://colab.research.google.com/drive/1jzxS0LDaqNaG8ovgASPBBpnD1APA6FL_?usp=sharing
The links are set to viewer mode only. Let me know if this is not sufficient.
I think this is not an issue on our end. @andropar, can we close this?
Yes, I think so too.
Hi guys,
I am using your library to extract features from different models, including the CORnet family. With all of the models my results make sense (RSA with human data), except for CORnet-S. I ran a test where I use the model pretrained vs. not and couldn't see a systematic difference in the outcome. It's strange, because everything seems to work perfectly well for the features extracted from the other models (i.e. the other two CORnets, VGG16, AlexNet, ResNet50). So I wanted to ask whether anyone else has had issues with the CORnet-S features, and whether you have any suggestions on what the problem could be.