rfelixmg / frwgan-eccv18

Code for the model presented in our paper accepted at the European Conference on Computer Vision (ECCV) 2018.

How to use h5py to create the specific structure? #2

Closed Hanzy1996 closed 5 years ago

Hanzy1996 commented 5 years ago

I downloaded all the datasets, which are .mat files, but how do I create the h5 files?

Best wishes!

rfelixmg commented 5 years ago

Just organize your dataset into a dictionary, and then use: https://github.com/rfelixmg/util

```python
from util.storage import DataH5py

DataH5py().save_dict_to_hdf5({'dictionary': ['data']}, '/tmp/location')
```
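
For example, a minimal sketch of this step (the `res101.mat` keys `features` and `labels` below are assumptions based on the standard split files; inspect your own .mat files and adapt the keys):

```python
import scipy.io as sio
from util.storage import DataH5py

# Load one of the .mat files; the key names below are assumptions
# based on the standard split files (check yours with .keys()).
res101 = sio.loadmat('res101.mat')

# Organize the arrays into a plain Python dictionary.
dataset = {
    'features': res101['features'].T,      # one row of visual features per image
    'labels': res101['labels'].squeeze(),  # integer class label per image
}

# Save the dictionary to an .h5 file.
DataH5py().save_dict_to_hdf5(dataset, '/tmp/data.h5')
```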

Soon I will upload the database processing.

Cheers,

Hanzy1996 commented 5 years ago

Much appreciated! My dataset is shown below, but I don't quite understand how to organize it into a dictionary. Could you please give me more details?

[screenshot of the downloaded .mat dataset files]

And after organizing it, can I just run the code you mentioned, replacing '/tmp/location' with my own output path?

Best wishes!

Hanzy1996 commented 5 years ago
[screenshot of the data.h5 structure]

For data.h5, I tried to load the data and define a dictionary like `dictionary = {'test': {'seen', 'unseen'}, 'train', 'val'}`. Could you please tell me what `A` means and where I should put it in the dictionary? For knn.h5, I don't understand all of the keys, such as `openset`, `openval`, `zsl`, `class2id`, `id2class`, `id2knn`, and `knn2id`.

The structure of att_splits.mat is shown below.

[screenshot of the att_splits.mat structure]

The structure of res101.mat is shown below.

[screenshot of the res101.mat structure]
rfelixmg commented 5 years ago

Hi @Hanzy1996,

I just updated the repository with the dataset processing, as well as the knn generation.

https://github.com/rfelixmg/frwgan-eccv18/blob/master/notebooks/dataset/processing.ipynb

I hope this helps you understand the procedure. In addition, I have added the already-processed datasets needed to run the experiments.
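
For orientation, here is a hedged sketch of the kind of nested dictionary `DataH5py` can serialize, mirroring the `train`/`val`/`test:{seen, unseen}` groups discussed above. The leaf names `X`, `Y`, and `A` (visual features, labels, attributes) are assumptions for illustration; the processing notebook is the authoritative reference:

```python
import numpy as np
from util.storage import DataH5py

def split(n):
    # Hypothetical leaves: X = visual features, Y = class labels,
    # A = per-sample attribute (semantic) vectors.
    return {'X': np.zeros((n, 2048)), 'Y': np.zeros(n), 'A': np.zeros((n, 312))}

data = {
    'train': split(100),
    'val': split(20),
    'test': {'seen': split(30), 'unseen': split(30)},
}

# Nested dictionaries become nested HDF5 groups in the saved file.
DataH5py().save_dict_to_hdf5(data, '/tmp/data.h5')
```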

Hanzy1996 commented 5 years ago

Much appreciated!

Akash481 commented 3 years ago

I am trying to run this on my custom dataset. I did not understand much of it, so I am hoping to create the att_splits.mat and res101.mat files for my custom dataset and see the results. I cannot work out what `att` and `original_att` (312x200) are. Could you please tell me what they represent and how to generate them?

rfelixmg commented 3 years ago

You will find more information about `att` (312x200) in the original dataset paper. These should be the human-annotated attributes from CUB and SUN.

http://www.vision.caltech.edu/visipedia/CUB-200.html
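
As a hedged illustration of how those matrices are usually read (assuming the standard `att_splits.mat` layout, where `original_att` stores CUB's 312 raw attributes for each of the 200 classes, and `att` is commonly a normalized copy):

```python
import numpy as np
import scipy.io as sio

splits = sio.loadmat('att_splits.mat')

original_att = splits['original_att']  # (312, 200): 312 attributes x 200 classes
att = splits['att']                    # same shape; assumed column-normalized

# Column j is the attribute signature of class j. For a custom dataset you
# would need to supply your own per-class attribute annotations of this form.
print(original_att.shape)
print(np.allclose(np.linalg.norm(att, axis=0), 1.0))  # check the normalization guess
```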


Hanzy1996 commented 3 years ago

Hi @rfelixmg, I have one more small question: how do I get the 1024-dim CNN-RNN features of CUB?

In the original code for "Learning Deep Representations of Fine-Grained Visual Descriptions", they provide a model pre-trained on CUB, but their code is written in Lua, which is quite hard for me. Did you extract the CNN-RNN features with their pre-trained model? How did you obtain these features? Could you please share the process or code?

Much appreciated!

rfelixmg commented 3 years ago

@Hanzy1996, I only downloaded their features and processed them into h5py format. I'm not sure how hard it would be to get their version of the code running. If you are JUST interested in language models for encoding sentences, you might take a look at BERT models [1].

[1] Devlin, J., Chang, M.-W., Lee, K. and Toutanova, K., 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv preprint arXiv:1810.04805.
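
For example, a minimal sentence-encoding sketch with a pretrained BERT via the HuggingFace `transformers` library (my suggestion only; this library and model are not part of this repository):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

sentences = ['a small bird with a red head and a white belly.']
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    out = model(**inputs)

# Mean-pool the token embeddings into one 768-dim vector per sentence.
mask = inputs['attention_mask'].unsqueeze(-1)
embeddings = (out.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embeddings.shape)  # torch.Size([1, 768])
```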

Best wishes, Rafa.

Hanzy1996 commented 3 years ago

Thank you for your reply. I have checked their code but cannot find the features. Could you please provide a link or path to download them? Are the features contained in the pre-trained model files, or somewhere else?

I have successfully extracted the features from your h5py files and am very grateful for your effort, but I would still like to obtain these features myself. I would really appreciate your help.

Best wishes!