Thank you so much for your help! But I got some bugs when I chose the Omniglot dataset. Have you met those bugs before? Is it possible that a related package version is wrong? Would you please share your requirements.txt?
Hello, I am just a beginner at this. Could you tell me what data_folder should link to? There are several data_folder assignments in that function. Thanks!
Are you talking about data_folder in get_embeddings function in datasets/utils.py? If yes, you have to link the precomputed embeddings (you can find them in the link provided in the README file under "Embeddings"), according to the type of embeddings and the dataset you want to use.
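For example, if you downloaded the BiGAN embeddings, the assignment inside get_embeddings would roughly look like this (a hedged sketch; the path below is just a placeholder for wherever you extracted the files):

```python
# Sketch only: point data_folder at the directory where you extracted the
# precomputed BiGAN embeddings downloaded from the README link (placeholder path).
data_folder = "/absolute/path/to/bigan_encodings/"
```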
Thanks for the reply! Yes, I am talking about the data_folder in the get_embeddings function in datasets/utils.py. Do you mean that these data_folder variables are supposed to link to another project's precomputed embeddings, as prior knowledge, I guess? So, since I use the Omniglot dataset, should I replace data_folder = 'path_to_bigan_encodings' with the path to the "baseline.sh" in the files I downloaded from your README? Forgive me for my inexperience; I am an undergraduate and my task is to run a recent few-shot learning project and make a demo.
Thank you again! That's so nice of you to help me! Wish you a good time!
We use precomputed embeddings obtained with other methods (Deep Cluster, ACAI, BiGAN) to cluster data and assign them pseudo-labels. To facilitate the use of our model, we provide our precomputed embeddings, which you can find at the link in the README. You have to download these files, put them in a folder (for example, you can create a new folder called "data" that contains sub-folders such as "acai_encodings", "bigan_encodings" and "deepcluster_encodings"), and finally set data_folder = your_absolute_path+"data/acai_encodings/", for example, to use the Omniglot dataset with ACAI embeddings.
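Concretely, that last step might look like this (a minimal sketch; your_absolute_path below is a placeholder for wherever you created the "data" folder):

```python
import os

# Hypothetical layout after downloading the precomputed embeddings linked in the README:
#   data/acai_encodings/
#   data/bigan_encodings/
#   data/deepcluster_encodings/

# In get_embeddings (datasets/utils.py), set data_folder to the sub-folder matching
# the embeddings you want to use, e.g. ACAI embeddings for Omniglot:
your_absolute_path = os.path.expanduser("~/")  # placeholder: wherever you created "data"
data_folder = os.path.join(your_absolute_path, "data", "acai_encodings") + "/"
print(data_folder)  # e.g. /home/you/data/acai_encodings/
```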
@jiangjiaxi96 Here is my requirements.txt. I use this environment also for other projects, so not all the packages are necessary for this repo.
Hi @jiangjiaxi96, we added to the README a usage example on the Omniglot dataset and the link to the Omniglot and Mini-Imagenet datasets (embeddings included). If this doesn't clear your doubts, please let me know. Best, Ste