LechengKong / OneForAll

A foundational graph learning framework that solves cross-domain/cross-task classification problems using one model.
MIT License

How to replicate the results in Table 4? #15

Closed Wangyuwen0627 closed 4 months ago

Wangyuwen0627 commented 4 months ago

Thanks for your outstanding work. I'm very interested in the few-shot and zero-shot learning experiments. I have a few questions that I would like to seek your guidance on:

  1. What does the parameter d_min_ratio signify?
  2. How should I adjust the parameters to reproduce the results in Table 4? I attempted to directly execute python run_cdm.py --override lr_all_config.yaml, but I could not reach the reported results. For instance, I only obtained a test performance of 38.60 on ogbn-arxiv 5-way 5-shot.

I would be immensely grateful if I could receive your response.

Wangyuwen0627 commented 4 months ago

Sorry, I just noticed that you updated your code; I got my results with the previous version. However, I have a problem when trying to run the updated one: you load eval_config.yaml in lines 241--247 of run_cdm.py, but this file does not seem to be provided. How can I get this file? Thank you once again, and I am looking forward to your reply.

LechengKong commented 4 months ago

Hi @Wangyuwen0627, thanks for raising the issue. That code was added by mistake; please pull the latest version and try again. Thanks!

LechengKong commented 4 months ago

For d_min_ratio: it controls the minimum fraction of a dataset used in one epoch. We decrease the amount of data drawn from a dataset in each epoch if that dataset's validation score stops improving. However, we do not want any dataset to be completely removed from the epoch, so we set d_min_ratio such that at least d_min_ratio*len(dataset) examples from that dataset still appear in each epoch.
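The clamping described above can be sketched roughly as follows. This is an illustrative example, not the OneForAll implementation; the function and parameter names (`epoch_subset_size`, `sample_epoch_data`, `ratio`) are hypothetical:

```python
import random


def epoch_subset_size(dataset_len, ratio, d_min_ratio):
    """Return how many examples of a dataset to use this epoch.

    `ratio` is the (possibly decayed) sampling fraction for the dataset;
    it is clamped from below so that at least d_min_ratio * dataset_len
    examples are always included.
    """
    effective_ratio = max(ratio, d_min_ratio)
    return int(effective_ratio * dataset_len)


def sample_epoch_data(dataset, ratio, d_min_ratio=0.1):
    """Draw this epoch's subset of `dataset` without replacement."""
    n = epoch_subset_size(len(dataset), ratio, d_min_ratio)
    return random.sample(list(dataset), n)
```

So even if the decay schedule drives `ratio` toward zero for a stagnating dataset, the floor guarantees the dataset keeps contributing a fixed minimum share of examples per epoch.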

Wangyuwen0627 commented 4 months ago

Thank you for your reply, I will try again! ^_^