forchchch / JTDS

This repository contains the code for the paper "Auxiliary Learning with Joint Task and Data Scheduling".

Reproducing experiments #1

Open abwilf opened 2 years ago

abwilf commented 2 years ago

Hi! Thank you for the codebase. Could you please list the commands necessary to reproduce the experiments from your paper with CUB? I would like to cite the paper, but I'd like to verify the results first. Specifically, I'd like to reproduce the results for N-JTDS and JTDS on CUB from Table 2 (supervised) and Table 3 (semi-supervised), and with different corrupted ratios (Table 4). What are the commands you used to run those experiments? Your README says the commands necessary to reproduce the results are in run_bilevel.sh, but I only see one command there.

forchchch commented 2 years ago

Hello, thanks for your attention to our work! To reproduce the experiments with different corrupted ratios, you only need to change the config in run_bilevel.sh by setting `--corupted 1` and `--corupted_ratio 0.2` (or any other ratio you like), and then run run_bilevel.sh as before. We will add the code for N-JTDS to the repo soon.

Best


abwilf commented 2 years ago

Great, thank you. So the `--method common` flag is correct? When I run the code with it, performance is significantly worse than stated. Is this the baseline? Would you please write out the full commands for reproducing these results, so I can make sure to evaluate your method fairly?

  1. Table 2 JTDS performance
  2. Table 3 JTDS performance
  3. Table 4 JTDS performance

Thank you for your help!

forchchch commented 2 years ago

To use the joint scheduling method, set the method to "joint" as described in the updated Readme.md. To reproduce the results in Table 4, set the `corupted` flag to 1 and choose the corrupted ratio you need. For the semi-supervised setting, the dataset split is not included in the codebase, but you can modify the dataset slightly to implement it yourself.
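Since the split is not in the repo, here is a minimal sketch of one way to build a semi-supervised partition of a dataset's indices; the function name, the labeled fraction, and the dataset size are hypothetical illustrations, not part of the JTDS codebase:

```python
import numpy as np

def split_semi_supervised(n_samples, labeled_fraction, seed=0):
    """Randomly partition dataset indices into labeled and unlabeled subsets.

    n_samples: total number of training examples (e.g. the CUB train set size).
    labeled_fraction: fraction of examples that keep their labels.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_samples)
    n_labeled = int(n_samples * labeled_fraction)
    return perm[:n_labeled], perm[n_labeled:]

# Example: keep labels for 20% of 1000 examples.
labeled_idx, unlabeled_idx = split_semi_supervised(1000, 0.2)
print(len(labeled_idx), len(unlabeled_idx))  # 200 800
```

The two index arrays can then be used to build labeled and unlabeled dataset views (for instance via `torch.utils.data.Subset`) without touching the underlying data files.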