Gp1g / RFL

About episode #1

Open · i-hu opened this issue 1 month ago

i-hu commented 1 month ago

Hello, when we train on MNIST, the number of episodes is set to 0. If we use other datasets, such as CIFAR-10, how many episodes should we choose?

Gp1g commented 4 weeks ago

Hi i-hu. To get a fair comparison with the FL methods, we set the number of episodes to 0 for all experiments.

i-hu commented 4 weeks ago

Hi, does that mean the RL agent is trained along with the epochs, so that when we have trained for 200 epochs (if we set 200), the RL agent will have been trained at the same time?

Gp1g commented 4 weeks ago

Yes, your understanding is correct.
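
Roughly, the loop looks like this. A minimal runnable sketch with hypothetical stand-ins (the `BanditAgent`, `local_train`, and `evaluate` names are toy placeholders, not this repo's actual code):

```python
import random

# With episodes = 0 there is a single pass over the FL rounds, and the
# RL agent takes one training step per communication round / epoch.
NUM_EPOCHS = 200

class BanditAgent:
    """Toy stand-in for the RL agent: learns one value per client."""
    def __init__(self, n_clients):
        self.values = [0.0] * n_clients
        self.counts = [0] * n_clients

    def select_action(self):
        # epsilon-greedy choice of which client's update to trust
        if random.random() < 0.1:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda i: self.values[i])

    def update(self, action, reward):
        # incremental mean of the rewards seen for this action
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

def local_train(global_model, client_bias):
    # stand-in for local SGD: pull the scalar "model" toward the client's data
    return global_model + 0.5 * (client_bias - global_model)

def evaluate(model):
    # stand-in reward: higher when the model is close to the true value 1.0
    return -abs(model - 1.0)

client_biases = [0.8, 1.0, 1.2, 5.0, -3.0]   # last two act like bad clients
agent, global_model = BanditAgent(len(client_biases)), 0.0

for epoch in range(NUM_EPOCHS):
    action = agent.select_action()                       # RL step: pick a client
    updates = [local_train(global_model, b) for b in client_biases]
    global_model = updates[action]                       # aggregation guided by the action
    agent.update(action, evaluate(global_model))         # RL agent trained in the same epoch

print(f"final model after {NUM_EPOCHS} epochs: {global_model:.3f}")
```

The key point is that there is no outer episode loop: one FL epoch doubles as one RL step, so after 200 epochs the agent has also taken 200 updates.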

i-hu commented 4 weeks ago

Thank you, have a good day!

i-hu commented 2 weeks ago

Sorry to bother you at night; I have one more question. I found that your results seem to come from the 'Evaluate local model' part of the code, but what is the difference between this and the LOCAL and GLOBAL comparisons in your paper?

Gp1g commented 2 weeks ago

Hi, you can refer to Ditto or Lp-proj for more implementation and technical details. https://github.com/litian96/ditto?tab=readme-ov-file https://github.com/desternylin/perfed
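
In Ditto-style personalized FL, LOCAL usually refers to each client's personalized model evaluated on that client's own test set, while GLOBAL refers to the single shared model evaluated on every client's test set. A minimal self-contained sketch of the two protocols (dummy models and data, not the authors' exact code):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

@torch.no_grad()
def accuracy(model, loader):
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def dummy_loader(seed):
    # random stand-in for a client's private test set
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(64, 10, generator=g)
    y = torch.randint(0, 2, (64,), generator=g)
    return DataLoader(TensorDataset(x, y), batch_size=16)

# one shared global model plus one personalized model per client
global_model = nn.Linear(10, 2)
clients = [{"personal_model": nn.Linear(10, 2), "test_loader": dummy_loader(i)}
           for i in range(3)]

# LOCAL: each client's personalized model on its own test set
local_accs = [accuracy(c["personal_model"], c["test_loader"]) for c in clients]
# GLOBAL: the single shared model on every client's test set
global_accs = [accuracy(global_model, c["test_loader"]) for c in clients]

print(f"LOCAL  mean acc: {sum(local_accs) / len(local_accs):.3f}")
print(f"GLOBAL mean acc: {sum(global_accs) / len(global_accs):.3f}")
```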

i-hu commented 1 week ago

Hi, I found the PyTorch version of Ditto, but when I try to add an attack component to the Ditto code, the loss of the global model sometimes becomes hundreds of thousands, which makes it impossible for me to reproduce the comparative experiments. Can you please give some guidance? The code: https://github.com/TsingZ0/PFLlib
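
(For reference, one common way to keep the global loss bounded when a malicious client submits a huge update is to clip the norm of every client update before averaging. A minimal sketch with toy flat tensors; this is not PFLlib's actual aggregation code:)

```python
import torch

def clipped_mean(updates, max_norm=10.0):
    """updates: list of flat tensors (client_model - global_model)."""
    clipped = []
    for u in updates:
        # scale any update whose norm exceeds max_norm back down to max_norm
        scale = (max_norm / (u.norm() + 1e-12)).clamp(max=1.0)
        clipped.append(u * scale)
    return torch.stack(clipped).mean(dim=0)

# toy example: two honest updates and one huge (attacked) update
honest = [torch.randn(100) * 0.1 for _ in range(2)]
attacked = torch.randn(100) * 1e5
agg = clipped_mean(honest + [attacked])
print(f"aggregated update norm: {agg.norm():.3f}")   # stays bounded
```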