Closed TheaperDeng closed 4 months ago
Hi @TheaperDeng, Should all the benchmark experiments share the same script?
I should think so, but it seems hard for some complicated experiment settings, e.g., nanoGPT and Music Transformer. I guess it's OK to let those complicated ones have their own scripts.
For this PR, I will
@tingwl0122 please also have a look, I think we may make "maestro_musictransformer" another setting later.
so directly pair up dataset and model?
I think we can split `--setting` into `--model` and `--dataset`? And then we can assert that the combination is in our supported scope? Actually `--model` and `--dataset` seem clearer?
I think so. So basically the name of the file should follow `dataset_model.py` under `dattri/benchmark/dataset/`.
right
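The flag split and combination check discussed above can be sketched as follows. The `--dataset`/`--model` flags come from the thread; the `SUPPORTED` pairs and the error message are hypothetical placeholders for illustration, not the project's actual list:

```python
import argparse

# Hypothetical supported (dataset, model) pairs -- placeholders for
# illustration, not the project's real benchmark scope.
SUPPORTED = {
    ("mnist", "lr"),
    ("mnist", "mlp"),
    ("imagenet", "resnet"),
    ("maestro", "musictransformer"),
}

def parse_args(argv=None):
    """Parse --dataset/--model and assert the pair is in the supported scope."""
    parser = argparse.ArgumentParser(description="benchmark entry point (sketch)")
    parser.add_argument("--dataset", required=True)
    parser.add_argument("--model", required=True)
    args = parser.parse_args(argv)
    # Reject any (dataset, model) combination outside the supported scope.
    if (args.dataset, args.model) not in SUPPORTED:
        raise ValueError(f"Unsupported combination: {args.dataset}_{args.model}")
    return args

# The old single --setting name can still be derived as dataset_model:
args = parse_args(["--dataset", "maestro", "--model", "musictransformer"])
print(f"{args.dataset}_{args.model}")  # -> maestro_musictransformer
```

This keeps one entry point while the set of valid pairs stays explicit, which also matches the `dataset_model.py` file-naming convention mentioned above.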
Description
1. Motivation and Context
This PR adds `ImageNet` to the benchmark module `dattri_retrain`.

2. Summary of the change
Here is some usage of the new entry point `dattri_retrain`:

Train ImageNet with LDS

Train MNIST 10 with LOO
Usage Guide
3. What tests have been added/updated for the change?