[Closed] Karami-m closed this issue 9 months ago
I also tried it with `python -m train experiment=cifar/hyena-vit-cifar model.d_model=128`
to match the description in the Appendix of the Hyena paper. As a result, the accuracy improved to ~70%, but that is still far from the ~90% reported in the paper. Am I missing something in the config for this experiment?
Hi, thanks for the issue.
It looks like the config and code in this version are quite different from what was used to run CIFAR for the paper; @exnx might be able to help more with recreating the exact config.
One difference I can spot is the order of the filter: try setting `model.layer.filter_order=128`
in the config.
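For example, combining this with the `d_model` override from the command above, the full invocation would look something like the following (same Hydra overrides, just passed on the command line; the entry point is the `train` module already used above):

```shell
python -m train experiment=cifar/hyena-vit-cifar \
    model.d_model=128 \
    model.layer.filter_order=128
```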
Probably more importantly, I think the scheduler in the hyena-vit-cifar
config may not be set correctly, likely a copy-paste mistake. Try changing the values at the top from:
```yaml
# @package _global_
defaults:
  - /pipeline: cifar-2d
  - /model: vit
  - override /model/layer: hyena
  - override /scheduler: cosine_warmup_timm
```
to:
```yaml
# @package _global_
defaults:
  - /pipeline: cifar-2d
  - /model: vit
  - override /model/layer: hyena
  - override /scheduler: cosine_warmup

scheduler:
  num_training_steps: 100000
```
(following this config for the scheduler: https://github.com/HazyResearch/safari/blob/main/configs/experiment/cifar/s4-simple-cifar.yaml)
EDIT: never mind, I think this config is not supposed to be used. Eric will comment with a correction soon.
Hello!
So the current vit-cifar config was only meant for testing the pipeline; it's not meant for reproducing CIFAR results (e.g., testing the ViT pipeline on CIFAR is faster than running it on ImageNet).
In the Hyena paper, the CIFAR results are for a 2D version of Hyena (notably, not a ViT model).
We still need to port the 2D version over (for CIFAR), which we'll do at some point soon!
Thanks for your response. With the current setup, I got an accuracy of 87% using the following settings:
```yaml
scheduler:
  t_in_epochs: False
  t_initial: ${eval:${div_up:50000, ${train.global_batch_size}} * ${trainer.max_epochs}}
  warmup_lr_init: 1e-6
  warmup_t: 500
  lr_min: ${eval:0.1 * ${optimizer.lr}}

model:
  _name_: vit_b_16
  d_model: 128
```
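For reference, here is a rough Python sketch of how these timm-style values resolve into a per-step learning rate: `div_up` is ceiling division (mirroring the config's resolver), `t_initial` counts total optimizer steps (50,000 CIFAR training images divided by the global batch size, times the number of epochs), and the schedule does a linear warmup over `warmup_t` steps followed by cosine decay down to `lr_min`. The batch size, epoch count, and base learning rate below are hypothetical placeholders, and timm's actual `CosineLRScheduler` has more options (cycles, decay multipliers, etc.), so treat this only as an approximation of the shape of the schedule:

```python
import math

def div_up(a, b):
    """Ceiling division, mirroring the config's div_up resolver."""
    return -(-a // b)

def lr_at_step(step, t_initial, base_lr, lr_min, warmup_t, warmup_lr_init):
    """Linear warmup to base_lr, then cosine decay from base_lr to lr_min."""
    if step < warmup_t:
        # Linear ramp from warmup_lr_init up to base_lr.
        return warmup_lr_init + (base_lr - warmup_lr_init) * step / warmup_t
    # Fraction of the post-warmup schedule completed, in [0, 1].
    frac = (step - warmup_t) / max(1, t_initial - warmup_t)
    return lr_min + 0.5 * (base_lr - lr_min) * (1 + math.cos(math.pi * frac))

# Hypothetical values: global batch size 64, 100 epochs, base lr 1e-3.
t_initial = div_up(50_000, 64) * 100  # total optimizer steps over the run
```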
First of all, thanks for your great work and for maintaining this repository.
I have tried to reproduce Hyena's results on CIFAR, but the accuracy only reaches ~60% after 100 epochs. According to the Appendix, the model dimension is 128, which differs from
experiment/cifar/hyena-vit-cifar.yaml
. So I wonder whether this is the only setting in this config file that needs to be fixed to get the results reported in the paper (Acc = 91%)?