HazyResearch / hyena-dna

Official implementation for HyenaDNA, a long-range genomic foundation model built with Hyena
https://arxiv.org/abs/2306.15794
Apache License 2.0

Cuda out of memory for huggingface pre-trained model on A100-80GB #30

Closed AndreiBlindu closed 10 months ago

AndreiBlindu commented 10 months ago

I'm using the standalone_hyenadna.py script and loading the pre-trained weights of the large-1m model from Hugging Face, so that I have standalone code similar to the Colab. When fine-tuning and testing on the dummy_mouse_enhancers_ensembl dataset from Genomic Benchmark with a sequence max_length of 3400, I get a "CUDA out of memory" error on an A100 GPU.

As suggested in the readme, I tried modifying the downloaded config.json file found in checkpoints_path/hyenadna-large-1m-seqlen by setting these fields to True:

    checkpoint_mixer: True
    checkpoint_mlp: True

But now I'm getting the error:

    scratch_dict[key] = pretrained_dict[key_loaded]
    KeyError: 'model.backbone.layers.0.mixer.layer.in_proj.weight'

As suggested, I tried toggling these params on/off to find a working combination, but I either get this key error or the CUDA out of memory error.

It seems to me that since the pre-trained model loaded from Hugging Face was probably trained with those flags set to False, there is now a configuration mismatch. Am I missing something? How can I work with ultra-long sequences using the pre-trained models downloaded from Hugging Face without getting memory errors?

Thanks in advance for your response and for your valuable contribution to this research field!

exnx commented 10 months ago

The OOM error is the more relevant one; the other error is because of the checkpoint_mixer/checkpoint_mlp flags.

What's your batch size? For 1M, we use a batch size of 1 (per GPU) and then use the grad_accumulate_batch flag to adjust the effective batch size.
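For readers unfamiliar with the idea, here is a minimal sketch of gradient accumulation in plain PyTorch. The toy model and data are placeholders, not anything from this repo; in the actual codebase the same effect comes from the trainer's gradient-accumulation flag rather than a hand-written loop.

```python
# Minimal sketch of gradient accumulation: per-GPU batch size 1, effective batch size 32.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(128, 2)                       # stand-in for the fine-tuning head
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
accumulation_steps = 32                         # effective batch size = 1 * 32

optimizer.zero_grad()
for step in range(256):                         # stand-in for iterating a DataLoader
    x = torch.randn(1, 128)                     # one sample per step (batch size 1)
    y = torch.randint(0, 2, (1,))
    loss = F.cross_entropy(model(x), y)
    (loss / accumulation_steps).backward()      # scale so gradients average over the window
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                        # update once per effective batch
        optimizer.zero_grad()
```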

exnx commented 10 months ago

Oh, I reread your response and see you're using the 1M model on short sequences, which is highly inefficient. I would use the short-sequence models for short tasks. Then you can use much bigger batch sizes too.
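As a rough illustration, swapping in a shorter-context checkpoint might look like the snippet below. This follows the pattern used in the Colab; the argument names and order are assumptions here, so check HyenaDNAPreTrainedModel.from_pretrained in standalone_hyenadna.py for the actual signature and the readme for the exact checkpoint names.

```python
# Sketch (assumed interface): load a short-context checkpoint instead of the 1M model.
import torch
from standalone_hyenadna import HyenaDNAPreTrainedModel

device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = HyenaDNAPreTrainedModel.from_pretrained(
    './checkpoints',                 # local path where the weights are stored/downloaded
    'hyenadna-small-32k-seqlen',     # 32k context comfortably covers 3.4k-length sequences
    download=True,
    config=None,
    device=device,
    use_head=True,                   # classification head for fine-tuning
    n_classes=2,
)
model.to(device)
```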

AndreiBlindu commented 10 months ago

Thank you very much for your advice! That makes sense: I noticed I was using a batch size of 256, which is far too large.

However, using a batch size of 1 gives me the following error:

    ValueError: Expected input batch_size (1) to match target batch_size (0).

Setting the batch size to 2 or a bit higher solves the problem. What could be the cause?

You're right about using short-sequence models for short sequences, but we would like to test Hyena 1M in order to compare its performance against the other context sizes. Unfortunately, the Genomic Benchmark dataset contains only short sequences, and the same is true for most Nucleotide Transformer datasets. Can you suggest a suitable dataset on which we could test the larger versions of Hyena?

Our main goal is to reproduce the results you reported in the paper on the Genomic Benchmark dataset. Did you use the tiny-1k model? Could you provide the exact hyperparameters?

Thank you for your time.

exnx commented 10 months ago

We can help in general with using the codebase as is (with existing datasets), but we're going to need a lot more context: what dataset/task, the command used to launch, a wandb link (if available), etc.

If it's a custom dataset, then you'll need to get fairly intimate with the code and how things flow, e.g., by putting breakpoints everywhere. That'll be the most efficient path for you.

In general, start small and check each module individually, e.g., is my dataloader returning the exact shapes I expect? If not, add a breakpoint and check. Better yet, use the existing datasets and confirm that everything works the way you think it does.
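A sketch of that kind of shape check: iterate a few batches and verify the dataloader returns what you expect. The toy dataset below is a placeholder; swap in the Genomic Benchmark loader you are actually debugging.

```python
# Sanity-check dataloader output shapes before training.
import torch
from torch.utils.data import DataLoader, TensorDataset

toy_ds = TensorDataset(torch.randint(0, 4, (10, 3400)),   # [N, L] token ids
                       torch.randint(0, 2, (10,)))        # [N] class labels
train_loader = DataLoader(toy_ds, batch_size=2)           # placeholder for your real loader

for i, (seqs, labels) in enumerate(train_loader):
    print(f"batch {i}: seqs {tuple(seqs.shape)}, labels {tuple(labels.shape)}")
    # For sequence classification we expect seqs: [B, L] and labels: [B].
    assert seqs.shape[0] == labels.shape[0], "input/target batch sizes disagree"
    if i == 2:   # a few batches are enough to spot shape problems
        break
```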

Good luck!

exnx commented 10 months ago

Also, for using a giant context on short datasets, you'll want to use the masking functionality; otherwise it's not a fair comparison. Since we average embeddings, the short sequence will get drowned out by 1M random embeddings lol.

See the masking section here for how to use it.
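The idea behind the masking is roughly the following (a generic sketch, not the repo's exact code path): pool only over real token positions so that padding up to the long context does not dominate the averaged embedding.

```python
# Sketch of masked mean pooling over token embeddings.
import torch

def masked_mean_pool(embeddings: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """embeddings: [B, L, D]; mask: [B, L] with 1 for real tokens, 0 for padding."""
    mask = mask.unsqueeze(-1).type_as(embeddings)   # [B, L, 1]
    summed = (embeddings * mask).sum(dim=1)         # [B, D]; padded positions contribute 0
    counts = mask.sum(dim=1).clamp(min=1)           # [B, 1]; avoid division by zero
    return summed / counts

# With a ~3.4k-token sequence padded out to a 1M context, a plain mean would be dominated
# by padding embeddings; the masked mean uses only the real positions.
```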

AndreiBlindu commented 10 months ago

Thanks for the suggestion about the masking for larger models.

As a first step we want to reproduce your results on the Genomic Benchmark dataset reported in the paper. Thank you very much for all your advice!