Closed: upczxy closed this issue 1 year ago
Thanks for reporting it. You could try either lowering micro_batch_size (e.g. set micro_batch_size: 2 or micro_batch_size: 1 in the cfg.yaml config files) or raising the checkpointing level (checkpoint_level: 1 or checkpoint_level: 2 in the cfg.yaml config files).

Also, @upczxy, can you point us to the training code that you cannot run? We are happy to help.
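For illustration, the two suggestions above might look like this in cfg.yaml. This is only a sketch: the key names come from the comment, but the surrounding file layout and the comments on each key are assumptions, not verified against the repository.

```yaml
# Option 1: shrink the per-step micro batch to lower peak activation memory
micro_batch_size: 1   # or 2; smaller values use less GPU memory per step

# Option 2: trade compute for memory with activation checkpointing
checkpoint_level: 1   # or 2; higher levels recompute more activations
```

Either change alone may be enough on a 16 GB GPU; combining both reduces memory further at the cost of slower training.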
Thank you very much, the code can run now.
@upczxy Let me close the issue for now. Feel free to reopen if you find any problems.
Hello, in the paper you say the code can run on a 16 GB GPU when the batch size is under 4, but it still ran out of memory when I tested it. What should I do?