The great thing about the new update is that I can spin up a free Google Colab to run Ongo and Erlich with 2 batches, which is great. But having to iterate constantly, loading BERT, kl-f8, and the models each time, takes a while. Would there be any memory/performance gain if we added back `--num_batches`?
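For illustration, here is a minimal sketch of what I mean. All function names here are hypothetical placeholders, not the project's real API; the point is just that the expensive loading step happens once, and the batch loop reuses the already-loaded models:

```python
def load_models():
    # Placeholder for the expensive step: loading BERT, the kl-f8
    # autoencoder, and the diffusion model into memory.
    return {"bert": "bert", "kl_f8": "kl-f8", "model": "ongo"}

def generate_batch(models, batch_idx):
    # Placeholder for one sampling run that reuses the loaded models.
    return f"batch-{batch_idx} via {models['model']}"

def run(num_batches):
    models = load_models()  # paid once, not once per batch
    return [generate_batch(models, i) for i in range(num_batches)]

print(run(2))
```

With a `--num_batches`-style flag, the loop above would run inside a single process invocation, so the per-run loading cost is amortized across all batches instead of being paid on every iteration.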