sugatasanshiro opened 1 month ago
So sorry to hear that. Our model needs at least about 13 GB of memory. We are training a lightweight version right now.
oh good
duplicate of #15
I'm using a Tesla M60, which has two GPUs on one board with 8 GB of memory each. When I run the model, I still hit out-of-memory errors. How should I configure my setup to use both GPUs?
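Whether both GPUs get used depends on how this project loads its weights. As a minimal sketch, assuming the checkpoint loads through Hugging Face Transformers with the `accelerate` package installed, `device_map="auto"` will shard the layers across both 8 GB devices (`your-org/your-model` is a placeholder, not this repo's actual model id):

```python
# Sketch only: assumes a Hugging Face Transformers checkpoint and that
# `accelerate` is installed (pip install accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-model"  # placeholder, not this repo's real id

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 halves memory vs. fp32
    device_map="auto",          # shards layers across cuda:0 and cuda:1
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

Given the ~13 GB figure quoted above, fp16 weights sharded across 2 x 8 GB should fit, with some headroom left on each device for activations.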
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True

This enables PyTorch's expandable-segments allocator (PyTorch 2.0+), which lets the CUDA caching allocator grow existing memory segments instead of requesting new ones, reducing fragmentation-related out-of-memory errors.
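The same option can also be set from Python instead of the shell, as long as it happens before torch initializes CUDA; a minimal sketch:

```python
# Set the allocator option before importing torch so it takes effect
# (assumes PyTorch >= 2.0, where expandable_segments is supported).
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # must come after the environment variable is set
print(torch.cuda.is_available())
```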
An RTX 3060 12GB works.