wangd12rpi opened this issue 3 days ago
# peak GPU memory allocated, reported in whole GiB (integer division floors the value)
torch.cuda.max_memory_allocated() // 1024 // 1024 // 1024
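For context, a minimal sketch of how this readout could be wrapped around a single evaluation pass; `run_fiqa_eval` is a hypothetical placeholder, not a function from this repo, and the counter is reset first so the peak reflects only that pass:

```python
import torch

def measure_peak_gib(fn):
    """Run fn() and return the peak GPU memory allocated during it, in whole GiB."""
    torch.cuda.reset_peak_memory_stats()  # clear the previous peak
    fn()
    # max_memory_allocated() reports bytes; integer division floors down to GiB
    return torch.cuda.max_memory_allocated() // 1024 // 1024 // 1024

# Hypothetical usage (run_fiqa_eval is assumed, not part of this repo):
# peak_gib = measure_peak_gib(lambda: run_fiqa_eval(model, dataset))
```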
Still need to add the mamba check and modify the other files; only integrated in fiqa right now (11/19).