Closed MZhao-ouo closed 2 years ago
I have roughly the same setup (16G MEM and 2G SWAP) as yours and hit the same "killed" problem. It was solved once I set batch_size = 512.
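The fix above works because training memory scales roughly linearly with batch size: each ray in the batch carries its samples and the intermediate activations kept for backpropagation. A minimal back-of-envelope sketch, where the samples-per-ray and floats-per-sample numbers are illustrative assumptions rather than values from this repo:

```python
def approx_batch_bytes(batch_size, samples_per_ray=128, floats_per_sample=256):
    """Rough estimate of activation memory that scales with batch size.

    samples_per_ray and floats_per_sample are assumed placeholder values;
    4 bytes per float32. Real usage also includes model weights and the
    dataset itself, which do not shrink with the batch.
    """
    return batch_size * samples_per_ray * floats_per_sample * 4

# Halving the batch roughly halves the activation memory:
print(approx_batch_bytes(4096))  # baseline
print(approx_batch_bytes(512))   # the reduced batch that avoided the OOM kill
```

This is why shrinking batch_size can stop the kernel's OOM killer from terminating the run, at the cost of more steps per epoch.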
Thanks. I upgraded MEM to 32G and it now runs successfully, occupying 31.5G MEM during training.
But I observed a weird phenomenon: when I run ./scripts/train_multiblender.sh
on a 64G MEM computer, the program occupies up to 55G MEM. It's bizarre!
Anyway, thank you for your method and I will close this issue.
I use the following command to train on multiscale datasets, but get a "killed" output.
I have generated the multiscale datasets and set the correct path in
./scripts/train_multiblender.sh
. It works well with ./scripts/train_blender.sh
on the original datasets. My computer has 16G MEM and 4G SWAP, and I'd like to know the minimum requirements.
Thanks.