Open — nartes opened this issue 4 years ago
Memory is not being deallocated, since no destructor is defined: https://github.com/NVIDIA/nv-wavenet/blob/03f69576f6c6b984340c1ddef2288e3f7d1102ca/matrix.h#L39
The only place it is used in the PyTorch bindings: https://github.com/NVIDIA/nv-wavenet/blob/03f69576f6c6b984340c1ddef2288e3f7d1102ca/pytorch/wavenet_infer.cu#L92
Softmax sampling should not leak that much memory, though.
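To illustrate the kind of fix being suggested: a minimal, hypothetical sketch (not the actual nv-wavenet `Matrix` class) of a matrix that allocates a buffer in its constructor. Without a destructor, every instance created during inference leaks its buffer; adding one, and deleting the unsafe default copies, makes the allocation RAII-managed. The member names and sizes here are assumptions for illustration only.

```cpp
#include <cstddef>

// Hypothetical sketch of the leak pattern: a class whose constructor
// allocates a raw buffer must release it in a destructor, or every
// instance created during inference leaks that buffer.
class Matrix {
public:
    Matrix(std::size_t rows, std::size_t cols)
        : m_rows(rows), m_cols(cols), m_data(new float[rows * cols]()) {}

    // The missing piece: free the allocation when the object dies.
    ~Matrix() { delete[] m_data; }

    // Rule of three: the raw pointer makes default copies unsafe,
    // so forbid them rather than double-free.
    Matrix(const Matrix&) = delete;
    Matrix& operator=(const Matrix&) = delete;

    float* data() { return m_data; }
    std::size_t size() const { return m_rows * m_cols; }

private:
    std::size_t m_rows, m_cols;
    float* m_data;
};
```

In the actual code the buffer may live on the GPU (allocated with `cudaMalloc`), in which case the destructor would call `cudaFree` instead of `delete[]`, but the RAII principle is the same.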
Were you able to solve this issue? I am unable to perform inference with even a batch size of 1 as my GPU runs out of memory.