NVIDIA / nv-wavenet

Reference implementation of real-time autoregressive wavenet inference
BSD 3-Clause "New" or "Revised" License

Matrix doesn't deallocate memory #99

Open nartes opened 4 years ago

nartes commented 4 years ago

Memory is not being deallocated, since no destructor is defined: https://github.com/NVIDIA/nv-wavenet/blob/03f69576f6c6b984340c1ddef2288e3f7d1102ca/matrix.h#L39

The only place it is used in the PyTorch bindings: https://github.com/NVIDIA/nv-wavenet/blob/03f69576f6c6b984340c1ddef2288e3f7d1102ca/pytorch/wavenet_infer.cu#L92

Softmax sampling should not leak that much memory, though.
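
For illustration, a minimal sketch of what a fix could look like, assuming the `Matrix` class allocates its buffers with `cudaMalloc`/`malloc` in the constructor. The member names (`m_hostData`, `m_deviceData`, `m_onDevice`) are hypothetical and not the actual fields in `matrix.h`; the point is only that a destructor (and deleted copy operations) would release the allocations when objects go out of scope, e.g. per call in `pytorch/wavenet_infer.cu`:

```cpp
// Hypothetical sketch -- member names are assumptions, not the real matrix.h fields.
#include <cuda_runtime.h>
#include <cstdlib>

class Matrix {
 public:
    Matrix(int rows, int cols, bool onDevice)
        : m_rows(rows), m_cols(cols), m_onDevice(onDevice) {
        size_t bytes = static_cast<size_t>(rows) * cols * sizeof(float);
        if (m_onDevice) {
            cudaMalloc(&m_deviceData, bytes);                 // GPU buffer
        } else {
            m_hostData = static_cast<float*>(malloc(bytes));  // host buffer
        }
    }

    // The missing piece: free whatever the constructor allocated.
    ~Matrix() {
        if (m_onDevice && m_deviceData) cudaFree(m_deviceData);
        if (!m_onDevice && m_hostData)  free(m_hostData);
    }

    // Disable copying so two objects never free the same pointer.
    Matrix(const Matrix&) = delete;
    Matrix& operator=(const Matrix&) = delete;

 private:
    int    m_rows = 0, m_cols = 0;
    bool   m_onDevice = false;
    float* m_hostData = nullptr;
    float* m_deviceData = nullptr;
};
```

Without something like this, every `Matrix` constructed during inference leaks its buffer, which would explain GPU memory growing across calls.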

m-k-S commented 4 years ago

Were you able to solve this issue? I am unable to perform inference with even a batch size of 1 as my GPU runs out of memory.