pytti-tools / pytti-core

https://pytti-tools.github.io/pytti-book/intro.html
MIT License

modify GMA loading code to not use DataParallel (by default) #195

Closed dmarx closed 2 years ago

dmarx commented 2 years ago

In `OpticalFlowLoss.init_GMA`, `RAFTGMA` is initialized wrapped in `torch.nn.DataParallel`. I'm pretty sure the only thing this accomplishes is providing a target for the `"module."` prefix in the checkpoint's parameter key names. I suspect that nuances of `DataParallel` that are not respected in the pytti code base are the cause of memory leaks (like the one crashing the discord bot) and other OOM issues that manifest as if memory is being filled and never released, even in the presence of errors, unless the kernel is reset.
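For reference, a minimal sketch of the alternative: instead of wrapping the model in `DataParallel` just so the checkpoint keys line up, the `"module."` prefix (which `DataParallel` prepends to every parameter name when saving) can be stripped from the state dict before loading into a bare model. The helper name below is hypothetical, not part of the pytti code base:

```python
def strip_dataparallel_prefix(state_dict):
    """Remove the 'module.' prefix that torch.nn.DataParallel adds to
    parameter names, so a checkpoint saved from a DataParallel-wrapped
    model can be loaded into an unwrapped one."""
    prefix = "module."
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Example: keys as they appear in a DataParallel-saved checkpoint.
ckpt = {"module.fnet.conv1.weight": 0, "module.update_block.bias": 1}
print(strip_dataparallel_prefix(ckpt))
# {'fnet.conv1.weight': 0, 'update_block.bias': 1}
```

With that in place, loading would look something like `model = RAFTGMA(args); model.load_state_dict(strip_dataparallel_prefix(torch.load(path)))` rather than constructing a `DataParallel` wrapper that is never used for actual multi-GPU dispatch (exact call sites in pytti may differ; this is a sketch of the approach, not the committed fix).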