nik13 opened 2 years ago
Hi @nik13, can you provide details on the profiling? Note, the PlayTorch SDK technology stack is different from tfjs. PlayTorch uses jsi.h, which supports C++ smart pointers and destructors (if needed). However, it relies on the JavaScript GC to clean up unused objects and memory.
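To illustrate the GC-driven cleanup model described above, here is a minimal plain-Node sketch (not PlayTorch API) using `FinalizationRegistry`. A JSI host object's C++ destructor behaves analogously: it only runs when the engine eventually collects the wrapping JS object, so there is no deterministic point at which native memory is freed.

```javascript
// Illustration only: GC-driven cleanup, as seen from JS.
// The callback runs at some unspecified time AFTER the registered
// object becomes unreachable and is actually collected.
const registry = new FinalizationRegistry((label) => {
  console.log(`native buffer for ${label} may now be freed`);
});

// Hypothetical stand-in for a native-backed tensor (~4 MB of data).
function makeFakeTensor(label) {
  const tensor = { data: new Float32Array(1_000_000) };
  registry.register(tensor, label);
  return tensor;
}

let t = makeFakeTensor('tensor-0');
t = null; // now unreachable, but the memory stays until the GC actually runs
```

This is why memory can keep climbing under continuous inference even without a true leak: the GC may simply not run often enough to keep up.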
That said, it would help us to reproduce any possible memory leak if you could provide profiling steps.
Thank you!
Hi @raedle. I agree with @nik13. In my case, memory usage increases by about 200 MB per hour while inference runs, and it grows gradually, so a profile is probably not enough to resolve this issue. Also, repeatedly calling getImageData increases memory usage by about 100 MB per hour. After about 5 hours, an OutOfMemoryError comes up. I hope this issue will be resolved.
The memory leak was not as bad when the Native Module functions existed. If it is difficult to deal with this issue, could you please consider keeping the Native Module functions (the live.spec.json version)?
Thank you.
@SomaKishimoto, is this happening on Android, iOS, or both platforms? Please share code to reproduce the issue.
Thanks @raedle, it's on iOS. I haven't tested enough on Android. We followed the DETR example and do continuous rendering with onFrame instead of onCapture; the model I actually use is a YOLO model instead of DETR.
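For continuous onFrame inference like the setup above, one mitigation worth trying is releasing each frame's image explicitly rather than waiting for the GC. This is a hedged sketch: it assumes the camera hands onFrame an object with an async `release()` method (as PlayTorch's Image does); `runOnFrame` and `detect` are illustrative names, not library API.

```javascript
// Wrap per-frame work so the frame image is always released,
// even if inference throws.
async function runOnFrame(image, detect) {
  try {
    return await detect(image); // e.g. preprocess + model.forward(...)
  } finally {
    // Release explicitly instead of relying on the JS GC; skipping this
    // is a common cause of steadily growing memory in continuous inference.
    await image.release();
  }
}
```

In a DETR-style example this would be invoked from the camera callback, e.g. `<Camera onFrame={(image) => runOnFrame(image, detect)} />`.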
Version
0.2.1
Problem Area
react-native-pytorch-core (core package)
Steps to Reproduce
Hi, there is no method to release Tensors or Models from memory. The memory heap keeps increasing drastically when new tensors are created by a function call; they are never freed. The same applies to models.
TensorFlow.js also provides a .dispose() method to delete either a tensor or a model from memory.
Not being able to do this makes the library hard to use for more than a few inferences.
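For reference, the kind of explicit-disposal API being requested can be sketched generically. This is modeled on TensorFlow.js's `tensor.dispose()` and `tf.tidy()`; the names here (`FakeTensor`, `track`, `tidy`) are illustrative and not part of react-native-pytorch-core.

```javascript
// Minimal sketch of an explicit-dispose + scope ("tidy") pattern.
class FakeTensor {
  constructor(size) {
    this.data = new Float32Array(size);
    this.disposed = false;
  }
  dispose() {
    // A native-backed implementation would free the C++ buffer here,
    // instead of waiting for the JS GC.
    this.data = null;
    this.disposed = true;
  }
}

// Registry of live tensors; track() records each tensor created.
const live = new Set();
function track(tensor) { live.add(tensor); return tensor; }

// tidy(fn): dispose every tensor created inside fn except the one it returns.
function tidy(fn) {
  const before = new Set(live);
  const result = fn();
  for (const t of live) {
    if (!before.has(t) && t !== result) {
      t.dispose();
      live.delete(t);
    }
  }
  return result;
}

// Example: intermediates are disposed, the returned tensor survives.
const out = tidy(() => {
  const a = track(new FakeTensor(1000)); // disposed at end of tidy
  const b = track(new FakeTensor(1000)); // disposed at end of tidy
  return track(new FakeTensor(1000));    // kept
});
```

The tidy-style scope is what makes per-frame inference practical in tfjs: intermediates from each frame are freed deterministically instead of piling up until a GC pass.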
Expected Results
No response
Code example, screenshot, or link to repository
No response