Open openSourcerer9000 opened 3 months ago
Hi, @openSourcerer9000. I'm Dosu, and I'm helping the LangChain team manage their backlog. I'm marking this issue as stale.
Issue Summary:
OpenAIWhisperParserLocal model not unloading from VRAM in the LangChain library.

Next Steps:
Thank you for your understanding and contribution!
Checked other resources
Example Code
Error Message and Stack Trace (if applicable)
No response
Description
The protocol for unloading the Whisper model from memory is detailed here: https://github.com/openai/whisper/discussions/1313#discussioncomment-5813140
However, the Python LangChain wrapper for Whisper doesn't release the model when the object is deleted, so those steps have no effect when using LangChain: the model stays in VRAM until the Python process exits. This makes it impossible to build long-running apps, since the model cannot be unloaded while the app is running to free VRAM for other models or processes. This is a serious flaw and makes the library a nonstarter for many of its intended use cases.
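For context, the linked discussion's unload protocol relies on Python's reference counting: once every reference to the model is dropped and garbage collection runs, the model object dies (and `torch.cuda.empty_cache()` can then return the freed VRAM to the GPU). The sketch below illustrates that mechanism with hypothetical stand-in classes (`FakeModel`, `ParserHoldingModel` are not real LangChain names), and shows why a wrapper that silently retains an extra reference defeats `del`:

```python
import gc
import weakref

class FakeModel:
    """Stand-in for the loaded Whisper model (hypothetical)."""
    pass

class ParserHoldingModel:
    """Mimics a wrapper that keeps the model as an instance attribute."""
    def __init__(self):
        self.model = FakeModel()

parser = ParserHoldingModel()
# A weakref lets us observe whether the model was actually collected
# without keeping it alive ourselves.
model_ref = weakref.ref(parser.model)

# Dropping the only strong reference and forcing a collection frees
# the model. With the real library you would follow this with
# torch.cuda.empty_cache() to release the VRAM back to the GPU.
del parser
gc.collect()
assert model_ref() is None  # model was collected

# If the wrapper (or a module-level cache) held a second reference,
# model_ref() would still return the live object after `del`, which
# is the failure mode described in this issue.
```

An explicit `unload()` or `close()` method on the parser that drops its model reference and calls `gc.collect()` / `torch.cuda.empty_cache()` would be one way to expose this, assuming no other internal cache pins the model.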
Is there any mechanism exposed that could unload this model from memory?
System Info
langchain 0.2.12, Windows