Huanshere / VideoLingo

Netflix-level subtitle cutting, translation, alignment, and even dubbing - one-click fully automated AI video subtitle team
https://docs.videolingo.io
Apache License 2.0
7.8k stars, 750 forks

Can translation with ollama work? #121

Closed · 602387193c closed this 1 month ago

602387193c commented 1 month ago

This is a tip I picked up from that video. I've installed ollama; could the author add support for ollama? The workflow from the video: Ollama + Gemma2:9b, a local open-source LLM, exposing an OpenAI-compatible API as the local translation engine. [screenshot: QQ20241007-121628]
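For context, ollama does expose an OpenAI-compatible endpoint at http://localhost:11434/v1, so any OpenAI-style client can be pointed at it. A minimal sketch of that idea (the model name and prompt are placeholders, and this is not VideoLingo's actual integration code):

```python
# Minimal sketch: calling ollama through its OpenAI-compatible API.
# Assumes ollama is running locally and `gemma2:9b` has already been pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # any non-empty string; ollama ignores it
)

resp = client.chat.completions.create(
    model="gemma2:9b",
    messages=[{"role": "user", "content": "Translate into Chinese: Hello, world."}],
)
print(resp.choices[0].message.content)
```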

602387193c commented 1 month ago

Sorry, it doesn't seem to work. Partway through the run, this error appears:

File "C:\Users\c6023\anaconda3\envs\videolingo\lib\site-packages\streamlit\runtime\scriptrunner\exec_code.py", line 88, in exec_func_with_error_handling result = func() File "C:\Users\c6023\anaconda3\envs\videolingo\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 590, in code_to_exec exec(code, module.dict) File "D:\AI\VideoLingo\st.py", line 126, in main() File "D:\AI\VideoLingo\st.py", line 122, in main text_processing_section() File "D:\AI\VideoLingo\st.py", line 35, in text_processing_section process_text() File "D:\AI\VideoLingo\st.py", line 60, in process_text step4_2_translate_all.translate_all() File "D:\AI\VideoLingo\core\step4_2_translate_all.py", line 83, in translate_all results.append(future.result()) File "C:\Users\c6023\anaconda3\envs\videolingo\lib\concurrent\futures_base.py", line 438, in result return self.get_result() File "C:\Users\c6023\anaconda3\envs\videolingo\lib\concurrent\futures_base.py", line 390, in get_result raise self._exception File "C:\Users\c6023\anaconda3\envs\videolingo\lib\concurrent\futures\thread.py", line 52, in run result = self.fn(*self.args, **self.kwargs) File "D:\AI\VideoLingo\core\step4_2_translate_all.py", line 44, in translate_chunk things_to_note_prompt = search_things_to_note_in_prompt(chunk) File "D:\AI\VideoLingo\core\step4_1_summarize.py", line 20, in search_things_to_note_in_prompt prompt = '\n'.join( File "D:\AI\VideoLingo\core\step4_1_summarize.py", line 22, in f' meaning: {term["explanation"]}'

TITC commented 1 month ago

ollama does have OpenAI compatibility support. But from my attempts over the past two days, even on a single A100, qwen2.5:7b with a 20k context could not successfully get through a 5-plus-minute YouTube video: the context length is not enough, yet VRAM usage was already close to 80 GB.
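For anyone trying to reproduce this: ollama's default context window is much smaller than 20k, so a larger num_ctx has to be requested explicitly, and at that context length it is mostly the KV cache that drives the VRAM growth described above. A sketch of how that request can look with the ollama Python client (the 20480 value simply mirrors the ~20k context mentioned here, not a recommendation):

```python
# Sketch: asking ollama for a larger context window via its Python client
# (pip install ollama). The num_ctx option sizes the context / KV cache,
# which is where most of the extra VRAM goes at long contexts.
import ollama

resp = ollama.chat(
    model="qwen2.5:7b",
    messages=[{"role": "user", "content": "Translate this subtitle chunk ..."}],
    options={"num_ctx": 20480},  # roughly the 20k-token context from the report above
)
print(resp["message"]["content"])  # newer ollama-python also allows resp.message.content
```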

602387193c commented 1 month ago

I see. So a hosted API is still the way to go; the main problem is the context limit, and an ordinary PC just can't run it.

Huanshere commented 1 month ago

"能用别的模型吗? ✅ 支持 OAI-Like 的 API 接口,需要自行在 streamlit 侧边栏更换。 ⚠️ 但其他模型(尤其是小模型)遵循指令要求能力弱,非常容易在翻译过程报错,强烈不推荐。"

It's mentioned in the documentation~