v3ucn / Modelscope_Faster_Whisper_Multi_Subtitle

Generate bilingual subtitles with one click based on Faster-whisper and modelscope. A bilingual subtitle generator built on offline large models.
MIT License

Whether I extract vocals or transcribe subtitles directly, I get an error; traceback below #7

Closed zoushenhh closed 7 months ago

zoushenhh commented 8 months ago

To create a public link, set `share=True` in `launch()`.

```
Traceback (most recent call last):
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\utils.py", line 252, in make_srt
    model = WhisperModel(model_name, device="cuda", compute_type="float16",download_root="./model_from_whisper",local_files_only=False)
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\venv\lib\site-packages\faster_whisper\transcribe.py", line 130, in __init__
    self.model = ctranslate2.models.Whisper(
ValueError: Requested float16 compute type, but the target device or backend do not support efficient float16 computation.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\venv\lib\site-packages\gradio\routes.py", line 534, in predict
    output = await route_utils.call_process_api(
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\venv\lib\site-packages\gradio\route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\venv\lib\site-packages\gradio\blocks.py", line 1550, in process_api
    result = await self.call_function(
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\venv\lib\site-packages\gradio\blocks.py", line 1185, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\venv\lib\site-packages\gradio\utils.py", line 661, in wrapper
    response = f(*args, **kwargs)
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\app.py", line 28, in do_trans_video
    srt_text = make_srt(video_path,model_type)
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\utils.py", line 254, in make_srt
    model = WhisperModel(model_name, device="cuda", compute_type="int8_float16",download_root="./model_from_whisper",local_files_only=False)
  File "G:\FunAsr_Faster_Whisper_Multi_Subs\venv\lib\site-packages\faster_whisper\transcribe.py", line 130, in __init__
    self.model = ctranslate2.models.Whisper(
ValueError: Requested int8_float16 compute type, but the target device or backend do not support efficient int8_float16 computation.
```
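For context: the traceback shows `make_srt` requesting `compute_type="float16"` first (utils.py line 252) and falling back to `"int8_float16"` (line 254), but both require GPU kernels that older cards lack. A minimal sketch of a broader fallback, assuming only `faster-whisper` and `ctranslate2` as in the project's venv (the helper name `load_whisper` is mine, not the project's):

```python
import ctranslate2
from faster_whisper import WhisperModel

def load_whisper(model_name: str, device: str = "cuda") -> WhisperModel:
    # Ask CTranslate2 which compute types this device actually supports,
    # then take the first match from a preference-ordered list.
    supported = ctranslate2.get_supported_compute_types(device)
    for compute_type in ("float16", "int8_float16", "int8", "float32"):
        if compute_type in supported:
            return WhisperModel(
                model_name,
                device=device,
                compute_type=compute_type,
                download_root="./model_from_whisper",
                local_files_only=False,
            )
    raise RuntimeError(f"No usable compute type on {device}: {supported}")
```

On a Pascal-era card this would typically end up selecting `int8` or `float32`, which is slower than fp16 but avoids the `ValueError` above.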

hyhuc0079 commented 8 months ago

It looks like faster-whisper actually loads the original model with quantization, but your GPU doesn't support the int8_float16 quantization setting. What GPU do you have?
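You can verify this directly: `ctranslate2` exposes the compute types a device supports (a quick diagnostic, run inside the project's venv):

```python
import ctranslate2

# On a GPU without efficient fp16 kernels the result will not include
# "float16" or "int8_float16", which is exactly what the error reports.
print(ctranslate2.get_supported_compute_types("cuda"))
```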

hyhuc0079 commented 8 months ago

By the way, faster-whisper doesn't support CUDA 12 yet. Did you install a CUDA 12 build of PyTorch?

zoushenhh commented 8 months ago

My card is a 1066 (GTX 1060 6GB). I don't know how to check the CUDA version; let me look it up.
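For reference, a quick way to check which CUDA build of PyTorch is installed (a sketch assuming the project's venv is activated first):

```python
import torch

print(torch.__version__)          # e.g. "2.1.2+cu118" means a CUDA 11.8 build
print(torch.version.cuda)         # CUDA version PyTorch was compiled against
print(torch.cuda.is_available())  # True if the driver and GPU are usable
```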

zoushenhh commented 8 months ago

`python.exe: Error while finding module specification for 'torch.version' (ModuleNotFoundError: No module named 'torch')` It can't be that it's not installed, can it...

hyhuc0079 commented 8 months ago

Did you download the one-click bundle, or did you clone it with git yourself? That error means PyTorch isn't installed.

zoushenhh commented 8 months ago

> Did you download the one-click bundle, or did you clone it with git yourself? That error means PyTorch isn't installed.

The one-click bundle...

hyhuc0079 commented 8 months ago

Then you probably didn't activate its venv and ran it with your own local Python instead.
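For anyone else hitting this: activate the bundled venv (or invoke its interpreter directly) before launching. On Windows cmd, with the paths from the traceback above, that would look something like:

```
cd G:\FunAsr_Faster_Whisper_Multi_Subs
venv\Scripts\activate
python app.py
```

Or in one step without activating: `venv\Scripts\python.exe app.py`.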