Open eyewee opened 5 days ago
Did you try selecting a smaller memory setting in the Gradio advanced options?
Could you show me how you run it in CPU mode?
Thank you in advance,
Ubirajara
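(Regarding running in CPU mode: eyewee's report below mentions a `--cpu_mode` flag, so `python app_rvc.py --cpu_mode` should be the intended way. Another option, if a flag were unavailable, is to hide the GPU from CUDA-aware libraries before they are imported. A minimal sketch, assuming it runs before `torch`/`ctranslate2` are loaded:)

```python
import os

# Hide all GPUs from CUDA-aware libraries (torch, ctranslate2, ...).
# This must run BEFORE those libraries are imported, or it has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# After this, e.g. torch.cuda.is_available() would report False and
# the app would fall back to CPU inference.
print(os.environ["CUDA_VISIBLE_DEVICES"])
```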
On Sat., Sep. 14, 2024, 06:01, eyewee @.***> wrote:
Hello,
Currently it works fine with `--cpu_mode` but throws `RuntimeError: CUDA failed with error out of memory` in standard mode. I am unable to use it with the GPU (RTX 3070 Ti).
```
Suppressing numeral and symbol tokens
Traceback (most recent call last):
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/gradio/route_utils.py", line 235, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/gradio/blocks.py", line 1627, in process_api
    result = await self.call_function(
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/gradio/blocks.py", line 1173, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/gradio/utils.py", line 690, in wrapper
    response = f(*args, **kwargs)
  File "/home/barbarossa/SoniTranslate/app_rvc.py", line 355, in batch_multilingual_media_conversion
    return self.multilingual_media_conversion(
  File "/home/barbarossa/SoniTranslate/app_rvc.py", line 708, in multilingual_media_conversion
    audio, self.result = transcribe_speech(
  File "/home/barbarossa/SoniTranslate/soni_translate/speech_segmentation.py", line 240, in transcribe_speech
    result = model.transcribe(
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/whisperx/asr.py", line 219, in transcribe
    for idx, out in enumerate(self.__call__(data(audio, vad_segments), batch_size=batch_size, num_workers=num_workers)):
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__
    item = next(self.iterator)
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__
    processed = self.infer(item, **self.params)
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1112, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/whisperx/asr.py", line 152, in _forward
    outputs = self.model.generate_segment_batched(model_inputs['inputs'], self.tokenizer, self.options)
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/whisperx/asr.py", line 47, in generate_segment_batched
    encoder_output = self.encode(features)
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/whisperx/asr.py", line 86, in encode
    return self.model.encode(features, to_cpu=to_cpu)
RuntimeError: cuDNN failed with status CUDNN_STATUS_NOT_INITIALIZED
[INFO] >> Content in 'outputs' removed.
[INFO] >> Transcribing...
```

On a later run it fails earlier, while loading the model:

```
Traceback (most recent call last):
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/gradio/queueing.py", line 495, in call_prediction
    output = await route_utils.call_process_api(
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/gradio/route_utils.py", line 235, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/gradio/blocks.py", line 1627, in process_api
    result = await self.call_function(
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/gradio/blocks.py", line 1173, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/gradio/utils.py", line 690, in wrapper
    response = f(*args, **kwargs)
  File "/home/barbarossa/SoniTranslate/app_rvc.py", line 355, in batch_multilingual_media_conversion
    return self.multilingual_media_conversion(
  File "/home/barbarossa/SoniTranslate/app_rvc.py", line 708, in multilingual_media_conversion
    audio, self.result = transcribe_speech(
  File "/home/barbarossa/SoniTranslate/soni_translate/speech_segmentation.py", line 231, in transcribe_speech
    model = whisperx.load_model(
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/whisperx/asr.py", line 291, in load_model
    model = model or WhisperModel(whisper_arch,
  File "/home/barbarossa/miniconda3/envs/sonitr/lib/python3.10/site-packages/faster_whisper/transcribe.py", line 131, in __init__
    self.model = ctranslate2.models.Whisper(
```
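For the out-of-memory half of the problem: the traceback passes through `whisperx.asr.transcribe` with a `batch_size` argument, and `whisperx.load_model` accepts a ctranslate2 `compute_type`, so smaller values for both are the usual lever on an 8 GB card. A pure-Python sketch of a conservative settings picker (the helper name and thresholds are my own assumptions, not SoniTranslate code):

```python
def pick_asr_settings(vram_gb: float) -> dict:
    """Hypothetical helper: conservative WhisperX settings for a given VRAM budget.

    compute_type values are real ctranslate2 quantization modes; the
    thresholds are rough rules of thumb, not measured limits.
    """
    if vram_gb >= 16:
        return {"compute_type": "float16", "batch_size": 16}
    if vram_gb >= 8:
        # e.g. an RTX 3070 Ti (8 GB): quantize weights and shrink the batch
        return {"compute_type": "int8_float16", "batch_size": 4}
    return {"compute_type": "int8", "batch_size": 1}

# An RTX 3070 Ti has 8 GB of VRAM:
print(pick_asr_settings(8))  # → {'compute_type': 'int8_float16', 'batch_size': 4}
```

These values would then be passed as `whisperx.load_model(..., compute_type=...)` and `model.transcribe(..., batch_size=...)`.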
CUDA drivers are installed:
```
$ sudo apt-get -y install cudnn-cuda-12
Note, selecting 'cudnn9-cuda-12' instead of 'cudnn-cuda-12'
cudnn9-cuda-12 is already the newest version (9.4.0.58-1).
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.

$ sudo apt-get -y install cudnn
cudnn is already the newest version (9.4.0-1).

$ sudo apt install nvidia-cudnn
nvidia-cudnn is already the newest version (8.2.4.15~cuda11.4).
```
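Note that the apt output above shows two cuDNN majors side by side: cuDNN 9 built for CUDA 12, and `nvidia-cudnn` 8.2 built for CUDA 11.4. ctranslate2 (which faster-whisper/whisperx use) links against one specific cuDNN major, so a major-version mismatch can surface exactly as `CUDNN_STATUS_NOT_INITIALIZED`. A stdlib-only diagnostic (my own sketch, not part of SoniTranslate) to see which majors the dynamic loader can actually resolve:

```python
import ctypes

def resolvable_cudnn_majors(majors=(8, 9)):
    """Return {major: bool} for whether libcudnn.so.<major> can be dlopen'ed."""
    found = {}
    for major in majors:
        try:
            ctypes.CDLL(f"libcudnn.so.{major}")
            found[major] = True
        except OSError:
            found[major] = False
    return found

print(resolvable_cudnn_majors())
```

If the major your ctranslate2 build expects comes back `False`, the library exists on disk but is not on the loader's search path (or not installed for that CUDA version).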
Any ideas how to fix this?
NO! It failed! Tagalog is in the code that R3gm set up, and now he has to update SoniTranslate by adding Esperanto and any other new languages for translation, but how can we with CUDA failing? R3gm, please find a way to save SoniTranslate on Hugging Face!
This topic is not about Esperanto or Filipino. Please do not post comments like these again. You are derailing this great project.
Hello,
When installed locally on Ubuntu under WSL on Windows 11, it currently works fine with `--cpu_mode` but throws `RuntimeError: CUDA failed with error out of memory` in standard mode. I am unable to use it with the GPU (RTX 3070 Ti).
CUDA drivers are installed:
Any ideas how to fix this?
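One fix that often helps with `CUDNN_STATUS_NOT_INITIALIZED` under ctranslate2 is to let the loader find the cuDNN/cuBLAS libraries bundled with the NVIDIA pip wheels instead of the mismatched system packages. This assumes the `nvidia-cudnn-cu12` and `nvidia-cublas-cu12` pip wheels are installed in the active conda environment (install them with pip if not); a sketch, run before launching the app:

```shell
# Point the dynamic loader at the cuDNN/cuBLAS libs shipped inside the
# nvidia-* pip wheels of the current Python environment (assumed installed).
export LD_LIBRARY_PATH="$(python -c 'import os, nvidia.cublas.lib, nvidia.cudnn.lib; \
print(os.path.dirname(nvidia.cublas.lib.__file__) + ":" + os.path.dirname(nvidia.cudnn.lib.__file__))')"

python app_rvc.py
```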