JarodMica / audiosplitter_whisper

MIT License
91 stars · 35 forks

[Errno 2] No such file or directory #22

Open bobcat7080 opened 6 months ago

bobcat7080 commented 6 months ago

I am running into these errors and I'm not sure why:

```
Exception has occurred: FileNotFoundError
[Errno 2] No such file or directory: 'C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\data\output\1.srt'
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\split_audio.py", line 96, in extract_audio_with_srt
    subs = pysrt.open(srt_file)
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\split_audio.py", line 150, in process_audio_files
    extract_audio_with_srt(audio_file_path, srt_file, speaker_segments_dir)
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\split_audio.py", line 180, in main
    process_audio_files(input_folder, settings)
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\split_audio.py", line 183, in <module>
    main()
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\data\output\1.srt'
```
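For what it's worth, this `FileNotFoundError` looks like a downstream symptom: `pysrt.open()` fails because the `.srt` file was never written, which means the transcription step before it had already failed. A minimal sketch of a guard that would surface the real problem earlier (`load_srt_or_explain` is a hypothetical helper, not code from this repo):

```python
import os

def load_srt_or_explain(srt_file):
    # pysrt.open() raises a bare FileNotFoundError when transcription
    # never produced the .srt; fail early with a clearer message that
    # points at the actual culprit (the whisperx transcription step).
    if not os.path.exists(srt_file):
        raise RuntimeError(
            f"Expected subtitle file {srt_file!r} was not created; "
            "check the whisperx transcription output above for errors."
        )
    return srt_file  # in the real script this would be: pysrt.open(srt_file)
```

In this thread, such a guard would have pointed straight at the cuBLAS error further down instead of the missing `1.srt`.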

```
CUDA is available. Running on GPU.
The torchaudio backend is switched to 'soundfile'. Note that 'sox_io' is not supported on Windows.
The torchaudio backend is switched to 'soundfile'. Note that 'sox_io' is not supported on Windows.
Lightning automatically upgraded your loaded checkpoint from v1.5.4 to v2.2.0.post0. To apply the upgrade to your files permanently, run `python -m pytorch_lightning.utilities.upgrade_checkpoint C:\Users\bobca\.cache\torch\whisperx-vad-segmentation.bin`
Model was trained with pyannote.audio 0.0.1, yours is 3.1.1. Bad things might happen unless you revert pyannote.audio to 0.x.
Model was trained with torch 1.10.0+cu102, yours is 2.0.0+cu118. Bad things might happen unless you revert torch to 1.x.
```

Performing transcription...

```
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Scripts\whisperx.exe\__main__.py", line 7, in <module>
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\whisperx\transcribe.py", line 176, in cli
    result = model.transcribe(audio, batch_size=batch_size, chunk_size=chunk_size, print_progress=print_progress)
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\whisperx\asr.py", line 218, in transcribe
    for idx, out in enumerate(self.__call__(data(audio, vad_segments), batch_size=batch_size, num_workers=num_workers)):
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\transformers\pipelines\pt_utils.py", line 124, in __next__
    item = next(self.iterator)
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\transformers\pipelines\pt_utils.py", line 125, in __next__
    processed = self.infer(item, **self.params)
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\transformers\pipelines\base.py", line 1102, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\whisperx\asr.py", line 152, in _forward
    outputs = self.model.generate_segment_batched(model_inputs['inputs'], self.tokenizer, self.options)
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\whisperx\asr.py", line 47, in generate_segment_batched
    encoder_output = self.encode(features)
  File "C:\Users\bobca\OneDrive\Documents\AI\aiVoiceMaker\audiosplitter_whisper\venv\Lib\site-packages\whisperx\asr.py", line 86, in encode
    return self.model.encode(features, to_cpu=to_cpu)
RuntimeError: Library cublas64_12.dll is not found or cannot be loaded
```
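The root cause here is the last line: the CTranslate2 backend used by whisperx tries to load `cublas64_12.dll` (CUDA 12 cuBLAS) at runtime and can't find it on `PATH`. A quick way to check whether the loader can resolve cuBLAS at all (`locate_cublas` is a hypothetical diagnostic helper, not part of whisperx):

```python
import ctypes.util

def locate_cublas(major=12):
    """Return a resolvable cuBLAS library name/path, or None if missing."""
    # On Windows the runtime loads cublas64_<major>.dll from PATH;
    # on Linux the dynamic loader resolves libcublas.so.<major> instead.
    # A None result here corresponds to the "is not found or cannot be
    # loaded" RuntimeError in the traceback above.
    for name in (f"cublas64_{major}", "cublas"):
        path = ctypes.util.find_library(name)
        if path is not None:
            return path
    return None
```

If this returns `None`, installing the CUDA 12 toolkit (or otherwise getting the cuBLAS 12 DLL onto `PATH`) should resolve the error, which matches the fix reported below.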

lobsterchan27 commented 6 months ago

I think I'm having the same issue; any help would be appreciated. I had a previous issue at https://github.com/JarodMica/audiosplitter_whisper/issues/16#issuecomment-1769431169 and now have this one.

lobsterchan27 commented 6 months ago

I got it working by reinstalling CUDA 12.