chidiwilliams / buzz

Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
https://chidiwilliams.github.io/buzz
MIT License
12.41k stars 938 forks

Failed to execute script 'main' due to unhandled exception: 'NoneType' object has no attribute 'write' #731

Closed ashepp closed 2 months ago

ashepp commented 5 months ago

Getting this error on Windows 11 using 0.9.0

Traceback (most recent call last):
  File "multiprocessing\process.py", line 314, in _bootstrap
  File "multiprocessing\process.py", line 108, in run
  File "buzz\transcriber\whisper_file_transcriber.py", line 91, in transcribe_whisper
  File "buzz\transcriber\whisper_file_transcriber.py", line 168, in transcribe_openai_whisper
  File "stable_whisper\whisper_word_level.py", line 372, in transcribe_word_level
  File "stable_whisper\whisper_word_level.py", line 206, in decode_with_fallback
  File "torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "stable_whisper\whisper_word_level.py", line 825, in decode_word_level
  File "torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "stable_whisper\whisper_word_level.py", line 717, in run
  File "whisper\decoding.py", line 655, in _get_audio_features
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "whisper\model.py", line 162, in forward
  File "torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "torch\nn\modules\conv.py", line 307, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "whisper\model.py", line 48, in _conv_forward
  File "torch\nn\modules\conv.py", line 303, in _conv_forward
    return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [1280, 128, 3], expected input[1, 80, 3000] to have 128 channels, but got 80 channels instead

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 4, in <module>
  File "buzz\buzz.py", line 36, in main
  File "Lib\site-packages\PyInstaller\hooks\rthooks\pyi_rth_multiprocessing.py", line 52, in _freeze_support
  File "multiprocessing\spawn.py", line 116, in spawn_main
  File "multiprocessing\spawn.py", line 129, in _main
  File "multiprocessing\process.py", line 329, in _bootstrap
AttributeError: 'NoneType' object has no attribute 'write'

raivisdejus commented 5 months ago

@ashepp Can you please provide more information about the issue, like steps to reproduce and any settings you used, such as which model type and model size were selected?

The issue seems related to the selected model, so you could try another model type or model size.

hhive commented 5 months ago


It frequently occurs when processing videos longer than two hours. (screenshot attached)

Processor: 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz (2.42 GHz); Installed RAM: 16.0 GB (15.7 GB usable); System type: 64-bit operating system, x64-based processor; Edition: Windows 11 Pro, Version 22H2; Installed on: 2024/5/17; OS build: 22621.3593

hhive commented 5 months ago


It seems to work better if you turn this switch off, but it takes more time.

(screenshot of the setting)

Shawn-Chou commented 4 months ago


I have encountered the same problem. I uninstalled and reinstalled the program, but the problem still occurs. (screenshots attached)

raivisdejus commented 4 months ago

@Shawn-Chou This we can fix, but I need a bit more info. The error states that the language is unsupported. What language is selected in the screenshot? Chinese, right?

And are you running the unmodified latest version v0.9.0 from https://github.com/chidiwilliams/buzz/releases, or a modified version? If it is a modified version, the problem may be with the language code used in buzz/transcriber/transcriber.py. Whisper supports zh as the language code for Chinese, so if that code was changed to something else, it may be the reason you are getting this problem.

Also, as a test, please try running the app with the English interface. You may need to switch the operating system language to English.
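The kind of language-code check involved can be sketched as follows. Note that `SUPPORTED_CODES` and `validate_language` below are illustrative stand-ins, not Buzz's actual code (Buzz's real list lives in buzz/transcriber/transcriber.py):

```python
from typing import Optional

# Illustrative subset only: Whisper expects ISO 639-1 codes such as "zh" for Chinese.
SUPPORTED_CODES = {"en", "zh", "ja", "de", "fr", "es", "ru"}

def validate_language(code: Optional[str]) -> Optional[str]:
    """Return the code unchanged if supported; None means auto-detect.
    A modified build that renamed a language code would fail a check like this."""
    if code is None:
        return None  # auto-detect
    if code not in SUPPORTED_CODES:
        raise ValueError(f"Unsupported language: {code!r}")
    return code
```

So a build that passes, say, "chinese" instead of "zh" would raise the "unsupported language" error seen in the screenshot.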

Shawn-Chou commented 4 months ago


In the imported audio file, most of the speech is in English, with a small amount in Chinese.

Shawn-Chou commented 4 months ago


The selected option is "Automatic language detection".

vincechang1015 commented 4 months ago

Hi sirs,

I had the same error on Windows 10 with the unmodified latest version v0.9.0 from https://github.com/chidiwilliams/buzz/releases.

I ran with the "Whisper" model without any problem.

But if I run with the "Faster Whisper" model, it pops up the error message shown above.

Here are the parameters I set:

- Model: "Faster Whisper", "medium"
- Task: "transcribe"
- Language: "English"
- Word-level timings: unchecked
- TXT: checked
- SRT: checked
- VTT: unchecked

Thanks!

raivisdejus commented 4 months ago

Please try the latest development build; Faster Whisper had some fixes there.

Log into GitHub and download the latest development build from a successful action run: https://github.com/chidiwilliams/buzz/actions?query=branch%3Amain

vincechang1015 commented 4 months ago

Hi @raivisdejus,

I've tried it, but it still fails.

And I'm uncertain whether the lack of internet access on PC_a (my environment) is a factor.

So, this is what I actually did:

Step 1. I installed the Buzz build you mentioned on PC_a, which has no internet access.

Step 2. Subsequently, I installed Buzz on PC_b (which has internet), downloaded the Faster Whisper model, and copied the two folders listed below to PC_a.

Step 3. Then, I ran Buzz on PC_a and got the error message shown below.

------start of the error message------

Unhandled exception in script: Failed to execute script 'main' due to unhandled exception: 'NoneType' object has no attribute 'write'

Traceback (most recent call last):
  File "urllib3\connectionpool.py", line 793, in urlopen
  File "urllib3\connectionpool.py", line 491, in _make_request
  File "urllib3\connectionpool.py", line 467, in _make_request
  File "urllib3\connectionpool.py", line 1099, in _validate_conn
  File "urllib3\connection.py", line 653, in connect
  File "urllib3\connection.py", line 806, in _ssl_wrap_socket_and_match_hostname
  File "urllib3\util\ssl_.py", line 465, in ssl_wrap_socket
  File "urllib3\util\ssl_.py", line 509, in _ssl_wrap_socket_impl
  File "ssl.py", line 517, in wrap_socket
  File "ssl.py", line 1104, in _create
  File "ssl.py", line 1382, in do_handshake
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "requests\adapters.py", line 486, in send
  File "urllib3\connectionpool.py", line 847, in urlopen
  File "urllib3\util\retry.py", line 470, in increment
  File "urllib3\util\util.py", line 38, in reraise
  File "urllib3\connectionpool.py", line 793, in urlopen
  File "urllib3\connectionpool.py", line 491, in _make_request
  File "urllib3\connectionpool.py", line 467, in _make_request
  File "urllib3\connectionpool.py", line 1099, in _validate_conn
  File "urllib3\connection.py", line 653, in connect
  File "urllib3\connection.py", line 806, in _ssl_wrap_socket_and_match_hostname
  File "urllib3\util\ssl_.py", line 465, in ssl_wrap_socket
  File "urllib3\util\ssl_.py", line 509, in _ssl_wrap_socket_impl
  File "ssl.py", line 517, in wrap_socket
  File "ssl.py", line 1104, in _create
  File "ssl.py", line 1382, in do_handshake
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "huggingface_hub\_snapshot_download.py", line 179, in snapshot_download
  File "huggingface_hub\utils\_validators.py", line 118, in _inner_fn
  File "huggingface_hub\hf_api.py", line 2410, in repo_info
  File "huggingface_hub\utils\_validators.py", line 118, in _inner_fn
  File "huggingface_hub\hf_api.py", line 2219, in model_info
  File "requests\sessions.py", line 602, in get
  File "requests\sessions.py", line 589, in request
  File "requests\sessions.py", line 703, in send
  File "huggingface_hub\utils\_http.py", line 67, in send
  File "requests\adapters.py", line 501, in send
requests.exceptions.ConnectionError: (ProtocolError('Connection aborted.', ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None)), '(Request ID: 89648c02-e091-4583-b5d9-717cadfccf38)')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "faster_whisper\utils.py", line 103, in download_model
  File "huggingface_hub\utils\_validators.py", line 118, in _inner_fn
  File "huggingface_hub\_snapshot_download.py", line 251, in snapshot_download
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the files on the Hub and we cannot find the appropriate snapshot folder for the specified revision on the local disk. Please check your internet connection and try again.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "multiprocessing\process.py", line 314, in _bootstrap
  File "multiprocessing\process.py", line 108, in run
  File "buzz\transcriber\whisper_file_transcriber.py", line 93, in transcribe_whisper
  File "buzz\transcriber\whisper_file_transcriber.py", line 131, in transcribe_faster_whisper
  File "faster_whisper\transcribe.py", line 127, in __init__
  File "faster_whisper\utils.py", line 119, in download_model
  File "huggingface_hub\utils\_validators.py", line 118, in _inner_fn
  File "huggingface_hub\_snapshot_download.py", line 235, in snapshot_download
huggingface_hub.utils._errors.LocalEntryNotFoundError: Cannot find an appropriate cached snapshot folder for the specified revision on the local disk and outgoing traffic has been disabled. To enable repo look-ups and downloads online, pass 'local_files_only=False' as input.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 4, in <module>
  File "buzz\buzz.py", line 37, in main
  File "PyInstaller\hooks\rthooks\pyi_rth_multiprocessing.py", line 50, in _freeze_support
  File "multiprocessing\spawn.py", line 122, in spawn_main
  File "multiprocessing\spawn.py", line 135, in _main
  File "multiprocessing\process.py", line 329, in _bootstrap
AttributeError: 'NoneType' object has no attribute 'write'

------end of the error message------

raivisdejus commented 4 months ago

In the very latest versions available from Actions on GitHub, the model download path for Faster Whisper has changed; models should be in C:\Users\MyUserName\AppData\Local\Buzz\Buzz\Cache\models. I recommend copying over the whole models folder.

Please note that some models in the current version will still be stored in Cache\whisper, but that will change soon, so in the future you can expect all models to be in the models folder.
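For anyone scripting the offline transfer described above, a rough sketch is below. The paths are the ones mentioned in this thread and may differ between Buzz versions; this is not official tooling, just an illustration:

```python
import shutil
from pathlib import Path

def copy_model_cache(src_root: str, dst_root: str) -> list:
    """Copy a Buzz model cache exported from an online machine (src_root)
    into the offline machine's cache location (dst_root).
    Returns the names of the copied entries."""
    src = Path(src_root)
    dst = Path(dst_root)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for entry in src.iterdir():
        target = dst / entry.name
        if entry.is_dir():
            # model snapshots are directories; merge into any existing copy
            shutil.copytree(entry, target, dirs_exist_ok=True)
        else:
            # standalone files such as ggml-*.bin
            shutil.copy2(entry, target)
        copied.append(entry.name)
    return copied
```

On Windows the destination would be something like C:\Users\MyUserName\AppData\Local\Buzz\Buzz\Cache\models, per the comment above.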

vincechang1015 commented 4 months ago

Hi @raivisdejus,

Thank you for your explanation.

I think I might not have described it clearly. However, I did copy the whole directory below: C:\Users\MyUserName\AppData\Local\Buzz\Buzz (and all its sub-directories)

So, in my case, I still cannot get past this, even though I have tried everything I can think of.

raivisdejus commented 4 months ago

I think "Whisper" and "Whisper.cpp" models should work. See if you have a ....\Cache\Buzz\models\whisper folder for Whisper models, and files that start with ggml-* in ....\Cache\Buzz\models\, e.g. ggml-model-whisper-tiny.bin for "Whisper.cpp".

For other types of Whisper models you will also need to move over the Hugging Face Hub cache. See C:\Users\MyUserName\.cache; there should be a huggingface folder there.

vincechang1015 commented 4 months ago

Hi @raivisdejus,

Yes, the "Whisper" model works well on my machine. However, I look forward to reducing the transcription time with "Faster Whisper."

shuke12306 commented 4 months ago

Is this issue fixed at the moment? I'm experiencing the same issue.

Traceback (most recent call last):
  File "whisper\audio.py", line 58, in load_audio
  File "subprocess.py", line 524, in run
subprocess.CalledProcessError: Command '['ffmpeg', '-nostdin', '-threads', '0', '-i', 'C:/Users/舒克/Downloads/dcdcc.mp4', '-f', 's16le', '-ac', '1', '-acodec', 'pcm_s16le', '-ar', '16000', '-']' returned non-zero exit status 3753488571.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "multiprocessing\process.py", line 314, in _bootstrap
  File "multiprocessing\process.py", line 108, in run
  File "buzz\transcriber\whisper_file_transcriber.py", line 91, in transcribe_whisper
  File "buzz\transcriber\whisper_file_transcriber.py", line 168, in transcribe_openai_whisper
  File "stable_whisper\whisper_word_level.py", line 178, in transcribe_word_level
  File "whisper\audio.py", line 140, in log_mel_spectrogram
  File "whisper\audio.py", line 60, in load_audio
RuntimeError: Failed to load audio: ffmpeg version 6.1-full_build-www.gyan.dev Copyright (c) 2000-2023 the FFmpeg developers

built with gcc 12.2.0 (Rev10, Built by MSYS2 project)

configuration: --enable-gpl --enable-version3 --enable-static --pkg-config=pkgconf --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-ffnvcodec --enable-nvdec --enable-nvenc --enable-dxva2 --enable-d3d11va --enable-libvpl --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint

libavutil 58. 29.100 / 58. 29.100

libavcodec 60. 31.102 / 60. 31.102

libavformat 60. 16.100 / 60. 16.100

libavdevice 60. 3.100 / 60. 3.100

libavfilter 9. 12.100 / 9. 12.100

libswscale 7. 5.100 / 7. 5.100

libswresample 4. 12.100 / 4. 12.100

libpostproc 57. 3.100 / 57. 3.100

[mov,mp4,m4a,3gp,3g2,mj2 @ 000001cfa9858680] reached eof, corrupted STSS atom

[mov,mp4,m4a,3gp,3g2,mj2 @ 000001cfa9858680] error reading header

[in#0 @ 000001cfa98584c0] Error opening input: End of file

Error opening input file C:/Users/舒克/Downloads/dcdcc.mp4.

Error opening input files: End of file

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 4, in <module>
  File "buzz\buzz.py", line 36, in main
  File "Lib\site-packages\PyInstaller\hooks\rthooks\pyi_rth_multiprocessing.py", line 52, in _freeze_support
  File "multiprocessing\spawn.py", line 116, in spawn_main
  File "multiprocessing\spawn.py", line 129, in _main
  File "multiprocessing\process.py", line 329, in _bootstrap
AttributeError: 'NoneType' object has no attribute 'write'

raivisdejus commented 4 months ago

@shuke12306 Something is wrong with your ffmpeg.

Option 1: try installing it with choco install ffmpeg; more info here: https://github.com/chidiwilliams/buzz/blob/main/CONTRIBUTING.md

Option 2: run ffmpeg on the command line:

ffmpeg -nostdin -threads 0 -i C:/Users/舒克/Downloads/dcdcc.mp4 -f s16le -ac 1 -acodec pcm_s16le -ar 16000

There may be more debug output that can help identify the issue.

Also, a question: do other files work, or do all of them fail? If possible, upload a file somewhere and share it. It is not likely that the file itself has an issue, but we could still try to test it on other installations.
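To capture that debug output programmatically, one could mirror Whisper's internal ffmpeg invocation from Python. The helper names below are made up for illustration; the command arguments are the ones shown in the traceback above:

```python
import subprocess

def ffmpeg_decode_cmd(path):
    """Build the same decode command Whisper uses internally
    (16 kHz mono signed 16-bit PCM written to stdout)."""
    return [
        "ffmpeg", "-nostdin", "-threads", "0", "-i", path,
        "-f", "s16le", "-ac", "1", "-acodec", "pcm_s16le",
        "-ar", "16000", "-",
    ]

def probe(path):
    """Run the decode and return ffmpeg's stderr, which carries the useful
    diagnostics (e.g. 'corrupted STSS atom', 'error reading header')."""
    result = subprocess.run(ffmpeg_decode_cmd(path), capture_output=True)
    # Deliberately no check=True: we want the stderr even on failure.
    return result.stderr.decode(errors="replace")
```

Running probe() on the failing .mp4 should reproduce the header errors quoted in the traceback without involving Buzz at all, which helps separate file problems from app problems.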

raivisdejus commented 3 months ago

@vincechang1015 An issue with Faster Whisper was fixed in the latest development version: https://github.com/chidiwilliams/buzz/actions/runs/10211346492. You may need to delete the downloaded models and re-download them.

raivisdejus commented 2 months ago

@vincechang1015 Faster Whisper will not work on Windows with CUDA; the required cuBLAS (or HPC SDK) for Windows is currently not available. Regular Whisper and Hugging Face models should work with CUDA GPU support, so to get faster transcription you can check the notes on getting GPU support working on Windows: https://github.com/chidiwilliams/buzz/blob/main/CONTRIBUTING.md#gpu-support

On systems without Nvidia GPUs, Whisper.cpp should be faster than Faster Whisper.

On Linux, all models work with and without GPU support.
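A quick way to confirm whether the regular Whisper path can actually see your GPU is to ask PyTorch directly. This small helper only imports torch if it is installed, so it is safe to paste into any Python prompt:

```python
def cuda_status():
    """Report whether PyTorch can see a CUDA device."""
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if torch.cuda.is_available():
        # Name of the first visible GPU, e.g. an RTX/GTX card
        return "CUDA available: " + torch.cuda.get_device_name(0)
    return "CUDA not available (CPU only)"

print(cuda_status())
```

If this reports "CUDA not available", the GPU-support notes linked above are the place to start.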

Zsural commented 2 months ago

@raivisdejus Hey! I also encountered the issues ('NoneType' and 'cublas64_12'); detailed info below:

Traceback (most recent call last):
  File "multiprocessing\process.py", line 314, in _bootstrap
  File "multiprocessing\process.py", line 108, in run
  File "buzz\transcriber\whisper_file_transcriber.py", line 97, in transcribe_whisper
  File "buzz\transcriber\whisper_file_transcriber.py", line 152, in transcribe_faster_whisper
  File "faster_whisper\transcribe.py", line 511, in generate_segments
  File "faster_whisper\transcribe.py", line 762, in encode
RuntimeError: Library cublas64_12.dll is not found or cannot be loaded

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 4, in <module>
  File "buzz\buzz.py", line 37, in main
  File "PyInstaller\hooks\rthooks\pyi_rth_multiprocessing.py", line 50, in _freeze_support
  File "multiprocessing\spawn.py", line 122, in spawn_main
  File "multiprocessing\spawn.py", line 135, in _main
  File "multiprocessing\process.py", line 329, in _bootstrap
AttributeError: 'NoneType' object has no attribute 'write'

The version is 1.0.1, and I use the Faster Whisper method. My system is Windows 10, and the hardware is a GTX 1660 Ti with CUDA 11.8 + cuDNN 8.9. I only have cublas64_11.dll. How can I replace it with cublas64_12.dll, and are there any other requirements I must change? Thanks!

raivisdejus commented 2 months ago

@Zsural Faster Whisper supports only CUDA 12. I think your best option is to update CUDA.

To use CUDA 11 you would need to run the source version and downgrade the faster-whisper module to an older version. See the notes here: https://github.com/chidiwilliams/buzz/blob/main/CONTRIBUTING.md and https://github.com/SYSTRAN/faster-whisper?tab=readme-ov-file#gpu

raivisdejus commented 2 months ago

I will close the issue. Feel free to open new ones if you encounter this and none of the suggestions work on the latest development version from https://github.com/chidiwilliams/buzz/actions/workflows/ci.yml?query=branch%3Amain. To download artifacts you need to be logged into GitHub.

shuke12306 commented 2 months ago

@raivisdejus Thank you very much for your dedication to the project, I am already using it successfully.