Koischizo / AI-Vtuber

AI Livestreamer for YouTube
https://www.youtube.com/watch?v=1MrDnLBc-wQ
MIT License

A problem with ffmpeg cache? #11

Open lordmax20000 opened 1 year ago

lordmax20000 commented 1 year ago

I'm a total beginner. The program was working fine, but lately, after the first few messages (sometimes even at the first one), I get this message and the program crashes:

libavutil      58.  2.100 / 58.  2.100
libavcodec     60.  3.100 / 60.  3.100
libavformat    60.  3.100 / 60.  3.100
libavdevice    60.  1.100 / 60.  1.100
libavfilter     9.  3.100 /  9.  3.100
libswscale      7.  1.100 /  7.  1.100
libswresample   4. 10.100 /  4. 10.100
libpostproc    57.  1.100 / 57.  1.100
[cache @ 000001c2b6453340] Inner protocol failed to seekback end : -40
    Last message repeated 1 times
[mp3 @ 000001c2b6455ac0] Failed to read frame size: Could not seek to 1239.
[cache @ 000001c2b6453340] Statistics, cache hits:2 cache misses:1
cache:pipe:0: Invalid argument

I've tried updating ffmpeg, but that doesn't help, and when I search online there seem to be several different causes, so I don't really know what to do.

lordmax20000 commented 1 year ago

Never mind, I think the problem was that I used up my ElevenLabs characters without even noticing.
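
For context: when the ElevenLabs character quota is exhausted, the API returns a JSON error body instead of MP3 audio, and feeding that to pydub/ffmpeg can produce cache/seek errors like the ones above. A minimal sketch of a guard, assuming the same requests call the repo's EL_TTS makes (fetch_tts_audio is a hypothetical helper name, not something in run.py):

import requests

def fetch_tts_audio(url, headers, data):
    # Hypothetical guard around the ElevenLabs request: if the quota is used up,
    # the API answers with JSON instead of audio, so raise a readable error
    # instead of handing that JSON to pydub/ffmpeg.
    response = requests.post(url, headers=headers, json=data)
    if not response.ok or 'audio' not in response.headers.get('Content-Type', ''):
        raise RuntimeError(f"ElevenLabs returned {response.status_code}: {response.text}")
    return response.content  # raw MP3 bytes, safe to decode with AudioSegment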

lordmax20000 commented 1 year ago

I was wondering whether it would be possible to change the TTS part of the code and use MaryTTS or eSpeak instead. Wouldn't we only have to change this part of the code?

"def EL_TTS(message):

url = f'https://api.elevenlabs.io/v1/text-to-speech/{EL.voice}'
headers = {
    'accept': 'audio/mpeg',
    'xi-api-key': EL.key,
    'Content-Type': 'application/json'
}
data = {
    'text': message,
    'voice_settings': {
        'stability': 0.75,
        'similarity_boost': 0.75
    }
}

response = requests.post(url, headers=headers, json=data, stream=True)
audio_content = AudioSegment.from_file(io.BytesIO(response.content), format="mp3")
play(audio_content)

"

If we do that, it would be free, right? I mean, we would still pay for ChatGPT but not for ElevenLabs.
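
For what it's worth, a minimal sketch of what a local replacement might look like, assuming the pyttsx3 package (which drives eSpeak on Linux and SAPI5 on Windows); local_tts is a hypothetical name, meant to be called wherever EL_TTS(message) is called today:

import pyttsx3

def local_tts(message):
    # Speak the message with the system TTS engine: no API key, no network,
    # and no MP3 decoding step, so pydub/ffmpeg are not involved at all.
    engine = pyttsx3.init()
    engine.setProperty('rate', 170)  # speaking rate in words per minute
    engine.say(message)
    engine.runAndWait()              # blocks until playback finishes

With something like that, the only paid dependency left would be the OpenAI API, as you say.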

Kopbabakop commented 1 year ago

@lordmax20000 Bro, how did you do that? I can't do it because of a pyproject.toml-based error.

lordmax20000 commented 1 year ago

I honestly don't remember; I probably asked ChatGPT, though.

lordmax20000 commented 1 year ago

In the end it didn't work, though, so I used the default voice instead.

Fadlay commented 1 year ago

I have the same issue, please help me. How did you fix it? @Kopbabakop

HiPach commented 5 months ago

I have the same problem and I don't know what to do. Can anyone help?

E:\Neuro_net\AI-Vtuber\myenv\Lib\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work
  warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning)
Running!

2024-03-20 05:08:28 [LGOFBL]- hi

Traceback (most recent call last):
  File "E:\Neuro_net\AI-Vtuber\run.py", line 149, in <module>
    read_chat()
  File "E:\Neuro_net\AI-Vtuber\run.py", line 113, in read_chat
    response = llm(message)
               ^^^^^^^^^^^^
  File "E:\Neuro_net\AI-Vtuber\run.py", line 130, in llm
    response = openai.Completion.create(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Neuro_net\AI-Vtuber\myenv\Lib\site-packages\openai\api_resources\completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Neuro_net\AI-Vtuber\myenv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "E:\Neuro_net\AI-Vtuber\myenv\Lib\site-packages\openai\api_requestor.py", line 298, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Neuro_net\AI-Vtuber\myenv\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
    self._interpret_response_line(
  File "E:\Neuro_net\AI-Vtuber\myenv\Lib\site-packages\openai\api_requestor.py", line 765, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: The model text-davinci-003 has been deprecated, learn more here: https://platform.openai.com/docs/deprecations
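
The last line of the traceback is the actual failure: run.py still calls the legacy openai.Completion endpoint with text-davinci-003, which OpenAI has retired. A minimal sketch of an updated llm(), assuming the pre-1.0 openai package that the traceback shows (the model choice and max_tokens value are placeholders, not the repo's actual settings):

import openai

def llm(message):
    # text-davinci-003 is gone; a current chat model via ChatCompletion is the
    # closest drop-in replacement under the legacy 0.x openai package.
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": message}],
        max_tokens=128,  # placeholder limit, tune as needed
    )
    return completion.choices[0].message["content"]

The earlier RuntimeWarning is a separate issue: pydub could not find ffmpeg on PATH, so even after the model error is fixed, audio playback may still fail until ffmpeg is installed and discoverable.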