rany2 / edge-tts

Use Microsoft Edge's online text-to-speech service from Python WITHOUT needing Microsoft Edge or Windows or an API key
https://pypi.org/project/edge-tts/
GNU General Public License v3.0

How can I change the voice using a custom voice list I made? #198

Closed. Awesomeali01 closed this issue 1 month ago.

Awesomeali01 commented 3 months ago
import asyncio
import os
import pygame
import edge_tts

# Friendly display names mapped to edge-tts voice identifiers
voice_map = {
    "English 1": "en-US-ChristopherNeural",
    "English 2": "en-US-EricNeural",
    "Hindi m": "gu-IN-NiranjanNeural",
    "Hindi F": "gu-IN-DhwaniNeural"
}

async def generate_audio(text, voice, output_file, rate=1.0):
    # NOTE: rate is accepted but never used; Communicate runs at the default speaking rate.
    communicate = edge_tts.Communicate(text, voice)
    await communicate.save(output_file)

def speak(text, voice=None, output_file="test.mp3", rate=2.0):
    # Resolve a display name to an edge-tts voice, falling back to en-CA-LiamNeural.
    # (Note the mismatch: the elif chain checks "English A"/"English B",
    # while voice_map defines "English 1"/"English 2".)
    if voice is None:
        voice = "en-CA-LiamNeural"
    elif voice == "Hindi m":
        voice = "gu-IN-NiranjanNeural"
    elif voice == "Hindi F":
        voice = "gu-IN-DhwaniNeural"
    elif voice == "English A":
        voice = "en-US-ChristopherNeural"
    elif voice == "English B":
        voice = "en-US-EricNeural"
    elif voice not in voice_map:
        voice = "en-CA-LiamNeural"
    else:
        voice = voice_map.get(voice, "en-CA-LiamNeural")
    asyncio.run(generate_audio(text, voice, output_file, rate=rate))
    # Play the generated file with pygame and block until playback finishes.
    pygame.mixer.init()
    pygame.mixer.music.load(output_file)
    try:
        pygame.mixer.music.play()
        while pygame.mixer.music.get_busy():
            pygame.time.Clock().tick(10)
    except Exception as e:
        print(e)
    finally:
        pygame.mixer.music.stop()
        pygame.mixer.quit()
    os.remove(output_file)
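[Editor's note: an aside on the rate argument above, which is accepted but never reaches edge-tts. A minimal sketch of how it could be wired through, assuming a float multiplier (1.0 = normal speed) converted to the signed percentage string that edge_tts.Communicate accepts, e.g. 2.0 becomes "+100%":]

import asyncio
import edge_tts

def _rate_to_percent(multiplier: float) -> str:
    # Convert a float multiplier (1.0 = normal speed) into the
    # signed percentage string expected by edge_tts.Communicate.
    return f"{round((multiplier - 1.0) * 100):+d}%"

async def generate_audio(text, voice, output_file, rate=1.0):
    communicate = edge_tts.Communicate(text, voice, rate=_rate_to_percent(rate))
    await communicate.save(output_file)

# Example: save speech at twice the normal speaking rate.
# asyncio.run(generate_audio("Hello", "en-CA-LiamNeural", "test.mp3", rate=2.0))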
rany2 commented 3 months ago

I don't understand the question, sorry.

Awesomeali01 commented 1 month ago

I don't understand the question, sorry.

I was asking how I can build a function that lets me switch between voices permanently. There are several different voices, and the predefined default is en-CA-LiamNeural.

rany2 commented 1 month ago

Why don't you just specify the voice as a parameter to your generate_audio as usual? What's wrong with doing that?
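[Editor's note: a minimal sketch of what rany2 is suggesting, assuming the friendly names and voice_map from the code above, plus a module-level default that can be changed once to switch the voice "permanently":]

import asyncio
import edge_tts

voice_map = {
    "English 1": "en-US-ChristopherNeural",
    "English 2": "en-US-EricNeural",
    "Hindi m": "gu-IN-NiranjanNeural",
    "Hindi F": "gu-IN-DhwaniNeural",
}

DEFAULT_VOICE = "en-CA-LiamNeural"  # change this once to switch the default voice

async def generate_audio(text, voice, output_file):
    communicate = edge_tts.Communicate(text, voice)
    await communicate.save(output_file)

def speak(text, voice_name=None, output_file="test.mp3"):
    # Resolve a friendly name via voice_map; fall back to the default voice.
    voice = voice_map.get(voice_name, DEFAULT_VOICE)
    asyncio.run(generate_audio(text, voice, output_file))

# speak("Hello there")                    -> uses en-CA-LiamNeural
# speak("Namaste", voice_name="Hindi m")  -> uses gu-IN-NiranjanNeural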