brailcom / speechd

Common high-level interface to speech synthesis
GNU General Public License v2.0
231 stars 67 forks

module request: piper #866

Open KiaraGrouwstra opened 1 year ago

KiaraGrouwstra commented 1 year ago

Piper is 'a fast, local neural text to speech system' (samples here). It would be nice to have speechd support it as well.

cross-post: https://github.com/rhasspy/piper/issues/265

/cc @Elleo, who has done some work integrating these through Pied.

csukuangfj commented 1 year ago

I suggest that you also have a look at https://github.com/k2-fsa/sherpa-onnx

It is implemented in C++ and has various APIs for different languages, e.g., Python/C/Go/C#/Swift/Kotlin, etc.

You can find Android APKs for it at https://k2-fsa.github.io/sherpa/onnx/tts/apk.html

You can also try it in our huggingface space without installing anything. https://huggingface.co/spaces/k2-fsa/text-to-speech


By the way, it supports models from piper as well.

csukuangfj commented 1 year ago

Also cc @Elleo . You may find sherpa-onnx interesting. It supports both speech-to-text and text-to-speech.

Elleo commented 1 year ago

Just for a little context on what I'm doing: Pied can currently configure speech dispatcher to work with Piper through the sd_generic module, but my long-term plan is to create a piper speech dispatcher module that can be kept loaded, to further reduce latency and add support for speed/pitch/etc. changes.

@csukuangfj Thanks, that's interesting, I'll check it out!

coderalpha commented 1 year ago

I'm trying to integrate Piper through the sd_generic module, but I get the error:

speechd: Error: Module reported error in request from speechd (code 3xx): 300-Opening sound device failed. Reason: Cannot open plugin server. error: file not found.

I can't find any information on this error. Any help will be appreciated!

I added the module in speechd.conf:

AddModule "piper" "sd_generic" "piper.conf"
DefaultVoiceType "FEMALE1"
DefaultModule "piper"
DefaultLanguage "en"
AudioOutputMethod "libao"

And created the piper.conf file in the /etc/speech-dispatcher/modules directory:

AddVoice "en" "FEMALE1" "en_US-amy-medium.onnx"
DefaultVoice "en_US-amy-medium.onnx"
GenericExecuteSynth "echo \'$DATA\' | /home/dev/Apps/piper/piper --model /home/dev/Apps/piper/models/en_US-amy-medium.onnx --output_raw | paplay"

sthibaul commented 1 year ago

As mentioned in the issue template, add Debug 1 to the speechd config file and the module config file, and get the corresponding log files, so we get to know what exactly went wrong.

sthibaul commented 1 year ago

(of course, the issue template knows better, that's why we write documentation, so we don't have to rely on our memory: it's LogLevel 5 in the speechd config, and indeed Debug 1 in the module config)
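For reference, the two settings live in different files; a minimal sketch (assuming the per-user config under ~/.config/speech-dispatcher/; the system-wide files under /etc/speech-dispatcher/ work the same way):

```
# ~/.config/speech-dispatcher/speechd.conf
LogLevel 5

# ~/.config/speech-dispatcher/modules/piper.conf
Debug 1
```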

coderalpha commented 1 year ago

I set the LogLevel to 5 and Debug to 1 and attached the files speechd.zip

coderalpha commented 1 year ago

I've obviously been trying everything to get this going and in the process I've changed a lot of the config and it might not be optimal. Since it seems to be an issue with the loading of the sound plugin, I changed the AudioOutputMethod back to "pulse". I'm running Ubuntu 22.04 and according to the output of inxi, Pulse is running:

System: Host: dev Kernel: 6.2.0-36-generic x86_64 bits: 64 Desktop: N/A Distro: Ubuntu 22.04.3 LTS (Jammy Jellyfish)
Machine: Type: Desktop Mobo: ASUSTeK model: WS X299 SAGE/10G v: Rev 1.xx serial: <superuser required> UEFI: American Megatrends v: 3601 date: 09/24/2021
Audio: Device-1: Intel 200 Series PCH HD Audio driver: snd_hda_intel
Device-2: AMD Navi 21 HDMI Audio [Radeon RX 6800/6800 XT / 6900 XT] driver: snd_hda_intel
Device-3: AMD Navi 21 HDMI Audio [Radeon RX 6800/6800 XT / 6900 XT] driver: snd_hda_intel
Sound Server-1: ALSA v: k6.2.0-36-generic running: yes
Sound Server-2: PulseAudio v: 15.99.1 running: yes
Sound Server-3: PipeWire v: 0.3.48 running: yes

Then I get the following error:

speechd: Error: Module reported error in request from speechd (code 3xx): 300-Opening sound device failed. Reason: Couldn't open pulse plugin.

I noticed that there is an Ubuntu package speech-dispatcher-audio-plugins; it is installed and contains the following:

/usr/lib/x86_64-linux-gnu/speech-dispatcher
/usr/lib/x86_64-linux-gnu/speech-dispatcher/spd_alsa.so
/usr/lib/x86_64-linux-gnu/speech-dispatcher/spd_libao.so
/usr/lib/x86_64-linux-gnu/speech-dispatcher/spd_oss.so
/usr/lib/x86_64-linux-gnu/speech-dispatcher/spd_pulse.so

So, the Pulse plugin is installed.

sthibaul commented 1 year ago

Since it is using the generic module, piper.conf just passes audio to paplay (though it should rather use $PLAY_COMMAND so it works automatically with pulse, ao, alsa, etc.), so there is no need for an audio plugin.
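A sketch of what that change would look like in the module config (the piper paths are just the ones from the earlier comment; $PLAY_COMMAND gets substituted by speech-dispatcher according to the configured AudioOutputMethod):

```
GenericExecuteSynth "echo \'$DATA\' | /home/dev/Apps/piper/piper --model /home/dev/Apps/piper/models/en_US-amy-medium.onnx --output_raw | \$PLAY_COMMAND"
```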

Reason: Cannot open plugin server. error: file not found — that actually happens with other generic modules too: the server first tries to route the audio through itself, notices the error, and falls back to letting the module open the audio device itself. The warning is indeed confusing; I have now fixed it.

Your speechd.log seems to show various attempts; I can't tell which part corresponds to which configuration you used.

Actually, at the end of your speechd.log there doesn't seem to be any issue?

coderalpha commented 1 year ago

Yes, there aren't any issues logged in speech-dispatcher.log or piper.log, but it isn't working: no sound is played. If I run the command directly, i.e.

echo "hello" | /home/dev/Apps/piper/piper --model /home/dev/Apps/piper/models/en_US-amy-medium.onnx --output_raw

it works.

sthibaul commented 1 year ago

Your command is missing the paplay part?

Also, in your speech-dispatcher.log I don't see any speech attempt, how do you actually test it?

coderalpha commented 1 year ago

Yes, the full command I run on the command-line is: "echo 'hello' | ./piper --output-raw --model models/en_US-amy-medium.onnx | aplay -r 22050 -f S16_LE -t raw"

Through speech-dispatcher, I test it on the command-line with spd-say "hello"

sthibaul commented 1 year ago

Yes, the full command I run on the command-line is:

You are using aplay here, not paplay, you need to test exactly the same way as you described in the .conf file...

Through speech-dispatcher, I test it on the command-line with spd-say "hello"

Then please provide the logs that correspond to this test. The logs you uploaded didn't contain anything about that.

murlakatamenka commented 1 year ago

"echo 'hello' | ./piper --output-raw --model models/en_US-amy-medium.onnx | aplay -r 22050 -f S16_LE -t raw"

is there a specific reason for ./piper? I would suggest using just piper or absolute path like /usr/bin/piper.

coderalpha commented 1 year ago

I've changed the configuration to the simplest case to avoid confusion. I selected alsa for the audio output.

I can use the following command and it works:

echo "hello" | /home/dev/Apps/piper/piper --model /home/dev/Apps/piper/models/en_US-amy-medium.onnx --output_raw | aplay -r 22050 -f S16_LE -t raw -

According to the log file everything looks good to me, yet no sound. speechd.zip

sthibaul commented 1 year ago

There is still a difference: /home/dev vs /home/ws2.

And your speech-dispatcher.log still doesn't show any attempt to speak anything. No client ever connects to it within the 5-second daemon timeout:

[Mon Nov 13 12:05:59 2023 : 716872] speechd:    Currently no clients connected, enabling shutdown timer.
[Mon Nov 13 12:05:59 2023 : 716898] speechd:    speak_queue Playback thread starting.......
[Mon Nov 13 12:06:04 2023 : 875778] speechd: Terminating...

Again: how exactly do you test?

coderalpha commented 1 year ago

Again: spd-say "hello"

sthibaul commented 1 year ago

But that does not show up at all in the logs... Are you sure you have only one installation of speech-dispatcher, as in: is spd-say actually connecting to the speech-dispatcher daemon that you are starting? Does it work with other speech syntheses?

coderalpha commented 1 year ago

I just tried to get going from scratch on a different computer and now I have the issue where speech-dispatcher doesn't want to start.

sudo systemctl restart speech-dispatcher

Job for speech-dispatcher.service failed because the control process exited with error code. See "systemctl status speech-dispatcher.service" and "journalctl -xeu speech-dispatcher.service" for details.

In the log file, it is the same issue:

Reply from output module: 300-Opening sound device failed. Reason: Cannot open plugin server. error: file not found. 300 MODULE ERROR
[Mon Nov 13 14:23:37 2023 : 432859] speechd: Error: Module reported error in request from speechd (code 3xx): 300-Opening sound device failed. Reason: Cannot open plugin server. error: file not found.

And the command produces output from piper:

echo "hello" | ~/Apps/piper/piper --output-raw --model ~/Apps/piper/models/en_US-amy-medium.onnx | aplay -r 22050 -f S16_LE -t raw

spd-say works but it isn't using piper.

inxi -SMA

System: Host: GCS-WS5 Kernel: 6.2.0-36-generic x86_64 bits: 64 Desktop: N/A Distro: Ubuntu 22.04.3 LTS (Jammy Jellyfish)
Machine: Type: Laptop System: Dell product: Precision 5570 v: N/A serial: <superuser required> Mobo: Dell model: 03M8N5 v: A00 serial: <superuser required> UEFI: Dell v: 1.18.0 date: 09/12/2023
Audio: Device-1: Intel Alder Lake PCH-P High Definition Audio driver: snd_hda_intel
Sound Server-1: ALSA v: k6.2.0-36-generic running: yes
Sound Server-2: PulseAudio v: 15.99.1 running: yes
Sound Server-3: PipeWire v: 0.3.48 running: yes

It seems that spd-say is not using the speech-dispatcher instance I configured, as the voice is different from the piper voice. This is a standard Ubuntu install.

sthibaul commented 1 year ago

Reason: Cannot open plugin server. error: file not found.

As I already mentioned, this is just a harmless warning. What's important is after that. That's why one should always put the whole log in the bug report.

using the command produces output from piper: echo "hello" | ~/Apps/piper/piper --output-raw --model ~/Apps/piper/models/en_US-amy-medium.onnx | aplay -r 22050 -f S16_LE -t raw

Does that work as root? You are starting speech-dispatcher from systemd, but that assumes that you can emit audio from a root-started speech-dispatcher. Nowadays, the usual setup is rather that you don't start speech-dispatcher from systemd, but let it get auto-started by the spd-say call.

spd-say works but it isn't using piper.

You can use spd-say -O to get the list of modules, and spd-say -o yourmodule foo to select which module you want speech to go through.

coderalpha commented 1 year ago

I can run the command with sudo and get audio output:

sudo echo "hello" | ~/Apps/piper/piper --output-raw --model ~/Apps/piper/models/en_US-amy-medium.onnx | aplay -r 22050 -f S16_LE -t raw

According to its description, spd-say does "send text-to-speech output request to speech-dispatcher".

But it is not using the speech-dispatcher that I've configured! If I run spd-say -O -L, I get: OUTPUT MODULES espeak-ng, with LOTS of voices. In my speechd.conf I commented out the module "espeak-ng". It seems speech-dispatcher is getting a different config.

It doesn't seem like there is any logic to how this operates...

It seems I have to abandon this, but I'm working on a Qt application, and QtTextToSpeech integrates with speech-dispatcher.

sthibaul commented 1 year ago

I can run the command with sudo and get audio output:

sudo only applies to the first command of your pipeline. It's just before aplay that you want to put sudo so as to properly test audio as root.

But it is not using the speech-dispatcher that I've configured!

Maybe check whether you might have different log files in /var/log, in /run/user/*/log

It doesn't seem like there is any logic to how this operates...

There is; it's just that with today's desktops things have become more involved, as system-wide daemons are now frowned upon, and thus daemons are rather started in user sessions.

coderalpha commented 1 year ago

Using sudo before aplay results in:

ALSA lib pcm_dmix.c:1032:(snd_pcm_dmix_open) unable to open slave
aplay: main:831: audio open error: Device or resource busy
[2023-11-13 16:14:47.232] [piper] [info] Loaded voice in 0.192067876 second(s)
[2023-11-13 16:14:47.232] [piper] [info] Initialized piper

sthibaul commented 1 year ago

Using sudo before aplay results in:

So that explains why using a system-wide speech-dispatcher won't work. And thus why you want to just let the speechd auto-start trigger in your desktop session (as is the default), and see logs in /run/user/*/log

coderalpha commented 1 year ago

Can you point me to the documentation to do this? All the explanations I've seen show the configuration I've applied.

How do I undo the changes I've made? Do I just remove the references in speechd.conf to piper?

sthibaul commented 1 year ago

Can you point me to the documentation to do this?

It's already the default. Your spd-say call is probably already doing that, and you are just not opening the log files corresponding to that. Again, normally they end up in something like /run/user/*/log.

All the explanations I've seen show the configuration I've applied.

Yes, that's the problem with documentation when people don't take the time to update it. Help is welcome.

How do I undo the changes I've made? Do I just remove the references in speechd.conf to piper?

You probably don't need to do anything, and just make sure to open the logs that actually correspond to the instance that is auto-started.

coderalpha commented 1 year ago

There is no log directory in the /run/user/1000 directory.

So where do I configure the piper module if the way I did it is incorrect?

coderalpha commented 1 year ago

Is this the correct way:

https://hojjatsblog.com/better-tts-in-linux.html

I've tried it as well, and it doesn't work either

jpwhiting commented 1 year ago

The end of that blog post enables and starts speech-dispatcher, which runs it as a service, i.e. running as root. To avoid that, don't enable it and don't start it. Or, if you've already enabled and started it, stop it and disable it.

spd-say will connect to whichever speech-dispatcher is running and listening on the socket. If there isn't one listening it will start the daemon (as your user, not root) for you and talk to it instead.

Hope that helps, Jeremy


sthibaul commented 1 year ago

There is no log directory in the /run/user/1000 directory.

Immediately after running spd-say (i.e. within at most 5 seconds), does ps x | grep speech show something?

Alternatively, you can run it by hand with speech-dispatcher -s -t 0 to make sure it stays alive, and use lsof -p $(pidof speech-dispatcher) to see which log file it is writing to (that depends on whatever desktop XDG configuration you have).

So where do I configure the piper module if the way I did it is incorrect?

The question is not the piper module for now, but speechd itself: we are not looking at the right logs.

Is this the correct way:

No, because it uses a system-wide speechd, which is frowned upon by pulseaudio/pipewire and the like.

coderalpha commented 1 year ago

blog post enables and starts speech-dispatcher which runs it as a service, i.e. running as root. To not do that don't enable and don't start it. Or if you've already enabled and started. Stop it and disable it.

I stopped speech-dispatcher, hope I understood correctly:

sudo systemctl stop speech-dispatcher
sudo systemctl disable speech-dispatcher

I also deleted the log file in /run/user/1000/speech-dispatcher/log/, not sure if this was a good idea, as they weren't recreated.

Now if I run: spd-say "hello" ps x | grep speech

I get the following:

935927 ?  Sl   0:00 /usr/lib/speech-dispatcher-modules/sd_generic /home/ws2/.config/speech-dispatcher/modules/piper.conf
935930 ?  Ssl  0:00 /usr/bin/speech-dispatcher --spawn --communication-method unix_socket --socket-path /run/user/1000/speech-dispatcher/speechd.sock

sthibaul commented 1 year ago

935930 ? Ssl 0:00 /usr/bin/speech-dispatcher --spawn --communication-method unix_socket --socket-path /run/user/1000/speech-dispatcher/speechd.sock

Yes, that's the one, you can use lsof to check what log file it writes to.

coderalpha commented 1 year ago

It seems it is the log files I deleted. How do I restart speech-dispatcher?

sd_generi 935927 ws2  2w  REG 0,59    3375 239 /run/user/1000/speech-dispatcher/log/piper.log (deleted)
sd_generi 935927 ws2  7w  REG 0,59 2011137 240 /run/user/1000/speech-dispatcher/log/speech-dispatcher.log (deleted)
sd_generi 935927 ws2 13w  REG 0,59    3375 239 /run/user/1000/speech-dispatcher/log/piper.log (deleted)

coderalpha commented 1 year ago

I just killed the speech-dispatcher process and executed the spd-say command, and the output was generated by piper!

I did create the speech-dispatcher.conf and modules/piper.conf in the .config/speech-dispatcher directory in my home directory.

coderalpha commented 1 year ago

Thanks, Samuel for the help. And also Jeremy for the useful information.

Elleo commented 1 year ago

@andresmessina1701 The first version of Pied is now publicly released, that can automatically set everything up for you: https://pied.mikeasoft.com/

murlakatamenka commented 1 year ago

@Elleo it is only available as a snap, right?

Though it has various build options if you compile it yourself (flatpak, appimage); see the repo:

https://github.com/Elleo/pied

Elleo commented 1 year ago

@Elleo it is only available as a snap, right?

Currently, yes; I am working on making it available via flatpak and appimage (and probably eventually as a deb too), but there are still some issues that need work with those packages.

Elleo commented 1 year ago

@Elleo great to hear, and thank you for making the process easier with a GUI application, helps a lot!

You're welcome!

carlocastoldi commented 11 months ago

For anyone wondering, this is the module for piper that I wrote. It can handle multiple languages and maps the [-100, 100] speed (=RATE) values to [0.1, 3] for sox to handle. However, it does not handle volume well: I can't lower it, only have it muted, normal, or boosted (which is useless for me).

# /etc/speech-dispatcher/modules/piper-generic.conf
Debug "1"

GenericCmdDependency "piper-tts"
GenericCmdDependency "sox"
GenericCmdDependency "jq"
GenericCmdDependency "bc"
GenericExecuteSynth \
"printf %s \'\$DATA\' \
| /opt/piper-tts/piper --model /opt/piper-tts/voices/\$VOICE.onnx --output_raw \
| sox -v 1 -r \$(jq .audio.sample_rate < /opt/piper-tts/voices/\$VOICE.onnx.json) -c 1 -b 16 -e signed-integer -t raw - -t wav - tempo \$(echo \"0.000055*\$RATE*\$RATE+0.0145*\$RATE+1\" | bc) pitch \$PITCH norm \
| \$PLAY_COMMAND"
# not using $VOLUME

AddVoice "en-us" "MALE1"    "en_US-ryan-medium"         # "en_US-ryan-high"
AddVoice "en-us" "MALE2"    "en_US-lessac-medium"       # "en_US-lessac-high"
AddVoice "en-gb" "FEMALE1"  "en_GB-jenny_dioco-medium"
AddVoice "en-us" "FEMALE2"  "en_US-amy-medium"
AddVoice "it"    "MALE1"    "it_IT-riccardo-x_low"

DefaultVoice "it_IT-riccardo-x_low"

I found that using high-quality models takes some time; I have a better experience with medium!
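As a sanity check, the tempo expression above can be mirrored in a few lines of Python (this only reproduces the arithmetic; the actual config evaluates it with bc and hands the result to sox):

```python
import math

# Mirror of the bc expression: tempo = 0.000055*RATE^2 + 0.0145*RATE + 1
# It maps speech-dispatcher's RATE in [-100, 100] onto a sox tempo in [0.1, 3].
def rate_to_tempo(rate: float) -> float:
    return 0.000055 * rate * rate + 0.0145 * rate + 1

assert math.isclose(rate_to_tempo(-100), 0.1, abs_tol=1e-6)  # slowest
assert rate_to_tempo(0) == 1.0                               # normal speed
assert math.isclose(rate_to_tempo(100), 3.0, abs_tol=1e-6)   # fastest
```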

⚠️NOTE⚠️ I banged my head for hours on why speechd couldn't open any sound device with any generic module. Similarly to @coderalpha, I kept running speech-dispatcher as a service through systemd. I have no idea why I had it in my mind that that was the "correct" way of running it. So yeah... just like it was mentioned above, I would recommend forgetting about the systemd service entirely.

sthibaul commented 11 months ago

This looks nice :)

@carlocastoldi could you try to add

VoiceFileDependency /opt/piper-tts/voices/$VOICE.onnx

to check that this correctly makes the voice list shown by spd-say -o piper-generic -L match what is available in /opt/piper-tts/voices?

tkapias commented 11 months ago

My user module config for speechd works fine; I am sharing it below.
But I don't understand how to adapt the Rate/Pitch formula; maybe someone will have an idea.

Timeout 30                                                                   
LogLevel  2                                                                  
LogDir  "default"                                                            

DefaultVolume 100                                                            
DefaultVoiceType "MALE1"                                                     
DefaultLanguage "en"                                                         
DefaultPunctuationMode "some"                                                

SymbolsPreproc "char"
SymbolsPreprocFile "gender-neutral.dic"
SymbolsPreprocFile "font-variants.dic" 
SymbolsPreprocFile "symbols.dic"   
SymbolsPreprocFile "emojis.dic"    
SymbolsPreprocFile "orca.dic"
SymbolsPreprocFile "orca-chars.dic"

DefaultCapLetRecognition  "none"
DefaultSpelling  Off

AudioOutputMethod "pulse"            
AudioPulseDevice "default"            
AudioPulseMinLength 10 

AddModule "piper"                   "sd_generic"   "piper.conf"

DefaultModule piper                                                          

LanguageDefaultModule "en"  "piper"
LanguageDefaultModule "fr"  "piper"

Include "clients/*.conf"
Debug 0

GenericExecuteSynth "printf %s \'$DATA\' | piper --length_scale 1 --sentence_silence 0 --model ~/.local/share/piper/voices/$VOICE --output-raw | aplay -r 22050 -f S16_LE -t raw -"
# only use medium quality voices to respect the 22050 rate for aplay in the command above.

GenericCmdDependency "piper"
GenericCmdDependency "aplay"
GenericCmdDependency "printf"
GenericSoundIconFolder "/usr/share/sounds/sound-icons/"

GenericPunctNone ""
GenericPunctSome "--punct=\"()<>[]{}\""
GenericPunctMost "--punct=\"()[]{};:\""
GenericPunctAll "--punct"

#GenericStripPunctChars  ""

GenericLanguage  "en" "en_US" "utf-8"
GenericLanguage  "fr" "fr_FR" "utf-8"

AddVoice        "en"    "MALE1"         "en_US-hfc_male-medium.onnx"
AddVoice        "en"    "FEMALE1"       "en_US-amy-medium.onnx"
AddVoice        "fr"    "MALE1"         "fr_FR-upmc-medium.onnx -s 1"
AddVoice        "fr"    "FEMALE1"       "fr_FR-upmc-medium.onnx"

DefaultVoice    "en_US-amy-medium.onnx"

#GenericRateForceInteger 1
#GenericRateAdd 1
#GenericRateMultiply 100

tkapias commented 11 months ago

OK, I'm still not sure how the formula works, because if you put 0 in GenericRateAdd the output becomes a float, and with 1 it becomes an integer; that's not the purpose given in the doc.

But, it works with bc.

(I don't use pitch modifications, but piper has two noise parameters if someone wants to set them.)

Debug 0

GenericExecuteSynth "printf %s \'$DATA\' | piper --length_scale \`echo \'($RATE * -0.01) + 1\' \| bc\` --sentence_silence 0 --model ~/.local/share/piper/voices/$VOICE --output-raw | aplay -r 22050 -f S16_LE -t raw -"
# only use medium quality voices to respect the 22050 rate for aplay in the command above.

GenericCmdDependency "piper"
GenericCmdDependency "aplay"
GenericCmdDependency "printf"
GenericCmdDependency "bc"
GenericSoundIconFolder "/usr/share/sounds/sound-icons/"

GenericPunctNone ""
GenericPunctSome "--punct=\"()<>[]{}\""
GenericPunctMost "--punct=\"()[]{};:\""
GenericPunctAll "--punct"

#GenericStripPunctChars  ""

GenericLanguage  "en" "en_US" "utf-8"
GenericLanguage  "fr" "fr_FR" "utf-8"

AddVoice        "en"    "MALE1"         "en_US-hfc_male-medium.onnx"
AddVoice        "en"    "FEMALE1"       "en_US-amy-medium.onnx"
AddVoice        "fr"    "MALE1"         "fr_FR-upmc-medium.onnx -s 1"
AddVoice        "fr"    "FEMALE1"       "fr_FR-upmc-medium.onnx"

DefaultVoice    "en_US-amy-medium.onnx"

# for --length_scale $RATE (default: 1.0)
#GenericRateAdd num
#GenericRateMultiply num
# for --noise_scale $PITCH (default: 0.667)
#GenericPitchAdd num
#GenericPitchMultiply num
# for --noise_w $PITCH_RANGE (default: 0.8)
#GenericPitchRangeAdd num
#GenericPitchRangeMultiply num
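For what it's worth, the bc expression above is just a linear map; mirrored in Python (note that piper's length_scale is an inverse speed, so larger means slower, and RATE=100 would drive it all the way to 0, which is probably worth clamping):

```python
import math

# Mirror of the bc expression: length_scale = (RATE * -0.01) + 1
# speech-dispatcher RATE in [-100, 100] -> piper length_scale in [0, 2].
def rate_to_length_scale(rate: float) -> float:
    return rate * -0.01 + 1

assert math.isclose(rate_to_length_scale(-100), 2.0)  # slowest: double length
assert math.isclose(rate_to_length_scale(0), 1.0)     # piper default
assert math.isclose(rate_to_length_scale(50), 0.5)    # faster
```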

omega3 commented 10 months ago

Could you please give some description, for non-technical users like me, of what to change in the config to replace:

AddVoice "en" "MALE1"

DefaultVoiceType "MALE1"

What are these values, and how can I replace them to choose a different voice? How do I find this classification for the equivalent of "MALE1", for example libritts_r medium 8699(1) or jenny_dioco or en_GB-northern_english_male-medium.onnx?

I typed piper --help and don't see any --list-voices command. I downloaded the voices from Hugging Face and so far applied them from the command line, pointing to the onnx file.

For example, in Plasma's Okular there is an option to change the voice (which also sometimes means the language). But with the proposed configuration I don't know how to make other voices available to speech dispatcher.

tkapias commented 10 months ago

Check my 2 previous comments (1, 2); you should be able to use it by modifying only these lines: LanguageDefaultModule, GenericLanguage, AddVoice, DefaultVoice.
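To make that concrete, a hypothetical example (the voice name here is just the one from your question; the .onnx file must already be downloaded into the voices directory that the GenericExecuteSynth line points at):

```
# in ~/.config/speech-dispatcher/modules/piper.conf
AddVoice "en" "MALE2" "en_GB-northern_english_male-medium.onnx"
```

Something like spd-say -o piper -t male2 "hello" should then select it, and clients such as Okular should list it among the available voices.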

RoyalOughtness commented 8 months ago

FYI, I found this app which does it all for you :smile:

https://github.com/Elleo/pied

omega3 commented 8 months ago

which does it all for you

Unfortunately not all. It changes config files every time the voice is changed, so there is no way to set the speed or other values and keep them.

And with Pied only one voice is available at a time as on option in programs like Calibre or Okular.

Piper still needs good speech dispatcher support like Festival or espeak have.

Elleo commented 8 months ago

Unfortunately not all. It changes config files every time voice is changed, so there is no way to set speed or other values and keep it.

Just as a side-note, if you have sox installed then Pied 0.2 now supports speech-dispatcher's dynamic rate and pitch settings at runtime

sthibaul commented 8 months ago

It changes config files every time voice is changed

Which is really not the way speech-dispatcher works. The piper module should just expose all the voices that are available, just like e.g. espeak-ng-mbrola-generic.conf does.

KAGEYAM4 commented 6 months ago

Can someone share a working config? My config, which I got from https://aur.archlinux.org/cgit/aur.git/tree/piper-generic.conf?h=piper-voices-common, and also the config generated by Pied, had long pauses (2-3 seconds) between sentences.

I found this black magic - GenericDelimiters "˨" - from https://github.com/ken107/read-aloud/issues/375#issuecomment-1937517761, which fixed it. But now, after fixing that, I realise paragraphs also have a 2-3 second pause.

Edit - I asked in the read-aloud repo, and they said:

I'm guessing the 2-3 second pause you're experiencing is the time it takes to synthesize the next sentence. Our implementation deals with that by pre-synthesizing the next sentence while the current sentence is being spoken. Your tool will need to support this 'prefetching' strategy.

Any idea on how to do prefetch?