openvpi / DiffSinger

An advanced singing voice synthesis system with high fidelity, expressiveness, controllability and flexibility based on DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism
Apache License 2.0
2.62k stars · 275 forks

Inference from Raw Input #165

Closed Tox1cPhantom closed 3 months ago

Tox1cPhantom commented 5 months ago

This might not be related to this repo: I was using the original DiffSinger, and since you are maintaining this fork, I thought you might be able to help with inference from raw input on English words.

I'm trying to run inference, but with English it keeps saying either that the notes need to be separated with | or that the notes don't align with the number of words. Is there any way to fix that?

This is according to the README.md from the original repo:

```python
inp = {
    'text': '小酒窝长睫毛AP是你最美的记号',
    'notes': 'C#4/Db4 | F#4/Gb4 | G#4/Ab4 | A#4/Bb4 F#4/Gb4 | F#4/Gb4 C#4/Db4 | C#4/Db4 | rest | C#4/Db4 | A#4/Bb4 | G#4/Ab4 | A#4/Bb4 | G#4/Ab4 | F4 | C#4/Db4',
    'notes_duration': '0.407140 | 0.376190 | 0.242180 | 0.509550 0.183420 | 0.315400 0.235020 | 0.361660 | 0.223070 | 0.377270 | 0.340550 | 0.299620 | 0.344510 | 0.283770 | 0.323390 | 0.360340',
    'input_type': 'word'
}
```

And this is the input I'm trying to run inference on:

```python
inp = {
    'text': 'I paid my dues Time after times I done my sentences but committed no crime',
    'notes': 'C4 | A3 | C4 | E4 | C4 | B3 | A3 | E4 | D4 | C4 | G4 | B4 | C5 | D5 | E5',
    'notes_duration': '0.25 | 0.25 | 1.5 | 2.0 | 0.25 | 1.75 | 2.0 | 0.25 | 0.25 | 1.5 | 2.0 | 0.375 | 0.25 | 1.375 | 0.875',
    'input_type': 'word'
}
```
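As a standalone sanity check (this is not the repo's own validation code), the pipe-separated `notes` and `notes_duration` fields can be counted against the number of lyric units. Splitting the English line by whitespace, all three counts do line up, which suggests the alignment error comes from the original frontend segmenting lyrics per Chinese character rather than from the input itself:

```python
# Sanity check: do the '|'-separated note/duration groups match the word count?
# This mirrors the alignment rule described in the README ('word' input type),
# not the original repo's internal code.

def count_units(field: str) -> int:
    """Count '|'-separated groups (a group may hold several slurred notes)."""
    return len([g for g in field.split('|') if g.strip()])

text = 'I paid my dues Time after times I done my sentences but committed no crime'
notes = 'C4 | A3 | C4 | E4 | C4 | B3 | A3 | E4 | D4 | C4 | G4 | B4 | C5 | D5 | E5'
durs = ('0.25 | 0.25 | 1.5 | 2.0 | 0.25 | 1.75 | 2.0 | 0.25 | 0.25 | 1.5 | '
        '2.0 | 0.375 | 0.25 | 1.375 | 0.875')

n_words = len(text.split())   # 15 words when split by whitespace
n_notes = count_units(notes)  # 15 note groups
n_durs = count_units(durs)    # 15 duration groups
print(n_words, n_notes, n_durs)
```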

yqzhishen commented 5 months ago

The pretrained model provided by the original DiffSinger repo is Chinese-only and cannot sing English. Also, as far as I know, the code of the original DiffSinger is not compatible with languages like English; it only suits two-phase (initial + final) phoneme systems like Chinese's. Many things are hard-coded and cannot be changed easily :(.
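The incompatibility can be illustrated with invented data (the phone symbols below are ARPAbet-style placeholders, not from the repo): every Mandarin syllable decomposes into at most an initial plus a final, so a frontend can hard-code a two-slot layout per character, while English words need variable-length phone sequences.

```python
# Hypothetical illustration of the "two-phase phoneme system" assumption.
# Mandarin: each syllable is (initial, final); the initial may be empty.
mandarin = {
    'xiao': ('x', 'iao'),
    'jiu': ('j', 'iu'),
    'wo': ('', 'uo'),  # zero-initial syllable still fits the two-slot shape
}

# English: words map to phone sequences of arbitrary length (ARPAbet-style
# symbols, purely for illustration), so the fixed two-slot assumption breaks.
english = {
    'paid': ['p', 'ey', 'd'],
    'dues': ['d', 'uw', 'z'],
    'crime': ['k', 'r', 'ay', 'm'],
}

assert all(len(pair) == 2 for pair in mandarin.values())
assert any(len(phones) > 2 for phones in english.values())
```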

Tox1cPhantom commented 5 months ago

Thanks for the reply. I assume English will not be compatible with the fork you are maintaining either, right? Also, do you know of any other tool that supports English and can generate singing given notes and note durations aligned with lyrics? So far, everything I've tested seems to be heavily focused on Chinese, Japanese, or Korean.

yqzhishen commented 5 months ago

This repo supports any language. You can find documentation for the voicebank-making process.

Tox1cPhantom commented 5 months ago

Oh, I see. A few things I would like some clarity on:

I am planning to use all of this via the command line, which is why I'm asking all of this. Thanks in advance for the help!

yqzhishen commented 4 months ago
  1. There are no test models; you need to train one yourself, or you can ask people from the English voicebank developing community.
  2. Field meanings:
     - offset: the start position of each segment, in seconds
     - text: not in use now
     - ph_seq: phoneme sequence
     - ph_dur: phoneme duration sequence, in seconds
     - ph_num: the number of phonemes in each word/syllable (each word/syllable usually starts with an onset vowel)
     - note_seq: note name sequence
     - note_dur: note duration sequence, in seconds
     - note_slur: whether each note is a slur (1) or not (0); notes that share the same word/syllable with other notes are slurs
     - f0_seq: F0 sequence, in Hz
     - f0_timestep: the interval between two neighboring F0 curve points, in seconds

     For the alignment method see https://github.com/openvpi/MakeDiffSinger/tree/main/variance-temp-solution#4-estimate-note-values.
  3. There is a method called get_pitch_parselmouth in utils/binarizer_utils.py that you can use to extract F0.
  4. The difference between the two models is shown in the image in the README.
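Pulling the field definitions from point 2 together, a minimal segment might look like the sketch below. All values are invented for illustration (real segments come from labeling and alignment tools), but the structural constraints implied by the definitions can be checked directly: phoneme counts must be consistent, note-related lists must share a length, and the F0 curve's time axis is recovered from f0_timestep.

```python
# Hypothetical one-word segment using the field names described above.
# Values are made up; 'text' is listed as unused by current code.
segment = {
    "offset": 0.0,                     # segment start, seconds
    "text": "paid",                    # not in use now
    "ph_seq": ["p", "ey", "d"],        # phoneme sequence
    "ph_dur": [0.08, 0.30, 0.12],      # per-phoneme durations, seconds
    "ph_num": [3],                     # one word containing three phonemes
    "note_seq": ["A3", "C4"],          # note names
    "note_dur": [0.25, 0.25],          # note durations, seconds
    "note_slur": [0, 1],               # second note shares the word -> slur
    "f0_seq": [220.0, 221.5, 223.0, 224.0],  # F0 curve, Hz
    "f0_timestep": 0.005,              # seconds between F0 points
}

# Consistency checks implied by the field definitions:
assert len(segment["ph_seq"]) == len(segment["ph_dur"]) == sum(segment["ph_num"])
assert len(segment["note_seq"]) == len(segment["note_dur"]) == len(segment["note_slur"])

# Recover the F0 curve's time axis from f0_timestep:
f0_times = [i * segment["f0_timestep"] for i in range(len(segment["f0_seq"]))]
```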