CjangCjengh / MoeGoe

Executable file for VITS inference
MIT License

Project dependencies may have API risk issues #19

Open · PyDeps opened this issue 1 year ago

PyDeps commented 1 year ago

Hi. In MoeGoe, inappropriate dependency version constraints can introduce risks.

Below are the dependencies and version constraints that the project is using:

numba
librosa
numpy==1.22.4
scipy
torch
Unidecode
openjtalk>=0.3.0.dev2
jamo

The version constraint `==` introduces a risk of dependency conflicts, because the allowed version range is too strict. Constraints with no upper bound, or `*`, introduce a risk of missing-API errors, because the latest versions of those dependencies may remove APIs the project calls.
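To illustrate the three constraint styles in a requirements file (a sketch for illustration only; the bounded range on librosa below is an example, not the project's actual pin):

```
numpy==1.22.4        # exact pin: reproducible, but conflicts with anything needing another numpy
scipy                # no upper bound: a future release may remove an API this project calls
librosa>=0.9,<0.10   # bounded range: allows updates within 0.9.x while excluding breaking releases
```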

After further analysis of this project, the version constraint of dependency numba can be changed to >=0.55.0rc1,<=0.55.2; the constraint of librosa to >=0.2.0,<=0.3.1; of numpy to >=1.16.0rc1,<=1.18.5; and of Unidecode to >=0.04.1,<=0.04.18.

These suggested modifications minimize dependency conflicts while still admitting the newest versions that do not trigger call errors in the project.
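Applied to the dependency list above, the suggestion amounts to something like the following sketch (only the four analyzed packages gain bounded ranges; the other lines are unchanged, and the exact file layout is an assumption):

```
numba>=0.55.0rc1,<=0.55.2
librosa>=0.2.0,<=0.3.1
numpy>=1.16.0rc1,<=1.18.5
scipy
torch
Unidecode>=0.04.1,<=0.04.18
openjtalk>=0.3.0.dev2
jamo
```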

The project's code invokes all of the following methods.

The calling methods from numba:
d
numba.jit
The calling methods from librosa:
librosa.load
librosa.resample
The calling methods from numpy:
c
The calling methods from Unidecode:
unidecode.unidecode
All methods called across the project:
torch.split
math.sqrt
torch.cosine_similarity
self.encdec_attn_layers.append
self.padding
i.self.convs_1x1
neg_cent.data.cpu
self.block_length.scores.torch.ones_like.triu.tril
torch.nn.functional.pad.gather
self.drop
torch.ceil
text._clean_text
HubertSoft
self.positional_embedding
self.encode
self.convs2.apply
win_size.torch.hann_window.to
self.conv3
dynamic_range_decompression_torch
torch.multinomial
torch.nn.ModuleList
symbols_to_japanese
float.group
torch.zeros
self.label_embedding.weight.unsqueeze.unsqueeze
DiscriminatorP
HubertSoft.load_state_dict
mask.sum.data.cpu.numpy
logs_p.transpose.attn.squeeze.torch.matmul.transpose.transpose
mask.sum.data.cpu
self.conv4
j.self.num_kernels.i.self.resblocks
torch.flip
rational_quadratic_spline
latin_to_hangul
i.self.res_skip_layers
zip
self.__dict__.items
DDSConv
length_scale.noise_scale_w.noise_scale.sid.x_tst_lengths.x_tst.net_g_ms.infer.data.cpu
t_t.self.k_channels.self.n_heads.b.query.view.transpose.size
HubertSoft.eval
b.pow.all
torch.FloatTensor.size
decimals.split
mask.sum
self.emb
torch.ones
self.conv_layers.append
i.self.norm_layers_1
text.text_to_sequence
numpy.interp
text.mandarin.chinese_to_romaji
self.attention
hubert_model.hubert_soft
modifiers.split
y.squeeze.squeeze
mel.torch.from_numpy.to
torch.nn.Linear
utils.load_audio_to_torch
y_d_gs.append
japanese_text.japanese_to_romaji_with_accent.replace
self.norm_layers_2.append
re.match
text.mandarin.latin_to_bopomofo
librosa.load
text.korean.korean_to_lazy_ipa
numpy.zeros
DurationPredictor
self.logits
torch.flatten.view
print
torch.unsqueeze.squeeze
japanese_to_romaji_with_accent
cn2an.an2cn
convert_pad_shape
torch.log1p
zip.get
text.korean.number_to_hangul
self.ResBlock2.super.__init__
self.post_proj
w_ceil.torch.sum.torch.clamp_min.long
self.conv5
torch.abs
indic_transliteration.sanscript.transliterate
torch.cat.transpose
self.dropout
torch.no_grad
modules.Flip
i.marks.unidecode.replace
self.cond_layer
torch.ones_like
torch.FloatTensor.uniform_
torch.tril
float
modules.DDSConv
japanese_text.japanese_to_romaji_with_accent.replace.replace.replace
delta.gather.pow
torch.nn.Conv1d
latin_to_bopomofo
torch.inference_mode
audio.astype
self.convs
pyopenjtalk.extract_fullcontext
set
mask.sum.data.cpu.numpy.astype
torch.nn.init.xavier_uniform_
logging.info
sid_tgt.sid_src.spec_lengths.spec.net_g_ms.voice_conversion.data.cpu
number_to_hangul
modules.ConvFlow
f.read
commons.generate_path
self.post.weight.data.zero_
output.transpose.contiguous
path.from_numpy.to
ko_pron.romanise
fmap_rs.append
ResidualCouplingBlock
t_s.self._attention_bias_proximal.to
torch.nn.functional.layer_norm
x.size.torch.zeros.to
modules.WN
torch.nn.functional.softplus
self.flow.transpose
self.norms_2.append
audio16000.FloatTensor.unsqueeze
re.search
self.conv_2
self.proj.weight.data.zero_
fmap_gs.append
y_lengths.commons.sequence_mask.torch.unsqueeze.to
t_s.self.k_channels.self.n_heads.b.key.view.transpose.size
i.self.norm_layers
re.split
torch.log
n.labels.re.search.group
digit2name.get.replace
range
i.self.convs_sep
re.sub
torch.clamp_min
spline_fn
torch.stft
torch.nn.functional.gelu.transpose
self.label_embedding.weight.unsqueeze
commons.rand_slice_segments
self.conv_pre
numba.jit
commons.get_padding
reversed
torch.sqrt.pow
super
model.module.state_dict
length_scale.noise_scale_w.noise_scale.sid.x_tst_lengths.x_tst.net_g_ms.infer.data.float.numpy.unsqueeze
self.conv_k
TransformerEncoder
attentions.Encoder
int
torch.IntTensor
model.load_state_dict
self.conv_1
self.enc_p
numba.void
torch.nn.utils.weight_norm
i.self.self_attn_layers
logging.getLogger.setLevel
audio16000.FloatTensor.unsqueeze.unsqueeze.hubert.units.squeeze
i.self.encdec_attn_layers
self.__dict__.__repr__
torch.nn.Conv2d
numpy.arange
modules.ElementwiseAffine
str
slice_segments
neg_cent.data.cpu.numpy
os.path.dirname
self.conv_post
copy.deepcopy
sid_src.self.emb_g.unsqueeze
torch.nn.functional.leaky_relu
re.findall
enumerate.keys
self.__dict__.keys
torch.nn.functional.gelu.size
logs_p.transpose.attn.squeeze.torch.matmul.transpose
StochasticDurationPredictor
self.post.bias.data.zero_
commons.fused_add_tanh_sigmoid_multiply
y.squeeze.unsqueeze
model.state_dict
i.self.norm_layers_2
audio16000.FloatTensor.unsqueeze.unsqueeze.hubert.units.squeeze.numpy
HParams
torch.nn.Parameter
label.re.search.group
t_s.self.k_channels.self.n_heads.b.value.view.transpose
text.mandarin.chinese_to_bopomofo
self._get_relative_embeddings
hangul_number.append
torch.nn.functional.softplus.gather
_compute_mask
torch.FloatTensor
length_scale.noise_scale_w.noise_scale.sid.x_tst_lengths.x_tst.net_g_ms.infer.data.float.numpy
c
modules.ResidualCouplingLayer
numpy.log
self.dec
label.re.search.group.replace
hangul_number
text.japanese_cleaners.replace.replace
FeatureProjection
get_label_value
max
commons.sequence_mask
torch.nn.ConvTranspose1d
ask_if_continue
self.proj
i.self.norms_1
self.attn_layers.append
hps_ms.data.keys
torch.nn.functional.logsigmoid
torch.cumsum.view
librosa.resample
self.mask
get_real_sokuon
self.ffn_layers.append
self.convs1.apply
FFN
t_s.self.k_channels.self.n_heads.b.key.view.transpose
bopomofo_to_romaji
torch.clamp
hubert_model.hubert_soft.units
text.japanese.japanese_to_ipa
device.mask_length.torch.arange.expand
i.self.ups
h_mask.unsqueeze
self.convs_1x1.append
torch.zeros_like
text.mandarin.chinese_to_lazy_ipa
output.transpose.contiguous.view
torch.matmul
spectral_normalize_torch.pow
getattr
jieba.set_dictionary
commons.generate_path.sum
path.unsqueeze.transpose
self.convs_sep.append
self._matmul_with_relative_values
x.group
numpy.exp
self.res_skip_layers.append
torch.nn.ReLU
torch.nn.Sequential
input
unidecode.unidecode
commons.generate_path.squeeze
text.japanese.japanese_to_romaji_with_accent
torch.nn.Embedding
self.resblocks.append
models.SynthesizerTrn.voice_conversion
sys.exit
get_text
d
get_speaker_id
scores.masked_fill.masked_fill
text.replace.replace
print_speakers
torch.cat
i.self.ffn_layers
self.Generator.super.__init__
format
self.convs.apply
self.conv_v
torch.nn.GroupNorm
torch.transpose
torch.max
models.SynthesizerTrn.infer
join
setattr
l.remove_weight_norm
self.enc
path.unsqueeze.transpose.view
self.norm_1
torch.nn.functional.gelu
length.length.torch.ones.torch.tril.unsqueeze
text.mandarin.number_to_chinese
self.conv6
self.ResBlock1.super.__init__
self.post_convs
torch.cat.size
output.transpose.contiguous.view.transpose
torch.nn.init.normal_
jamo.j2hcj
fmap.append
scipy.io.wavfile.write
length.length.torch.ones.torch.tril.unsqueeze.unsqueeze
torch.nn.Dropout
self.conv_k.weight.copy_
self.__dict__.values
t_t.self.k_channels.self.n_heads.b.query.view.transpose.view
jieba.lcut
length_scale.noise_scale_w.noise_scale.sid.x_tst_lengths.x_tst.net_g_ms.infer.data.cpu.float.numpy
self.in_layers.append
LayerNorm
t.c.b.h.reshape.permute.reshape
theta.pow
self.conv_o
torch.sigmoid
PositionalConvEmbedding
open
torch.unsqueeze
min
torch.cumsum
self.post_flows.append
self.feature_projection
torch.arange.size
random.random
text.x.group.x.group.re.sub.replace
i.self.norms_2
model.state_dict.items
phoneme.replace.replace.replace
sequence_mask
self.cond
self.dp
commons.convert_pad_shape
attn_mask.squeeze.neg_cent.monotonic_align.maximum_path.unsqueeze
m.weight.data.normal_
torch.rand
phoneme.replace.replace
y_d_rs.append
self.drop.unsqueeze
torch.flatten
self.feature_extractor
type
torch.randn_like
l
length.max
self.conv2
b.torch.rand.to
spec.pow.sum
core.maximum_path_jit
monotonic_align.maximum_path
chinese_to_bopomofo
self.norm_layers.append
self.ups.apply
sid_tgt.self.emb_g.unsqueeze
text.x.group.ko_pron.romanise.re.sub.split
self.norm0
i.self.norm_layers_0
text.korean.divide_hangul
json.loads
self.conv0
m_p.transpose.attn.squeeze.torch.matmul.transpose.transpose
len
self.masked_spec_embed.to
DiscriminatorS
commons.subsequent_mask
self._matmul_with_relative_keys
self.enc_q
japanese_cleaners
length_scale.noise_scale_w.noise_scale.sid.x_tst_lengths.x_tst.net_g_ms.infer.data.float
self.flow
self.DiscriminatorS.super.__init__
x_mask.size
list
sid_tgt.sid_src.spec_lengths.spec.net_g_ms.voice_conversion.data.cpu.float
self.norm_layers_1.append
neg_cent.data.cpu.numpy.astype
ValueError
torch.nn.functional.gelu.unsqueeze
enumerate
utils.get_hparams_from_file
x.size.x.size.torch.randn.to
attn_mask.squeeze.neg_cent.monotonic_align.maximum_path.unsqueeze.detach
logging.getLogger
self.post_pre
super.__init__
torch.nn.TransformerEncoderLayer
torch.flatten.size
delta.gather
layer
text.sanskrit.devanagari_to_ipa
model.module.load_state_dict
librosa.pyin
WN
self.WN.super.__init__
torch.load
torch.nn.utils.remove_weight_norm
self.self_attn_layers.append
hasattr
x_mask.unsqueeze
m_p.transpose.attn.squeeze.torch.matmul.transpose
self._absolute_position_to_relative_position
c1
ids_str_max.x.device.b.torch.rand.to.to
self.ups.append
self.conv1
i.self.conv_layers
mask_indices.unsqueeze.expand
torch.exp.size
utils.load_checkpoint
torch.detach
torch.nn.functional.softmax
text.korean.latin_to_hangul
self.proj.bias.data.zero_
_real_sokuon.keys
x_mask.size.commons.subsequent_mask.to
text.replace.replace.replace
self.flows.append
torch.from_numpy
self.projection
FeatureExtractor
torch.LongTensor
sid.self.emb_g.unsqueeze
japanese_text.japanese_to_romaji_with_accent.replace.replace
pypinyin.lazy_pinyin
self.relu_drop
torch.randn
text.japanese_cleaners.replace
numpy.nan_to_num
_clean_text
chinese_to_romaji
self._attention_bias_proximal
input.replace
length.unsqueeze
self.TransformerEncoder.super.__init__
librosa.note_to_hz
transforms.piecewise_rational_quadratic_transform
self.pre
Exception
torch.nn.functional.pad.size
scores.torch.ones_like.triu
mask_length.num_masked_spans.batch_size.device.mask_length.torch.arange.expand.reshape
i.self.attn_layers
x.size.x_lengths.commons.sequence_mask.torch.unsqueeze.to
cleaner
self.norm_2
self.norm
torch.exp
jamo.h2j
commons.intersperse
torch.hann_window
flow
self.conv_k.bias.copy_
self.post
classname.find
PosteriorEncoder
torch.nn.functional.pad.view
b.pow
models.SynthesizerTrn
math.log
torch.nn.modules.utils.consume_prefix_in_state_dict_if_present
resblock
torch.sum
TextEncoder
self.norms_1.append
librosa.filters.mel
utils.get_hparams_from_file.keys
modules.Log
torch.FloatTensor.unsqueeze
torch.nn.functional.pad
modules.LayerNorm
self.conv_q
y.unsqueeze.transpose
re.compile
self.conv
self.MultiPeriodDiscriminator.super.__init__
mask_length.num_masked_spans.batch_size.mask_indices.unsqueeze.expand.reshape.unsqueeze
models.SynthesizerTrn.eval
torch.relu
sid_tgt.sid_src.spec_lengths.spec.net_g_ms.voice_conversion.data.cpu.float.numpy
length_scale.noise_scale_w.noise_scale.sid.x_tst_lengths.x_tst.net_g_ms.infer.data.cpu.float
mask.scatter.scatter
self.emb_g
searchsorted
torch.min
w.size.w.size.torch.randn.to
t.c.b.h.reshape.permute
number_to_chinese
spectral_normalize_torch
kwargs.items
torch.nn.functional.softmax.gather
torch.arange.unsqueeze
t_s.self.k_channels.self.n_heads.b.value.view.transpose.view
t_s.self.k_channels.self.n_heads.b.key.view.transpose.transpose
dynamic_range_compression_torch
self._relative_position_to_absolute_position
torch.tanh
torch.sqrt
t_s.self.k_channels.self.n_heads.b.key.view.transpose.view
audio16000.FloatTensor.unsqueeze.unsqueeze
i.self.in_layers
_real_hatsuon.keys
torch.nn.LayerNorm
self.DiscriminatorP.super.__init__
re.sub.replace
MultiHeadAttention
path.unsqueeze.transpose.unsqueeze
Generator
t_t.self.k_channels.self.n_heads.b.query.view.transpose
mel_processing.spectrogram_torch.size
self.norm_layers_0.append
get_real_hatsuon
c2
torch.arange
jieba.initialize
get_label
RuntimeError
self.encoder
t_y.cum_duration_flat.sequence_mask.to
mel_processing.spectrogram_torch
mask_length.num_masked_spans.batch_size.mask_indices.unsqueeze.expand.reshape
self.log_flow
norm_f

@developer Could you please help me check this issue? May I open a pull request to fix it? Thank you very much.

fumiama commented 1 year ago

numpy==1.22.4 is required by openjtalk because openjtalk is compiled against exactly that version. You can use a venv to avoid damaging OS-level libraries.
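For anyone following along, a minimal sketch of that venv approach using standard Python tooling (assuming the pinned dependencies live in the repository's requirements.txt):

```
python -m venv .venv
source .venv/bin/activate         # on Windows: .venv\Scripts\activate
pip install -r requirements.txt   # installs numpy==1.22.4 inside the venv only
```

Because the pinned numpy is installed into the isolated environment, the system-wide site-packages stay untouched.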