Closed: biofoolgreen closed this issue 2 years ago
Can you tell me what exactly the bug is?
@dathudeptrai The issue is that my loss curve looks good, but when I do inference with the trained weights, the result is so bad that it gives me pure noise. The experiments were not run on GPU but on other hardware (a Graphcore IPU). So I want to check whether all the weights/gradients/layer outputs on the IPU are identical to those on GPU. I believe this is more likely a hardware-related bug than a bug in this repo, but I don't know how to get what I need here.
The mel-spectrogram I got looks like this:
Any suggestions?
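One way to do the IPU-vs-GPU check is to dump each layer's weights from both runs as numpy arrays and diff them. A minimal sketch, not part of this repo; the dict-of-arrays dump format and the layer names are assumptions for illustration:

```python
import numpy as np

def compare_weight_dumps(weights_a, weights_b, atol=1e-5):
    """Report which layers' weights diverge between two runs.

    weights_a / weights_b: dicts mapping layer name -> np.ndarray,
    e.g. built from model.get_weights() on each piece of hardware.
    """
    mismatched = []
    for name in sorted(weights_a):
        a, b = weights_a[name], weights_b.get(name)
        if b is None or not np.allclose(a, b, atol=atol):
            mismatched.append(name)
    return mismatched

# Toy usage: identical dumps produce no mismatches.
w_gpu = {"dense/kernel": np.ones((4, 4)), "dense/bias": np.zeros(4)}
w_ipu = {"dense/kernel": np.ones((4, 4)), "dense/bias": np.zeros(4)}
print(compare_weight_dumps(w_gpu, w_ipu))  # -> []
```

The same function works on gradients or layer outputs, as long as both runs save them under matching names.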
@biofoolgreen Are the eval mels good?
@dathudeptrai See the mels above. Even when I use a training sample, the result is bad too.
@biofoolgreen can you share your tensorboard ?
@dathudeptrai Sure! Here is the tensorboard zip file. tensorboard.zip
@biofoolgreen Were you able to get the outputs of intermediate layers (feature maps)? I'd like to get them too, just to observe.
We can do something like this for Multi-band MelGAN:

```python
intermediate_layer_index = 20  # index of the layer we want to inspect
feature_extractor = mb_melgan.get_layer(index=0)
inter_output_model = tf.keras.Model(
    feature_extractor.input,
    feature_extractor.get_layer(index=intermediate_layer_index).output,
)
feat = mel_spectrogram  # (1, #frames, 80)
intermediate_output = inter_output_model(feat)  # (1, #frames, #feat_dimension)
```
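To grab every layer's output at once (e.g. to dump all activations for an IPU-vs-GPU comparison), the same trick generalizes to a model with one output per layer. A toy sketch with a stand-in functional model, not the repo's actual MelGAN; the layer names are illustrative only:

```python
import numpy as np
import tensorflow as tf

# Stand-in model for illustration; in practice this would be the
# TensorFlowTTS generator.
inp = tf.keras.Input(shape=(4,))
hidden = tf.keras.layers.Dense(8, activation="relu", name="hidden")(inp)
out = tf.keras.layers.Dense(2, name="out")(hidden)

# Probe model with one output per layer: a single forward pass
# yields every feature map, ready to save and diff across hardware.
probe = tf.keras.Model(inputs=inp, outputs=[hidden, out])

feats = probe(np.ones((1, 4), dtype=np.float32))
dumps = {name: t.numpy() for name, t in zip(["hidden", "out"], feats)}
for name, arr in dumps.items():
    print(name, arr.shape)
```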
Hi, I'm debugging a numerical issue with FastSpeech2. I'd like to print/save the weights/gradients/outputs of each layer. I've tried adding a logger to the `_calculate_gradient_per_batch` function in `base_trainer.py`, but I only got symbolic tensors, not numpy values. How can I get the intermediate weights/gradients of each layer while training? I'd really appreciate any convenient way to debug numerical issues. Thanks in advance.
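On the tensors-vs-numpy problem: inside a `@tf.function`-compiled training step you only see symbolic tensors, so `.numpy()` is unavailable there; you can either use `tf.print` inside the compiled function, force eager execution with `tf.config.run_functions_eagerly(True)`, or compute the gradients in a plain eager `tf.GradientTape`, where they come back as EagerTensors you can convert. A minimal sketch with a toy model, not the repo's trainer:

```python
import tensorflow as tf

# Toy model standing in for FastSpeech2 (illustrative only).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(2),
])

x = tf.ones((1, 3))
y = tf.zeros((1, 2))

# In eager mode, GradientTape returns EagerTensors, so .numpy()
# works and the gradients can be printed or saved directly.
with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))
grads = tape.gradient(loss, model.trainable_variables)

for var, g in zip(model.trainable_variables, grads):
    print(var.name, g.numpy().shape)
```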