jzi040941 / PercepNet

Unofficial implementation of PercepNet: A Perceptually-Motivated Approach for Low-Complexity, Real-Time Enhancement of Fullband Speech
BSD 3-Clause "New" or "Revised" License
325 stars 91 forks

Am I using dump_percepnet.py right? #10

Closed xyx361100238 closed 3 years ago

xyx361100238 commented 3 years ago

Dear Noah, thanks for sharing. Following #4, I finished training with the sample data and got the model file (model.pt, 30.3 MB).

Q1: Is the file size correct?

I then ran:

python3 ./dump_percepnet.py model.pt tmpList/a.c

and got:

printing layer fc
weight: Parameter containing:
tensor([[-5.8342e-02,  8.4117e-02, -1.8991e-02,  ..., -1.0439e-01, -3.0405e-02,  3.7125e-02],
        [-8.3928e-02, -8.2344e-02, -9.2069e-02,  ...,  1.8947e-02, -1.1299e-01, -6.5784e-02],
        [ 3.6998e-02,  8.9760e-02,  1.7038e-02,  ...,  5.5876e-02,  8.1813e-02,  1.0908e-01],
        ...,
        [-2.4296e-02, -1.0941e-02, -7.2806e-02,  ...,  1.5993e-02, -5.7701e-02, -1.0907e-01],
        [-3.3082e-02, -9.1393e-02, -1.0323e-01,  ..., -9.3106e-02,  7.7872e-02, -8.4516e-02],
        [-3.9096e-02,  5.6298e-02, -4.1803e-02,  ..., -5.2403e-02, -4.0629e-02,  2.0898e-05]], requires_grad=True)
printing layer conv1
printing layer conv2
printing layer gru1
printing layer gru2
printing layer gru3
printing layer gru_gb
printing layer gru_rb
printing layer fc_gb
weight: Parameter containing:
tensor([[-0.0119, -0.0091,  0.0048,  ..., -0.0063,  0.0110, -0.0173],
        [-0.0055,  0.0052, -0.0083,  ..., -0.0027,  0.0184, -0.0007],
        [ 0.0111,  0.0031,  0.0160,  ..., -0.0148,  0.0004,  0.0086],
        ...,
        [-0.0202,  0.0177,  0.0110,  ..., -0.0202,  0.0173,  0.0023],
        [-0.0017, -0.0150, -0.0045,  ...,  0.0106,  0.0158,  0.0015],
        [-0.0185,  0.0009,  0.0129,  ...,  0.0045,  0.0028,  0.0105]], requires_grad=True)
printing layer fc_rb
weight: Parameter containing:
tensor([[ 0.0813, -0.0757,  0.0472,  ...,  0.0742, -0.0321,  0.0692],
        [ 0.0574,  0.0049,  0.0802,  ...,  0.0282,  0.0149,  0.0733],
        [ 0.0457,  0.0489, -0.0813,  ...,  0.0040,  0.0310,  0.0222],
        ...,
        [ 0.0067, -0.0674,  0.0267,  ..., -0.0824,  0.0025,  0.0248],
        [-0.0164, -0.0548,  0.0088,  ...,  0.0619, -0.0342,  0.0319],
        [ 0.0752,  0.0771,  0.0405,  ...,  0.0106, -0.0278,  0.0479]], requires_grad=True)

Q2: Does this mean it succeeded?

I got a.c (178 MB), but the file is not finished. It begins with:

/* This file is automatically generated from a Pytorch model */

#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

#include "nnet.h"
#include "nnet_data.h"

static const float fc_weights[8960] = { …… const DenseLayer fc_gb = { fc_gb_bias, fc_gb_weights, 2560, 34, ACTIVATION_SIGMOID };

static const float fc_rb_weights[4352] = { …… and then the file ends with no closing '}'.

Q3: Is this a bug in dump_percepnet.py, or an error in my data or training process?

Hope to get your reply, thanks!

xyx361100238 commented 3 years ago

The error behind Q3 (nnet_data.c is not finished): dump_percepnet.py needs to close the output file. Fix: add f.close() after the dump_data call.
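The fix can be sketched like this (a minimal stand-in, not the real dump_percepnet.py; `dump_data` here is a placeholder for the actual weight-dumping code):

```python
import os
import tempfile

def dump_data(f):
    # placeholder for the real weight-dumping code in dump_percepnet.py
    f.write("static const float fc_rb_weights[4] = {\n")
    f.write("0.1f, 0.2f, 0.3f, 0.4f,\n")
    f.write("};\n")

path = os.path.join(tempfile.mkdtemp(), "nnet_data.c")
f = open(path, "w")
f.write("/* This file is automatically generated from a Pytorch model */\n")
dump_data(f)
f.close()  # the missing call: flushes Python's write buffers so the whole file lands on disk

with open(path) as check:
    generated = check.read()
```

Using `with open(path, "w") as f:` instead would close (and flush) the file automatically, even on errors.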

jzi040941 commented 3 years ago

Hi @xyx361100238, A1: Yes, your model file size is the same as mine.

A2: Yes, it means it's done. It was a bit confusing for users to tell whether it had finished, so I added code to print "done" at the end of the process. Thanks.

A3: With your solution (adding f.close() at the end of dump_percepnet.py) it now works correctly! Thanks for your contribution!

jzi040941 commented 3 years ago

To add more info: dump_percepnet.py and nnet.c are still a work in progress. The C++ DNN does not work properly yet, even if you dump the torch model with dump_percepnet.py. I think this is because Keras and PyTorch differ in how they lay out saved weight dimensions.
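If the mismatch is in the dense layers, it would look roughly like this (an assumption about the cause, with numpy arrays standing in for the real weights): PyTorch's nn.Linear stores its weight as (out_features, in_features), while a Keras Dense kernel is stored as (input_dim, units), so dump code written against the Keras/RNNoise layout needs a transpose before flattening into the C array.

```python
import numpy as np

# PyTorch layout: (out_features=3, in_features=2)
torch_weight = np.arange(6, dtype=np.float32).reshape(3, 2)

# Keras layout: (input_dim=2, units=3) -- what RNNoise-style dump code expects
keras_kernel = torch_weight.T

# Flatten in row-major order to emit into the C weight array
flat_for_c = keras_kernel.flatten()
```

Without the transpose, the C code would read the weights in the wrong order even though every value is present.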

Please open an issue or pull request if you find any errors or solutions. Thanks!

xyx361100238 commented 3 years ago

Yes, sure, and thanks for your efforts. I have read through the overall structure of the project; if the 'compute_conv1d' API is correct, I think you will soon finish the 'DNNModel c++ implementation' item on the list.

xyx361100238 commented 3 years ago

Actually, I'm confused about #11 too: even though the loss keeps decreasing, it is still too large. Is this related to the loss function?

jzi040941 commented 3 years ago

Yes, I think it's related to the loss function I wrote. Predefined PyTorch loss functions usually normalize, but I used a sum for my loss. That may be one reason it's so large.
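For example, torch.nn.MSELoss defaults to reduction='mean', dividing by the number of elements, while a hand-rolled sum of squared errors grows with batch and feature size (numpy used here as a stand-in for the tensors):

```python
import numpy as np

pred = np.array([0.1, 0.4, 0.2, 0.9])
target = np.array([0.0, 0.5, 0.0, 1.0])

sq_err = (pred - target) ** 2
sum_loss = sq_err.sum()    # scales with the number of elements
mean_loss = sq_err.mean()  # size-independent, like MSELoss's default 'mean' reduction
```

Both losses drive the same gradient direction; only the scale differs, which is why a summed loss can look alarmingly large while training is actually fine.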

xyx361100238 commented 3 years ago

Yes, you are right: the loss value is small if I use the MSELoss function. Thanks again!

YangangCao commented 3 years ago

> Yes, you are right: the loss value is small if I use the MSELoss function. Thanks again!

Hello, I encounter increasing loss when training. Could you share some information about your dataset? Specifically: are the files original (no up-sampling) 48 kHz WAVs? What is the total size of the speech and noise data, and the count used when extracting features? Thanks very much!

xyx361100238 commented 3 years ago

> Yes, you are right: the loss value is small if I use the MSELoss function. Thanks again!

> Hello, I encounter increasing loss when training. Could you share some information about your dataset? Specifically: are the files original (no up-sampling) 48 kHz WAVs? What is the total size of the speech and noise data, and the count used when extracting features? Thanks very much!

I used the steps and data from #4. If you use more data, the total will increase because of the line 'running_loss += loss.item()' within the current epoch; you should compare whether the value for each epoch decreases.
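A small sketch of that comparison (a hypothetical helper, not code from this repo): because running_loss accumulates loss.item() over every batch, its absolute value scales with dataset size, so the per-epoch average is the number to watch.

```python
def epoch_average(batch_losses):
    # batch_losses stands in for the per-batch loss.item() values of one epoch
    running_loss = 0.0
    for loss in batch_losses:
        running_loss += loss  # mirrors `running_loss += loss.item()` in the training loop
    return running_loss / len(batch_losses)  # size-independent number to compare across epochs
```

Comparing epoch_average between epochs (rather than the raw running_loss) makes runs with different amounts of data comparable.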

jzi040941 commented 3 years ago

> Yes, sure, and thanks for your efforts. I have read through the overall structure of the project; if the 'compute_conv1d' API is correct, I think you will soon finish the 'DNNModel c++ implementation' item on the list.

I've checked the compute_conv1d function and committed complete test code in 7b8211a. Thanks.