andabi / music-source-separation

Deep neural networks for separating singing voice from music written in TensorFlow
795 stars 150 forks source link

Could it process the data in real time? #43

Open ucasiggcas opened 4 years ago

ucasiggcas commented 4 years ago

Hi, could this process WAV or PCM data in real time? That is, if I sing a song over the music, could I get the background music or the singing voice out frame by frame?

Has anyone used this project in a real application? Please help me! Thanks.

ucasiggcas commented 4 years ago

Could I separate the mixture into voice and background music in real time? Is there any API in C/C++ or another language?
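For what it's worth, a frame-wise (pseudo real-time) pipeline usually means buffering audio into short windows, masking the STFT of each window, and overlap-adding the result. The sketch below illustrates only that streaming loop; `estimate_vocal_mask` is a hypothetical stand-in for the network (in this repo it would be a TensorFlow forward pass over the chunk's magnitude spectrogram), and the frame/hop sizes are assumptions, not values from the project.

```python
# Hedged sketch of frame-wise separation via spectral masking.
# `estimate_vocal_mask` is a placeholder, NOT the repo's model.
import numpy as np

FRAME = 1024   # assumed STFT window length (samples)
HOP = 512      # assumed hop size; 50% overlap

def estimate_vocal_mask(mag):
    """Placeholder for the network: returns a soft mask in [0, 1]."""
    return np.clip(mag / (mag.max() + 1e-8), 0.0, 1.0)

def separate_frame(chunk, window):
    """Mask one FRAME-long chunk in the frequency domain."""
    spec = np.fft.rfft(chunk * window)
    mask = estimate_vocal_mask(np.abs(spec))
    return np.fft.irfft(mask * spec, n=FRAME) * window

def stream_separate(signal):
    """Overlap-add loop: the part that could run on live audio,
    at the cost of roughly FRAME samples of latency per hop."""
    window = np.hanning(FRAME)
    out = np.zeros(len(signal) + FRAME)
    norm = np.zeros_like(out)
    for start in range(0, len(signal) - FRAME + 1, HOP):
        chunk = signal[start:start + FRAME]
        out[start:start + FRAME] += separate_frame(chunk, window)
        norm[start:start + FRAME] += window ** 2
    return out[:len(signal)] / np.maximum(norm[:len(signal)], 1e-8)
```

The latency floor of such a scheme is one analysis window plus the model's inference time, so "every frame" is achievable only if the model itself is fast enough per hop.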

ucasiggcas commented 4 years ago

Now I have some questions about the code:

1. During training, must SECONDS be 8.192? What happens if it is not? Could it be another number? I think that may be why the results are not good: `_pad_wav` has to add many zeros, which should not be necessary.
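For context, a fixed SECONDS usually exists because the network's spectrogram input has a fixed time dimension, and 8.192 s at 16 kHz is exactly 131072 = 2^17 samples, a power of two. The snippet below is a minimal illustration of that padding logic under those assumed constants; it is not the repo's actual `_pad_wav`.

```python
# Illustration (assumed constants): why clips get zero-padded to a
# multiple of the fixed segment length the model expects.
import numpy as np

SAMPLE_RATE = 16000
SECONDS = 8.192
SEGMENT = int(SECONDS * SAMPLE_RATE)   # 131072 = 2**17 samples

def pad_wav(wav):
    """Zero-pad a 1-D signal so its length is a multiple of SEGMENT.
    Returns the padded signal and the number of padded samples."""
    remainder = len(wav) % SEGMENT
    if remainder == 0:
        return wav, 0
    pad = SEGMENT - remainder
    return np.concatenate([wav, np.zeros(pad, dtype=wav.dtype)]), pad

wav = np.random.randn(200000).astype(np.float32)
padded, pad = pad_wav(wav)
print(padded.shape, pad)   # (262144,) 62144
```

The padded zeros are harmless at inference because the separated output can simply be trimmed back to the original length (`separated[:len(wav)]`); whether they hurt training depends on how the loss treats silent regions.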

ucasiggcas commented 4 years ago

2. You can see from the training log below that it is not good, in fact very bad:

```
step-146352, learning_rate=0.000100, d_loss=104.77, loss=4.275829315185547
step-146353, learning_rate=0.000100, d_loss=-27.20, loss=3.1128158569335938
step-146354, learning_rate=0.000100, d_loss=-42.49, loss=1.790191650390625
step-146355, learning_rate=0.000100, d_loss=1134.97, loss=22.10839080810547
step-146356, learning_rate=0.000100, d_loss=-93.04, loss=1.5390806198120117
step-146357, learning_rate=0.000100, d_loss=1166.83, loss=19.497581481933594
step-146358, learning_rate=0.000100, d_loss=40.43, loss=27.38010597229004
step-146359, learning_rate=0.000100, d_loss=-86.62, loss=3.664541244506836
step-146360, learning_rate=0.000100, d_loss=610.21, loss=26.025968551635742
step-146361, learning_rate=0.000100, d_loss=-84.29, loss=4.087613582611084
step-146362, learning_rate=0.000100, d_loss=-10.23, loss=3.669389486312866
step-146363, learning_rate=0.000100, d_loss=455.32, loss=20.376741409301758
step-146364, learning_rate=0.000100, d_loss=0.91, loss=20.56216049194336
step-146365, learning_rate=0.000100, d_loss=-25.94, loss=15.22756576538086
step-146366, learning_rate=0.000100, d_loss=-73.96, loss=3.9647676944732666
step-146367, learning_rate=0.000100, d_loss=195.38, loss=11.711013793945312
step-146368, learning_rate=0.000100, d_loss=136.57, loss=27.705293655395508
step-146369, learning_rate=0.000100, d_loss=-26.30, loss=20.419530868530273
step-146370, learning_rate=0.000100, d_loss=-82.57, loss=3.5595924854278564
step-146371, learning_rate=0.000100, d_loss=-49.42, loss=1.800356149673462
step-146372, learning_rate=0.000100, d_loss=1037.49, loss=20.478897094726562
step-146373, learning_rate=0.000100, d_loss=-92.51, loss=1.5329692363739014
step-146374, learning_rate=0.000100, d_loss=1233.98, loss=20.449443817138672
step-146375, learning_rate=0.000100, d_loss=-17.30, loss=16.911705017089844
step-146376, learning_rate=0.000100, d_loss=18.10, loss=19.971939086914062
step-146377, learning_rate=0.000100, d_loss=-89.22, loss=2.153000831604004
step-146378, learning_rate=0.000100, d_loss=1109.16, loss=26.033203125
step-146379, learning_rate=0.000100, d_loss=-59.33, loss=10.588550567626953
step-146380, learning_rate=0.000100, d_loss=-76.75, loss=2.4616308212280273
step-146381, learning_rate=0.000100, d_loss=-8.74, loss=2.2464866638183594
step-146382, learning_rate=0.000100, d_loss=1070.55, loss=26.296340942382812
step-146383, learning_rate=0.000100, d_loss=-86.85, loss=3.4584407806396484
step-146384, learning_rate=0.000100, d_loss=8.32, loss=3.746330499649048
step-146385, learning_rate=0.000100, d_loss=-46.09, loss=2.019552707672119
step-146386, learning_rate=0.000100, d_loss=903.55, loss=20.267263412475586
step-146387, learning_rate=0.000100, d_loss=-29.52, loss=14.285148620605469
step-146388, learning_rate=0.000100, d_loss=-25.25, loss=10.678624153137207
step-146389, learning_rate=0.000100, d_loss=145.31, loss=26.195268630981445
step-146390, learning_rate=0.000100, d_loss=-93.44, loss=1.7175146341323853
```

ucasiggcas commented 4 years ago

Now updating the training log; you can see the results are still very bad:

```
step-1377386, learning_rate=0.000010, d_loss=-92.95, loss=1.9825869798660278
step-1377387, learning_rate=0.000010, d_loss=835.53, loss=18.5477237701416
step-1377388, learning_rate=0.000010, d_loss=17.18, loss=21.73417854309082
step-1377389, learning_rate=0.000010, d_loss=-85.09, loss=3.239492654800415
step-1377390, learning_rate=0.000010, d_loss=608.30, loss=22.94527816772461
step-1377391, learning_rate=0.000010, d_loss=-89.54, loss=2.3993682861328125
step-1377392, learning_rate=0.000010, d_loss=1010.90, loss=26.654502868652344
step-1377393, learning_rate=0.000010, d_loss=9.87, loss=29.285980224609375
step-1377394, learning_rate=0.000010, d_loss=-23.31, loss=22.459503173828125
step-1377395, learning_rate=0.000010, d_loss=27.76, loss=28.695003509521484
step-1377396, learning_rate=0.000010, d_loss=-92.39, loss=2.184333562850952
step-1377397, learning_rate=0.000010, d_loss=130.96, loss=5.044849395751953
step-1377398, learning_rate=0.000010, d_loss=297.36, loss=20.04638671875
step-1377399, learning_rate=0.000010, d_loss=56.41, loss=31.353923797607422
step-1377400, learning_rate=0.000010, d_loss=-93.11, loss=2.1595098972320557
step-1377401, learning_rate=0.000010, d_loss=30.22, loss=2.812201976776123
step-1377402, learning_rate=0.000010, d_loss=-50.51, loss=1.391775369644165
step-1377403, learning_rate=0.000010, d_loss=69.16, loss=2.354325771331787
step-1377404, learning_rate=0.000010, d_loss=-1.38, loss=2.32185697555542
step-1377405, learning_rate=0.000010, d_loss=1196.17, loss=30.095117568969727
step-1377406, learning_rate=0.000010, d_loss=-20.20, loss=24.01531982421875
step-1377407, learning_rate=0.000010, d_loss=-91.55, loss=2.030193328857422
step-1377408, learning_rate=0.000010, d_loss=560.88, loss=13.417107582092285
step-1377409, learning_rate=0.000010, d_loss=-70.88, loss=3.9072742462158203
step-1377410, learning_rate=0.000010, d_loss=639.20, loss=28.8824520111084
```
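One caveat when reading these numbers: large step-to-step swings in a discriminator loss are common in adversarial training, so a single step's value says little about convergence. A hedged way to judge the actual trend is to smooth the logged values with an exponential moving average; the loss values below are the first ten generator losses from the log above, rounded to two decimals.

```python
# Smooth a noisy per-step loss trace with an exponential moving average
# so the trend is readable despite adversarial-training oscillation.
def ema(values, alpha=0.1):
    """Exponential moving average; smaller alpha = heavier smoothing."""
    smoothed, current = [], values[0]
    for v in values:
        current = alpha * v + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

losses = [1.98, 18.55, 21.73, 3.24, 22.95, 2.40, 26.65, 29.29, 22.46, 28.70]
trend = ema(losses)
print(round(trend[-1], 2))  # 13.5
```

If the smoothed curve is flat or rising over thousands of steps (not just noisy), that is stronger evidence of a real training problem than any individual step.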

Who can help me? Any advice or suggestions would be greatly appreciated.