fgnt / padertorch

A collection of common functionality to simplify the design, training and evaluation of machine learning models based on pytorch with an emphasis on speech processing.
MIT License

Add soft max to SDR and log-MSE losses #120

Closed thequilo closed 3 years ago
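For context, one common formulation that a "soft max" (soft cap) on an SDR loss may refer to is soft thresholding: instead of clipping the SDR at a hard upper bound, the bound enters smoothly through the error-power ratio. The sketch below is an assumption about the general technique, not necessarily this PR's implementation; the function and parameter names (`soft_capped_sdr`, `sdr_max`) are hypothetical.

```python
import numpy as np

def soft_capped_sdr(estimate, target, sdr_max=30.0, axis=-1):
    """SDR with a soft upper bound (hypothetical sketch).

    Instead of hard-clipping at ``sdr_max``, the cap enters smoothly:
        sdr_soft = -10 * log10(10**(-sdr/10) + 10**(-sdr_max/10))
    which approaches ``sdr_max`` as the error vanishes.
    """
    error_power = np.sum(np.abs(estimate - target) ** 2, axis=axis)
    target_power = np.sum(np.abs(target) ** 2, axis=axis)
    # 10**(-sdr_max/10) acts as a soft threshold on the error-to-target ratio.
    tau = 10.0 ** (-sdr_max / 10.0)
    return -10.0 * np.log10(error_power / target_power + tau)
```

For a perfect estimate the error power is zero and the value saturates at `sdr_max` instead of diverging, which also keeps the gradient bounded.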

codecov-commenter commented 3 years ago

Codecov Report

Merging #120 (919988c) into master (482d1f7) will increase coverage by 0.06%. The diff coverage is 90.62%.


@@            Coverage Diff             @@
##           master     #120      +/-   ##
==========================================
+ Coverage   77.08%   77.15%   +0.06%     
==========================================
  Files          46       46              
  Lines        3570     3594      +24     
==========================================
+ Hits         2752     2773      +21     
- Misses        818      821       +3     
Impacted Files                        | Coverage Δ
padertorch/ops/losses/regression.py   | 88.52% <90.62%> (-0.67%) ↓

Continue to review full report at Codecov.


boeddeker commented 3 years ago

I noticed that the file name is padertorch/ops/losses/regression.py (i.e., it has no connection to our current use case) and that the old implementations appear to be valid for both real and complex numbers. Since torch supports more and more complex operations, we should keep the behavior correct for complex numbers.

Relevance: to use such a loss in the STFT domain, you only need to flatten the last two dimensions (frames and frequency bins) of estimate and target.
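The flattening step above can be sketched as follows; the shapes are illustrative, and in torch the equivalent would be `tensor.flatten(-2)` (flattening from the second-to-last dimension onward).

```python
import numpy as np

# STFT-domain tensors with shape (..., frames, frequency_bins);
# a batch of 2 signals with 100 frames and 257 bins is assumed here.
estimate = np.zeros((2, 100, 257), dtype=np.complex64)
target = np.zeros((2, 100, 257), dtype=np.complex64)

# Merge frames and frequency bins into one axis, so a loss written for a
# single trailing dimension (e.g. time-domain SDR or MSE) applies unchanged.
flat_estimate = estimate.reshape(*estimate.shape[:-2], -1)
flat_target = target.reshape(*target.shape[:-2], -1)
```

After this, `flat_estimate` and `flat_target` have shape `(2, 25700)` and can be passed to the same loss used for time-domain signals.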