Chainer Chemistry: A Library for Deep Learning in Biology and Chemistry
MIT License

take auto sum when output_var.size is not 1. #285

Closed: corochann closed this 5 years ago

codecov-io commented 5 years ago

Codecov Report

Merging #285 into master will decrease coverage by 0.01%. The diff coverage is 60%.

@@            Coverage Diff             @@
##           master     #285      +/-   ##
==========================================
- Coverage   83.24%   83.23%   -0.02%     
==========================================
  Files         147      147              
  Lines        7092     7097       +5     
==========================================
+ Hits         5904     5907       +3     
- Misses       1188     1190       +2
codecov-io commented 5 years ago

Codecov Report

Merging #285 into master will increase coverage by 0.44%. The diff coverage is 100%.

@@            Coverage Diff             @@
##           master     #285      +/-   ##
==========================================
+ Coverage   83.24%   83.69%   +0.44%     
==========================================
  Files         147      165      +18     
  Lines        7092     7643     +551     
==========================================
+ Hits         5904     6397     +493     
- Misses       1188     1246      +58
corochann commented 5 years ago

This PR handles the case where one wants to call backward on an output_var other than the loss.

For example, if we want to calculate the gradient of the output rather than the loss, eval_fun must return a scalar, so we need to write it as follows.

def eval_fun(x, adj, t):
    pred = predictor(x, adj)
    pred_summed = F.sum(pred)
    return pred_summed

But we usually forget to write F.sum and return pred directly; since pred is multi-dimensional along the batch axis, backward does not work.

corochann commented 5 years ago

TODO: test

corochann commented 5 years ago

test added, please check @mottodora