BayesWatch / pytorch-experiments-template

A PyTorch-based classification experiments template
GNU General Public License v3.0

Generalise configs to work with yaml files too #80

Closed jack-willturner closed 3 years ago

jack-willturner commented 3 years ago

I personally prefer working with yaml files since I find them a little easier to read/write.
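The approach the PR describes could look something like the sketch below: an ordinary argparse parser whose values can be overridden by a YAML file. The actual `process_args` in `utils/arg_parsing.py` is not shown here, so the function and argument names are hypothetical; the sketch assumes PyYAML is installed.

```python
import argparse

import yaml  # PyYAML; assumed available


def build_parser():
    """Hypothetical stand-in for the template's base argument parser."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--lr", type=float, default=0.1)
    parser.add_argument("--epochs", type=int, default=90)
    parser.add_argument("--config", type=str, default=None)
    return parser


def parse_with_yaml(argv, yaml_text=None):
    """Parse CLI args, then let a YAML config override the defaults."""
    args = build_parser().parse_args(argv)
    if yaml_text is not None:
        overrides = yaml.safe_load(yaml_text)
        for key, value in overrides.items():
            setattr(args, key, value)
    return args
```

With a config such as `lr: 0.01`, calling `parse_with_yaml([], yaml_text="lr: 0.01")` would leave `epochs` at its default while replacing `lr`. A real implementation would also need to decide whether CLI flags given explicitly should win over the YAML values.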

There are a few other small changes:

sourcery-ai[bot] commented 3 years ago

Sourcery Code Quality Report

❌  Merging this PR will decrease code quality in the affected files by 0.40%.

| Quality metrics | Before | After | Change |
| --- | --- | --- | --- |
| Complexity | 10.57 🙂 | 10.61 🙂 | 0.04 👎 |
| Method Length | 102.80 🙂 | 105.87 🙂 | 3.07 👎 |
| Working memory | 13.57 😞 | 13.59 😞 | 0.02 👎 |
| Quality | 50.83% 🙂 | 50.43% 🙂 | -0.40% 👎 |

| Other metrics | Before | After | Change |
| --- | --- | --- | --- |
| Lines | 524 | 542 | 18 |

| Changed files | Quality Before | Quality After | Quality Change |
| --- | --- | --- | --- |
| train.py | 41.23% 😞 | 41.20% 😞 | -0.03% 👎 |
| utils/arg_parsing.py | 70.50% 🙂 | 67.90% 🙂 | -2.60% 👎 |

Here are some functions in these files that still need a tune-up:

| File | Function | Complexity | Length | Working Memory | Quality | Recommendation |
| --- | --- | --- | --- | --- | --- | --- |
| utils/arg_parsing.py | process_args | 13 🙂 | 156 😞 | 11 😞 | 48.02% 😞 | Try splitting into smaller methods. Extract out complex expressions |
| train.py | housekeeping | 7 ⭐ | 168 😞 | 9 🙂 | 56.28% 🙂 | Try splitting into smaller methods |
| train.py | get_base_argument_parser | 0 ⭐ | 234 ⛔ | 9 🙂 | 58.21% 🙂 | Try splitting into smaller methods |
| train.py | train | 1 ⭐ | 92 🙂 | 12 😞 | 65.49% 🙂 | Extract out complex expressions |
| train.py | eval | 1 ⭐ | 72 🙂 | 12 😞 | 68.42% 🙂 | Extract out complex expressions |

Legend and Explanation

The emojis denote the absolute quality of the code:

The 👍 and 👎 indicate whether the quality has improved or gotten worse with this pull request.


Please see our documentation here for details on how these metrics are calculated.

We are actively working on this report - lots more documentation and extra metrics to come!

Let us know what you think of it by mentioning @sourcery-ai in a comment.
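The report's recurring advice ("Try splitting into smaller methods", "Extract out complex expressions") is a general refactoring technique, and a minimal illustration might look like the following. The functions here are invented for the example, not code from `train.py`:

```python
def top1_correct(scores, target):
    """Extracted helper: the argmax of the scores matches the target label.

    Pulling this comparison out of the loop body gives the complex
    expression a name, which lowers the 'working memory' needed to
    read the calling function.
    """
    return max(range(len(scores)), key=scores.__getitem__) == target


def eval_batch(batch_scores, batch_targets):
    """After extraction, the loop body reads as a single idea."""
    correct = sum(top1_correct(s, t) for s, t in zip(batch_scores, batch_targets))
    return correct / len(batch_targets)
```

For example, `eval_batch([[0.1, 0.9], [0.8, 0.2]], [1, 0])` returns `1.0`, since both argmax predictions match their targets. The same move applied to a long function like `get_base_argument_parser` would mean grouping related `add_argument` calls into small, named helper functions.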
