uw-ipd / tmol


Remove old score graph system and update tests #238

Closed. milkshakeiii closed this pull request 3 years ago.

milkshakeiii commented 3 years ago

This pull request removes the old score graph system; everything, including the tests, now uses the new system. I may have missed some pieces of old code that still need to be removed. The Codecov and linting checks are slow to run, so some more polishing of the tests and removal of unused imports may still be needed. Hopefully Codecov will help me find any remnants of the score graph I missed.

Andrew and Frank, please let me know how this looks to you; I may have made some decisions that need to be adjusted!

Two questions I still have:

  1. What are the correct names for the terms to use as keys? I have isolated the keywords to https://github.com/uw-ipd/tmol/blob/solberg/remove_old_score_graph/tmol/system/score_support.py and the ScoreMethod files themselves, so once I know the correct words to use I will update those.
  2. Should the new constraint ScoreMethod be included in the "full score" tests? (My guess is no?) Also, in test_constraint.py there is some disagreement in the score results. Is this expected, or is it something I did?

Thanks!

milkshakeiii commented 3 years ago

@fdimaio I just realized that some tests were being skipped on my system due to the @requires_cuda flag, so it looks like I'll have to go through and make sure all of those are fixed.
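
(For context, a @requires_cuda decorator of this kind is typically just a pytest skip marker keyed on CUDA availability. A minimal sketch of that pattern follows; this is an assumption about how it works, not tmol's actual definition, and the test name is hypothetical.)

```python
# Minimal sketch of a "requires_cuda" marker: skip the test when no CUDA
# device is available, so it silently drops out on CPU-only machines.
import pytest
import torch

requires_cuda = pytest.mark.skipif(
    not torch.cuda.is_available(), reason="CUDA device required"
)


@requires_cuda
def test_scores_on_gpu():
    device = torch.device("cuda")
    assert torch.ones(3, device=device).sum().item() == 3.0
```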

milkshakeiii commented 3 years ago

Note: I'm getting the flake8 linting error:

./tmol/tests/kinematics/test_dof_modules.py:9:1: F401 'tmol.system.kinematic_module_support.kinematic_operation_build_for' imported but unused

But that import is actually used, because it's a needed overload for KinematicOperation.build_for.

fdimaio commented 3 years ago

> Note: I'm getting the flake8 linting error:
>
> ./tmol/tests/kinematics/test_dof_modules.py:9:1: F401 'tmol.system.kinematic_module_support.kinematic_operation_build_for' imported but unused
>
> But that import is actually used, because it's a needed overload for KinematicOperation.build_for.

Try adding: # noqa: F401 to the end of that line
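
A minimal sketch of that fix, assuming the import in test_dof_modules.py is a single-line from-import (the exact form in the file may differ):

```python
# The import is used only for its side effect of providing the
# KinematicOperation.build_for overload, so flake8 flags it as unused (F401);
# the trailing noqa comment suppresses that one check on this line only.
from tmol.system.kinematic_module_support import kinematic_operation_build_for  # noqa: F401
```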

codecov[bot] commented 3 years ago

Codecov Report

Merging #238 (f850b00) into master (3ddd250) will increase coverage by 0.71%. The diff coverage is 99.19%.


@@            Coverage Diff             @@
##           master     #238      +/-   ##
==========================================
+ Coverage   92.40%   93.11%   +0.71%     
==========================================
  Files         274      227      -47     
  Lines       13306    11490    -1816     
==========================================
- Hits        12295    10699    -1596     
+ Misses       1011      791     -220     
Flag                          Coverage Δ
_shrug_Testing_CPU            88.69% <98.39%> (-1.52%) ↓
_shrug_Testing_CPU_w_o_jit    89.27% <98.39%> (?)
_shrug_Testing_CUDA           91.05% <99.19%> (-1.34%) ↓

Flags with carried forward coverage won't be shown.

Impacted Files                                    Coverage Δ
tmol/score/chemical_database.py                   100.00% <ø> (ø)
tmol/score/modules/constraint.py                  97.43% <ø> (ø)
tmol/tests/score/plot_score_component_pass.py     0.00% <0.00%> (ø)
tmol/system/score_support.py                      92.59% <92.00%> (-6.77%) ↓
tmol/score/modules/bonded_atom.py                 97.10% <96.42%> (-0.52%) ↓
tmol/optimization/modules.py                      100.00% <100.00%> (ø)
tmol/score/bonded_atom.py                         100.00% <100.00%> (ø)
tmol/score/modules/bases.py                       94.62% <100.00%> (+1.20%) ↑
tmol/score/modules/cartbonded.py                  100.00% <100.00%> (ø)
tmol/score/modules/coords.py                      96.42% <100.00%> (+13.09%) ↑
... and 45 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data.
Last update 3ddd250...f850b00.

milkshakeiii commented 3 years ago

It looks like this can be merged. I did notice that on fela, sometimes torch_device is cpu. Is this expected? It seems the tests are inconsistent about whether they determine the device themselves (hardcoded), use a parameter, or use some sort of default.

fdimaio commented 3 years ago

> It looks like this can be merged. I did notice that on fela, sometimes torch_device is cpu. Is this expected? It seems the tests are inconsistent about whether they determine the device themselves (hardcoded), use a parameter, or use some sort of default.

Feel free to merge. Most tests are run on both cpu and cuda, so torch_device will be cpu for the former. There are a handful that are only run on one device (see the decorators on the test functions).
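
For reference, device handling like this is commonly done with a parametrized pytest fixture. The sketch below shows the general pattern only; the fixture and test names are hypothetical and may not match tmol's actual code.

```python
# Hypothetical sketch of a torch_device fixture parametrized over cpu and cuda.
# Tests that accept torch_device run once per device; the cuda case is skipped
# on machines without a CUDA device, and single-device tests can opt out with
# decorators as described above.
import pytest
import torch


@pytest.fixture(params=["cpu", "cuda"])
def torch_device(request):
    if request.param == "cuda" and not torch.cuda.is_available():
        pytest.skip("CUDA device not available")
    return torch.device(request.param)


def test_coords_sum(torch_device):
    coords = torch.zeros((10, 3), device=torch_device)
    assert coords.sum().item() == 0.0
```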