Closed mastoffel closed 4 months ago
| File | Statements (new stmts) | Lines missing coverage |
|---|---|---|
| **autoemulate** | | |
| compare.py | 243 | |
| cross_validate.py | 57 | |
| hyperparam_searching.py | | |
| model_processing.py | | |
| printing.py | | |
| save.py | 56 | |
| utils.py | | 91, 97 |
| **autoemulate/emulators** | | |
| gaussian_process.py | | |
| gaussian_process_mogp.py | | 71-83, 87, 90 |
| gradient_boosting.py | | |
| neural_net_sk.py | | |
| neural_net_torch.py | | |
| polynomials.py | | |
| random_forest.py | | |
| rbf.py | | |
| support_vector_machines.py | | |
| xgboost.py | | |
| **autoemulate/emulators/neural_networks** | | |
| rbf.py | | |
| **tests** | | |
| test_cross_validate.py | | |
| test_emulators.py | | |
| test_model_processing.py | | |
| test_printing.py | | |
| test_save.py | | |
| test_torch.py | | |
| test_utils.py | | |
| **Project Total** | | |
The report is truncated to 25 files out of 29. To see the full report, please visit the workflow summary page.
This report was generated by python-coverage-comment-action
Attention: Patch coverage is 94.77612%, with 7 lines in your changes missing coverage. Please review.

Project coverage is 90.31%. Comparing base (69932f1) to head (8fedada).
Files | Patch % | Lines |
---|---|---|
autoemulate/compare.py | 85.71% | 2 Missing :warning: |
autoemulate/plotting.py | 33.33% | 2 Missing :warning: |
autoemulate/hyperparam_searching.py | 0.00% | 1 Missing :warning: |
autoemulate/save.py | 87.50% | 1 Missing :warning: |
autoemulate/utils.py | 87.50% | 1 Missing :warning: |
:umbrella: View full report in Codecov by Sentry.
LGTM.
Have one suggestion for `NeuralNetTorch` model names. Also, do you think it would be useful to have a getter function / attribute for all emulators that returns the model name? e.g.

```
> model = NeuralNetTorch(module='mlp')
> model.model_name
NNMLP
> model = RandomForest()
> model.model_name
RandomForest
```

This would be easy to implement by accessing `self.__class__.__name__`, with a simple implementation like this:

```python
@property
def model_name(self):
    return self.__class__.__name__
```

...at least I think...?
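(Editor's note: a self-contained sketch of that suggestion; the `Emulator` base class here is hypothetical, for illustration only, not actual autoemulate code.)

```python
# Sketch: a model_name property resolved from the concrete class name.
# "Emulator" is a hypothetical base class used only for this example.
class Emulator:
    @property
    def model_name(self):
        # self.__class__ is the concrete subclass, so every emulator
        # automatically reports its own name without extra code.
        return self.__class__.__name__


class RandomForest(Emulator):
    pass


print(RandomForest().model_name)  # RandomForest
```

Because the property lives on the base class, each subclass gets a correct name for free.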
`NeuralNetTorch(module='mlp')` and `NeuralNetTorch(module='rbf')` would have the same name in this case. We can have prefix + module, like `model_name = f"NN{self.module.upper()}"`, for all `NeuralNetTorch` architectures, which will lead to `NNMLP`, `NNRBF`, etc.
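(Editor's note: a rough sketch of the prefix + module scheme; this is a simplified stand-in for the real `NeuralNetTorch` class.)

```python
# Sketch: prefix "NN" + upper-cased module name, so each torch
# architecture gets a distinct model name (NNMLP, NNRBF, ...).
class NeuralNetTorch:
    def __init__(self, module="mlp"):
        self.module = module

    @property
    def model_name(self):
        # Class name alone cannot distinguish architectures,
        # so the module name is folded into the model name.
        return f"NN{self.module.upper()}"


print(NeuralNetTorch(module="mlp").model_name)  # NNMLP
print(NeuralNetTorch(module="rbf").model_name)  # NNRBF
```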
Sounds like a good solution to me!
Ok, implemented both of @bryanlimy's suggestions: added a `model_name` property method, and the `get_model_name` function from `utils` now retrieves the model name even if the model is wrapped in a `Pipeline` or `Multioutput`.

```python
@property
def model_name(self):
    return f"NN{self.module_name.capitalize()}"
```
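(Editor's note: a sketch of how a `get_model_name` helper could unwrap wrapped estimators. The attribute names — a `named_steps` dict with a `"model"` step, and an `estimator` attribute — follow scikit-learn conventions and are assumptions here; the actual autoemulate implementation may differ.)

```python
# Sketch: retrieve a model's name even when it is wrapped.
def get_model_name(model):
    # Unwrap a Pipeline-like wrapper (assumes a "model" step exists).
    if hasattr(model, "named_steps"):
        model = model.named_steps["model"]
    # Unwrap a MultiOutput-style wrapper exposing .estimator.
    if hasattr(model, "estimator"):
        model = model.estimator
    # Prefer an explicit model_name property, else fall back
    # to the class name.
    return getattr(model, "model_name", model.__class__.__name__)


class RandomForest:
    pass


class MultiOutputWrapper:
    # Stand-in for sklearn's MultiOutputRegressor, for illustration.
    def __init__(self, estimator):
        self.estimator = estimator


print(get_model_name(MultiOutputWrapper(RandomForest())))  # RandomForest
```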
Maybe it would be good if you could take a quick look at this too, @bryanlimy! Lots of changes here, but I think it's mostly fine.
LGTM!
Changed `self.models` to a dict with `model_name : model` pairs; `get_model_name()` now gets the model name from that dict.

resolves #175 and #190