Open emilliman5 opened 11 months ago
Merging #304 (1738599) into master (31854ae) will increase coverage by 0.71%. The diff coverage is 100.00%.

:exclamation: Current head 1738599 differs from pull request most recent head f27fac9. Consider uploading reports for the commit f27fac9 to get more accurate results.
@@ Coverage Diff @@
## master #304 +/- ##
==========================================
+ Coverage 61.85% 62.57% +0.71%
==========================================
Files 66 66
Lines 1025 1034 +9
==========================================
+ Hits 634 647 +13
+ Misses 391 387 -4
Files Changed | Coverage Δ
---|---
R/pkg_ref_cache.R | 9.09% <ø> (ø)
R/pkg_ref_class.R | 88.80% <ø> (ø)
R/pkg_score.R | 59.09% <100.00%> (+5.24%) :arrow_up:
R/summarize_scores.R | 92.59% <100.00%> (+18.67%) :arrow_up:
I can confirm the current solution is working:
library(dplyr)
devtools::load_all(".")
packageVersion("riskmetric")
# > [1] ‘0.2.2’
assessed <- "dplyr" %>%
pkg_ref(source = "pkg_cran_remote", repos = c("https://cran.rstudio.com")) %>%
as_tibble() %>%
pkg_assess()
initial_scoring <- assessed %>% pkg_score()
initial_scoring$pkg_score %>% round(2)
# > [1] 0.11
# > attr(,"label")
# > [1] "Summarized risk score from 0 (low) to 1 (high)."
First draft for fixing metric weights errors.

The problem: if a metric's score was `NA`, it was essentially imputed to 0 unless you explicitly set its weight to 0.

The proposed solution: set a metric's weight to 0 if its score is `NA`.

Some caveats:
1) The solution requires that we compute weights and scores per package, because a collection of packages could have different patterns of `NA`s, e.g. when scoring packages from different sources.
2) What if a user explicitly sets a weight but the metric is missing? Currently, `standardize_weights` silently resets the weight to 0. This should at the least issue a warning.
3) If a user sets weights for only a subset of metrics, should `pkg_score`: i) error, ii) warn, or iii) only compute a summarized score for the metrics with a weight? Option i) is very conservative and will break pipelines if/when metrics are added; ii) is the middle ground; iii) is the most backward compatible, but a user could unknowingly omit metrics upon upgrading riskmetric.

This is still a work in progress, but I think it is baked enough to discuss.
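To make the problem and the proposed fix concrete, here is a minimal sketch in plain R (the helper name and the hard-coded metric scores are illustrative, not riskmetric's internal API): zero the weight of any `NA`-scored metric and renormalize the remaining weights, rather than letting the `NA` act as an implicit score of 0.

```r
# Hypothetical helper illustrating the proposed fix; not riskmetric's API.
# A metric with an NA score gets weight 0, and the remaining weights are
# renormalized so they still sum to 1.
reweight_for_na <- function(scores, weights) {
  weights[is.na(scores)] <- 0
  if (sum(weights) == 0) return(weights)  # every score is missing
  weights / sum(weights)
}

# Illustrative metric scores for a single package
scores  <- c(covr_coverage = 0.8, news_current = NA, has_vignettes = 1)
weights <- c(covr_coverage = 1,   news_current = 1,  has_vignettes = 1)

# Current behavior: the NA is effectively imputed to 0, dragging the
# summarized score down
sum((weights / sum(weights)) * ifelse(is.na(scores), 0, scores))  # 0.6

# Proposed behavior: the NA metric simply drops out of the summary
w <- reweight_for_na(scores, weights)
sum(w * scores, na.rm = TRUE)                                     # 0.9
```

This also shows why the reweighting has to happen per package (caveat 1): two packages with different `NA` patterns end up with different effective weight vectors, so a single shared weight vector cannot serve a whole collection.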