Open: kellijohnson-NOAA opened this issue 1 year ago
Removing unnecessary digits from all of these tables would be beneficial for lots of reasons, including this one. For tables of parameters that are often in very different units, would it be possible to start using the base R function `signif()`, which behaves as shown below? I think that reporting more than three significant digits for any estimated parameter is misleadingly precise. In Table 15 of the lingcod reports there are also some values that are not precise enough, like `Wtlen_1_Fem_GP_1`, which is reported as 0.000. The executive summary reference point table achieves different numbers of significant digits by sorting out the digits before writing the CSV file on which it is based: https://github.com/pfmc-assessments/lingcod/blob/main/models/2021.s.018.001_fixTri3/tables/e_ReferencePoints_ES.csv.
```r
signif(0.00012345, 3)
#> [1] 0.000123
signif(1.2345, 3)
#> [1] 1.23
```
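For the table-writing step, here is a minimal sketch of what that could look like (the `pars` data frame, its labels and values, and the file name are invented for illustration): round every numeric column with `signif()` just before the CSV is written.

```r
# Hypothetical parameter table; labels and values are made up.
pars <- data.frame(
  Label = c("Wtlen_1_Fem_GP_1", "NatM_p_1_Fem_GP_1"),
  Value = c(0.00012345, 0.1234567)
)

# Round every numeric column to 3 significant digits, leaving
# character columns untouched.
pars[] <- lapply(pars, function(x) if (is.numeric(x)) signif(x, 3) else x)

# Write the cleaned table to the CSV that the report reads from.
write.csv(pars, "e_ReferencePoints_ES.csv", row.names = FALSE)
```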
Thank you to @chantelwetzel-noaa for mentioning that the way we format tables that include confidence intervals could lead to poor accessibility for others. Either those using a screen reader or those wanting to copy and paste the information from the PDF into another format may be hindered by something as simple as how we format the tables. From my testing, it does not matter how we format the confidence interval as long as we do not use merged cells for the column header.

The following picture is a screenshot of Table 15 in the 2021 lingcod assessment for the northern portion of their range. I used a screen reader to read the table to me, and it read entries in the "Bounds" column as "minus four point zero zero zero [pause] four point zero zero zero," which is similar to how I read it myself. A human, though, can shorten all of those zeros to just "four." So, it might be good not to report a lot of zeros when they are not needed.
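As a hedged illustration of that point (the bound values here are invented), the interval could be written as a single plain-text column with trimmed digits, so a screen reader would read "minus four to four" rather than spelling out every zero, and no merged header cell is needed:

```r
# Invented example bounds for one parameter.
lower <- -4
upper <- 4

# Format the interval as one plain-text string with 3 significant digits.
bounds <- sprintf("%s to %s", signif(lower, 3), signif(upper, 3))
bounds
#> [1] "-4 to 4"
```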
@chantelwetzel-noaa feel free to comment, provide me with more things to test, or close the issue.