Huh / collar

Utilities for exploring telemetry data

Added tests for morph_gps #49

Closed · foresthayes closed 5 years ago

foresthayes commented 5 years ago

@Huh I am having issues testing and was hoping you could provide some insight.

  1. Although I could work around this, the test is failing to recognize the latitude column from the Vectronics csv (due to special characters?). What has me irked is that it works outside of the test environment.

Evaluation error: object 'longitude[�]' not found.

  2. When testing assertions, the test fails due to an 'unexpected message', although the messages appear to be identical.
test-morph_gps.R:78: failure: Check morph_gps assertions
`morph_gps(...)` threw an condition with unexpected message.
Expected match: "Column id_col must be unquoted. If special characters or spaces exist use back ticks (`A B`)."
Actual message: "Column id_col must be unquoted. If special characters or spaces exist use back ticks (`A B`)."

Thanks!

Huh commented 5 years ago

I will look into both cases today. The second case was tripping me up too; not sure what that is about.

Huh commented 5 years ago

Thanks for doing this

foresthayes commented 5 years ago

Thanks - the first issue probably isn't worth spending much time on since it seems to be constrained to the tests and I can always use a different data source.

Huh commented 5 years ago

Turns out reading the documentation is helpful. When doing matching in expect_condition or any expect_* function in testthat, you are actually supplying a regular expression, so if you have a . or other special character it will likely fail. To fix this, add fixed = TRUE after the pattern; that should solve the problem if you are copying and pasting the anticipated message.
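
For example, a minimal sketch of the fix, using a stop() call as a stand-in for the real morph_gps() assertion:

library(testthat)

# The expected message contains regex metacharacters ("." and parentheses),
# so fixed = TRUE makes the expectation match it as a literal string.
msg <- "Column id_col must be unquoted. If special characters or spaces exist use back ticks (`A B`)."
expect_condition(stop(msg), msg, fixed = TRUE)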

The same logic will be true for your other issue. Not sure if the encoding error is just in these comments or if that is the actual issue, but either way you should be able to regex your way around that problem, as in the sketch below.
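
For the encoding case, a sketch of what I mean, using stop() to fake the error and letting "." in the pattern match whatever the mangled byte turns out to be:

library(testthat)

# "\u00b0" stands in for the garbled character; the unescaped "." in the
# pattern matches any single character at that position.
expect_error(
  stop("object 'longitude[\u00b0]' not found"),
  "object 'longitude\\[.\\]' not found"
)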

By the way, the vectronics file is not a csv; it is tab delimited, yikes! Not sure how that happened. Man, I hate Excel.

I am going to suggest we remove it from the package, sound good?

Huh commented 5 years ago

Unreal... I created a new fetch_delim for custom delimiters. The vectronics.csv file was tab delimited and then, maybe, saved as a csv, which resulted in values separated by both tabs and commas. To read it, use the example below:

vec_fl <- system.file("extdata", "vectronics.csv", package = "collar")
fetch_delim(vec_fl, delim = ",\t", skip = 0)

To fix the column names I think we should do something better than just tolower. I have two options for fixing the first issue. The first is base R:

> colnames(fetch_delim(vec_fl, delim = ",\t", skip = 0) %>% dplyr::rename_all(make.names))
 [1] "no"           "collarid"     "animalid"     "utc_date"     "utc_time"     "lmt_date"    
 [7] "lmt_time"     "origin"       "scts_date"    "scts_time"    "latitude..."  "longitude..."
[13] "height.m."    "dop"          "fixtype"      "X3d_error.m." "sats"         "mort.status" 
[19] "activity"     "main.v."      "beacon.v."    "temp..c."

And tibble, for the sake of consistency, provides a name_repair option that is meant to replace tidy_names and variants. It would allow us to give the user the option to pass a function in the spirit of rlang::as_function, which I think is the most flexible option.
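
A quick sketch of that option (the column name is made up to mimic the Vectronics headers):

library(tibble)

# .name_repair accepts a function; here base R's make.names, but any
# function in the spirit of rlang::as_function would work.
df <- stats::setNames(data.frame(1:3), "latitude [\u00b0]")
as_tibble(df, .name_repair = make.names)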

How about we default to make.names for base-R-like names, but allow users to pass a function? I might even add that when we get a column name like `latitude...` we strip the preceding and trailing dots, as sketched below. Do you like that option?
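
Something like this for the dot stripping (a sketch of the idea, not the final implementation):

# Strip the leading/trailing dots that make.names leaves behind.
strip_dots <- function(nms) gsub("^\\.+|\\.+$", "", nms)
strip_dots(c("latitude...", "temp..c."))
#> [1] "latitude" "temp..c"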

Huh commented 5 years ago

I just pushed to master. A new version of adj_col_nms now removes non-ASCII characters and pretties up column names. I used one test case, so more testing is needed, but the user can always pass a custom function to rename_fun in the read_csv function. I will update the other fetch functions to have the same behavior.
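
For reference, here is a rough sketch of the kind of cleanup I mean (not the actual adj_col_nms implementation, just the idea):

clean_nms <- function(nms) {
  nms <- iconv(nms, from = "", to = "ASCII", sub = "")  # drop non-ASCII bytes
  nms <- tolower(make.names(nms))                       # syntactically valid, lower case
  gsub("^\\.+|\\.+$", "", nms)                          # strip leading/trailing dots
}
clean_nms(c("latitude [\u00b0]", "temp. [\u00b0C]"))
#> [1] "latitude" "temp...c"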