Closed — bsaul closed this 5 months ago
- I did clean up some functions for style and readability while I was there.
Specifically, I cleaned up quite a bit in process_observers.R. We don't have any style checks in place for this repo yet, but in general let's aim for:
I should note that although I tried not to change the logic of any functions, we don't have any tests in place, so we're flying by the seat of our pants.
@IJBG - feel free to squash and merge when you're ready. Or make a note that you approve the PR.
> I should note that although I tried not to change the logic of any functions, we don't have any tests in place, so we're flying by the seat of our pants.
Reading through the tidyr style guide was helpful. Do you have something similar about how to build in tests? I'm always testing as I go, but only in the console with temporary test datasets or a scratch-testing file, not in the systematic way I should be, with tests I keep around for later.
For example, the auto-testing was really helpful in #50 today: I downloaded a new version of the Orange dataset from eBird, and the auto-testing caught that some of the new 2024 submissions don't have "Orange" in the eBird location name. (I would otherwise have caught this later when QCing the 2024 download; for now I just removed all 2024 submissions from the My_EBird csv so I could get survey_list updated.)
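That kind of QC check is easy to encode as a data test. A minimal sketch (the column names and the `flag_non_orange` helper are made up for illustration, not from this repo):

```r
library(testthat)

# Stand-in for a few rows from a freshly downloaded My_EBird csv
my_ebird <- data.frame(
  Submission.ID = c("S001", "S002"),
  Location      = c("Orange County Park", "Hilltop Preserve")
)

# Return submission IDs whose location name is missing "Orange"
flag_non_orange <- function(df) {
  df$Submission.ID[!grepl("Orange", df$Location)]
}

test_that("submissions with non-Orange locations are flagged for QC", {
  expect_equal(flag_non_orange(my_ebird), "S002")
})
```

Dropping a test like this into tests/testthat/ means the check reruns on every download instead of waiting for a manual QC pass.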
> Do you have something similar about how to build in tests?
Do you mean best practices for writing tests? Or a framework for doing so? Or both?
As far as a framework goes, I added testthat in #54.
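For anyone new to the framework, a minimal sketch of the testthat layout (the file paths follow testthat conventions; the helper being tested is hypothetical, not an actual function from this repo):

```r
# tests/testthat.R is the entry point R CMD check runs; it typically just
# calls test_check() to execute every test-*.R file in tests/testthat/.

# tests/testthat/test-process_observers.R
library(testthat)

# A hypothetical helper, standing in for a function in process_observers.R
clean_observer_names <- function(x) {
  trimws(tolower(x))
}

test_that("observer names are lowercased and trimmed", {
  expect_equal(clean_observer_names("  Jane Doe "), "jane doe")
  expect_equal(clean_observer_names(character(0)), character(0))
})
```

Each `test_that()` block is self-contained, so the console experiments you already do can usually be pasted in as `expect_*()` assertions and kept around.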
tests/testthat.R has the framework walk-through I was looking for, thanks!
> tests/testthat.R has the framework walk-through I was looking for, thanks!
I'll go ahead and create a PR that simply adds the testthat framework so that we don't have to wait for #54 to add tests for other stuff.
This PR starts to address #48 by adding the `.data$` prefix to variables. Notes:
But most of these are in WIP files, so I didn't touch them.
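For context, a minimal sketch of what the `.data$` change looks like (this uses dplyr's `.data` pronoun; the function and column names are made up for illustration):

```r
library(dplyr)

surveys <- data.frame(
  observer = c("a", "a", "b"),
  count    = c(2, 3, 1)
)

# A bare `observer` inside group_by() triggers the R CMD check NOTE
# "no visible binding for global variable". The .data$ pronoun makes it
# explicit that the name refers to a column of the piped data frame:
count_by_observer <- function(df) {
  df %>%
    group_by(.data$observer) %>%
    summarise(total = sum(.data$count), .groups = "drop")
}

count_by_observer(surveys)
```

The behavior is unchanged; `.data$` only disambiguates column references from environment variables, which is why these edits shouldn't alter any function's logic.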