Our tests currently focus primarily on the correctness of the features --- for example, that a given feature outputs the proper values, handles edge cases, and aligns with our intuitions for what that feature is supposed to measure.
Before releasing the package, however, we should also test that the overall system is robust to improper data inputs; for example:
- What if the data itself has issues --- e.g., NA values in columns we expect to be fully populated?
- What if the user passes in column names that don't exist, or the data's columns are not named as expected?
- How do we fail gracefully: check for error conditions, inform the user of the issue, and help them correct it?
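One way to fail gracefully is to validate inputs up front and raise errors whose messages tell the user both what went wrong and how to fix it. Below is a minimal sketch, assuming the package operates on pandas DataFrames; the `validate_input` helper, the required-column list, and the error messages are all hypothetical, not part of the package's current API.

```python
import pandas as pd


def validate_input(df, required_columns):
    """Hypothetical pre-flight check for feature-extraction inputs.

    Raises TypeError or ValueError with an actionable message rather than
    letting a confusing KeyError or silent NaN propagate downstream.
    """
    # Wrong input type: tell the user what we got instead.
    if not isinstance(df, pd.DataFrame):
        raise TypeError(
            f"expected a pandas DataFrame, got {type(df).__name__}"
        )

    # Missing or misnamed columns: list what is missing AND what is
    # available, so typos are easy to spot and correct.
    missing = [c for c in required_columns if c not in df.columns]
    if missing:
        raise ValueError(
            f"missing required column(s) {missing}; available columns are "
            f"{list(df.columns)} -- check for typos or rename your columns"
        )

    # NA values where we expect complete data: name the offending columns
    # and suggest a remedy.
    na_cols = [c for c in required_columns if df[c].isna().any()]
    if na_cols:
        raise ValueError(
            f"column(s) {na_cols} contain NA values; drop or impute them "
            "before computing features"
        )
```

Robustness tests then become straightforward: each improper input (wrong type, misnamed column, NA values) gets a test asserting that the appropriate error is raised and that its message names the problem.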