rajsite opened 1 year ago
Angular apps should enable `errorOnUnknownElements` and `errorOnUnknownProperties` in their TestBeds: https://dev.azure.com/ni/DevCentral/_workitems/edit/2446157/
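For reference, enabling these in a test entry point looks roughly like the sketch below. This is a sketch only: the exact option placement (`initTestEnvironment` options vs. per-`TestBed` configuration) varies by Angular version, so verify against the `TestBed` docs for the version in use.

```js
// Sketch of a test entry point (e.g. test.ts) -- verify against your Angular version.
import { getTestBed } from '@angular/core/testing';
import {
    BrowserDynamicTestingModule,
    platformBrowserDynamicTesting
} from '@angular/platform-browser-dynamic/testing';

getTestBed().initTestEnvironment(
    BrowserDynamicTestingModule,
    platformBrowserDynamicTesting(),
    {
        // Fail the test when a template references an unknown element or property
        errorOnUnknownElements: true,
        errorOnUnknownProperties: true
    }
);
```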
jasmine test suites should elevate `console.error` and `console.warn` to test failures
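In Jasmine this is usually done with `spyOn(console, 'error')` in a global `beforeEach`; a minimal framework-free sketch of the idea (all names here are illustrative):

```js
// Sketch: route console.error / console.warn calls into thrown errors so the
// test runner reports them as failures. Names here are illustrative.
const originalConsoleMethods = new Map();

function failOnConsole(method) {
    originalConsoleMethods.set(method, console[method]);
    console[method] = (...args) => {
        throw new Error(`console.${method} called during test: ${args.join(' ')}`);
    };
}

function restoreConsole(method) {
    const original = originalConsoleMethods.get(method);
    if (original) {
        console[method] = original;
        originalConsoleMethods.delete(method);
    }
}
```

In a real suite, install this in `beforeEach` and restore in `afterEach`, or use a spy and assert at the end of each test that it was never called.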
Nimble has a handy pattern for parameterized tests that allows individual cases to be focused, disabled, etc. Useful for dealing with intermittency or debugging a specific case: https://github.com/ni/nimble/pull/1379
Maybe it should be published as a package or a documented convention, or maybe an existing package already exists that would be better to standardize on; I never looked. Such a tool could maybe be a package from the styleguide, `@ni/jasmine-utils` or something.
Edit: Nimble's parameterized pattern has been published as `@ni/jasmine-parameterized`
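For illustration only, a helper in this spirit might look like the sketch below. This is a hypothetical shape, NOT the actual `@ni/jasmine-parameterized` API; see that package for the real interface.

```js
// Hypothetical sketch of a parameterized-test helper -- not the actual
// @ni/jasmine-parameterized API.
function runCases(cases, body) {
    const ran = [];
    for (const testCase of cases) {
        if (testCase.disabled) {
            continue; // in Jasmine this would map to xit()
        }
        body(testCase.value); // in Jasmine this would map to it(testCase.name, ...)
        ran.push(testCase.name);
    }
    return ran;
}
```

The point of the pattern is that an intermittent or under-investigation case can be disabled or focused individually without deleting it or commenting out the whole suite.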
Helper functions should not close over existing state (instead take values as parameters) and should not contain `expect` statements. Instead, helpers should calculate some value that can be used in an `expect` statement; keep test logic and `expect` statements separated. Example discussion: https://github.com/ni/nimble/pull/1335#discussion_r1260440745
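A minimal sketch of the distinction (names are illustrative):

```js
// Discouraged shape: closes over suite state and hides the assertion.
// function expectButtonLabel() {
//     expect(fixture.nativeElement.textContent).toBe('Save');
// }

// Preferred shape: a pure function over explicit inputs that returns a value;
// the expect stays in the test body.
function getTrimmedLabel(element) {
    return (element.textContent ?? '').trim();
}

// In the test: expect(getTrimmedLabel(button)).toBe('Save');
```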
Notes:
Avoid conditionally executing `expect` statements, a pattern that often emerges from updates / modifications to parameterized tests.
Some alternatives are below.
Refinement: Another example of conditionally executing expects is looping over `expect` statements driven by test state that is not statically defined.
Example:
```js
for (const val of obj.dynamicValues()) {
    expect(val).toBe(true);
}
```
In this example, if `dynamicValues()` has a behavior change the test may not account for it. For example, if timing changes and `dynamicValues()` returns an empty array until later in the lifecycle, the loop body never runs and the test passes without asserting anything.
Alternative:
Have the test compare against a statically defined set of values:
```js
const expected = [true, true, true];
const actual = obj.dynamicValues();
expect(actual).toEqual(jasmine.arrayWithExactContents(expected));
```
Alternative (discouraged):
This alternative should be discouraged: without statically defined expectations it takes much more debugging or actual runtime evaluation to interpret the test code. As a last resort / bare minimum, assert the number of expect cases that should run:
```js
// Unfavored alternative
const actual = obj.dynamicValues();
expect(actual.length).toBe(3);
for (const val of actual) {
    expect(val).toBe(true);
}
```
Refinement 2: Looped expects also make tests harder to debug; a failing line number no longer points to an expect with a clear responsibility. More discussion here: https://github.com/ni/nimble/pull/1620#discussion_r1450974143
Prefer to use Jasmine spies to track events / callbacks instead of manually tracking and mutating state. Benefits: spies come with built-in call tracking, argument capture, and purpose-built matchers.
When writing example spy usage, make sure to use an appropriate assertion based on what you need to check, e.g. `toHaveBeenCalledTimes`, etc.
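As a framework-free illustration of what a spy records (use `jasmine.createSpy()` and its matchers in real tests):

```js
// Minimal sketch of what a spy tracks -- illustrative only; use
// jasmine.createSpy() in real tests.
function createSpy() {
    const calls = [];
    const spy = (...args) => {
        calls.push(args);
    };
    spy.calls = calls;
    return spy;
}

// In a test, pass the spy as the event handler / callback, then assert on
// the recorded calls (cf. toHaveBeenCalledTimes / toHaveBeenCalledWith)
// instead of mutating a flag inside the handler.
```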
Be aware of, and use when appropriate, the Jasmine asymmetric equality testers, such as `jasmine.arrayContaining`, `jasmine.objectContaining`, etc., over custom processing code.
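For intuition only, the checks these testers perform are roughly the sketches below. In tests, use the real matchers; hand-rolled versions like these are exactly the custom processing code to avoid.

```js
// Roughly what jasmine.arrayContaining asserts: every expected item
// appears somewhere in the actual array.
function arrayContains(actual, expected) {
    return expected.every(item => actual.includes(item));
}

// Roughly what jasmine.objectContaining asserts (for flat values): every
// expected key/value pair is present on the actual object.
function objectContains(actual, expected) {
    return Object.entries(expected).every(([key, value]) => actual[key] === value);
}
```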
A collection of thoughts on Jasmine page objects for components:
- Expose behavior and state, not elements: `buttonPageObject.click()` and `buttonPageObject.text`, not `buttonPageObject.buttonElement.click()` and `buttonPageObject.buttonElement.innerHtml`.
- Avoid `document.querySelector`, as this will produce ambiguous results when more than one instance of the component is on the page. (seen in this PR)

We probably also want conventions for Playwright page objects and Playwright tests in general.
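A sketch of a page object following those two points (names are illustrative; a stubbed element is enough to exercise it):

```js
// Illustrative page object: exposes behavior/state rather than elements, and
// scopes all queries to the component's own root element, never
// document.querySelector, so multiple instances on the page stay unambiguous.
class ButtonPageObject {
    constructor(root) {
        this.root = root;
    }

    click() {
        this.getButton().click();
    }

    get text() {
        return (this.getButton().textContent ?? '').trim();
    }

    // Kept private-by-convention; tests should not reach for the element.
    getButton() {
        const button = this.root.querySelector('button');
        if (!button) {
            throw new Error('button element not found');
        }
        return button;
    }
}
```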
It would be nice to capture clearer guidance about how to write Angular component "unit" tests. Should we try to make them pure unit tests with all dependencies mocked? Should services be providing mocks of themselves for other components' tests to use? Should most tests actually be integration tests with real dependencies?
There's a bit more discussion in NI internal SO https://ni.stackenterprise.co/questions/1051/1123#1123
Frankly, I question whether we would ever produce a document on this topic short enough that someone would use it. This is not a question unique to NI, and it still has not been reduced to a workflow diagram. It's an art, not a science; or really, the people who do it best answer each of these questions on a case-by-case basis. Maybe this is best solved with ChatGPT? Give it your situation and let it produce a tailored answer.
When the questions a document attempts to answer are so situationally dependent, it becomes questionable whether it should be written.
Change my mind.
Describe the issue
Making an issue to start gathering things that would be useful to include / consider if someone were writing test conventions.
This issue is just a raw brain dump / discussion thread. Feel free to add items as comments