Closed luminousleek closed 3 months ago
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 48.97%. Comparing base (8a5688f) to head (cab2eb5). Report is 4 commits behind head on master.
@luminousleek Thanks for the great work! I have a few ideas:
I think it is fine not to print the message to the console; using toHaveBeenCalledWith() to check the input of the logger is enough to ensure the correctness of the method.
Since we will probably use the mock logger and the mock functions in other test files, do you think it would be a good idea to create a manual mock instead, so that we do not have to reimplement the mock functions in other files? For your reference: jest-manual-mocks
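To illustrate the manual-mock idea, here is a rough sketch. In Jest the mock would live in a `__mocks__/logger.js` file next to the real module (the path and method names are assumptions, not the project's actual layout) and use `jest.fn()`; a tiny stand-in for `jest.fn()` is inlined here so the sketch runs in plain Node.

```javascript
// Minimal stand-in for jest.fn(): records each call's arguments on .mock.calls.
const mockFn = () => {
  const f = (...args) => { f.mock.calls.push(args); };
  f.mock = { calls: [] };
  return f;
};

// Hypothetical shared mock logger, as __mocks__/logger.js might export it.
const mockLogger = {
  info: mockFn(),
  warn: mockFn(),
  error: mockFn(),
};

module.exports = mockLogger;
```

Test files could then activate the shared mock with `jest.mock(...)` instead of redefining the mock functions locally in each file.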
If I'm not wrong, the mock logger is shared among all the tests in a file; in other words, the mock logger's data does not reset for every test and actually accumulates. I am thinking of adding something like:
afterEach(() => {
  jest.clearAllMocks();
});
to reset the logger to a clean state after every test. It might not be very useful for now, but I think it is safer to do so, because issues might arise in the future (e.g. checking mock.calls.length).
Please feel free to add your comments!
- Since we are probably going to use the mock logger and the mock functions in other test files, do you think it is a good idea to create a manual mock instead so that we do not have to reimplement the mock functions again in other files? For your reference, jest-manual-mocks
Yeah I think this is a good idea, will probably do it in another PR. Thanks for the link.
- If I'm not wrong, the mock logger is shared among all the tests in a file, in other words, the mock logger's data is not resetting for every test and is actually accumulated. I am thinking of adding something like :
afterEach(() => { jest.clearAllMocks(); });
to reset the logger to a clean state after every test. Might not be very useful for now but I think it is safer to do so, because issues might arise in the future (eg. checking mock.calls.length)
Yup I agree, the length issue didn't occur to me until you mentioned it. Thanks for commenting on this PR!
Great point about clearing the mocks! I think that is particularly important to note, as bugs that arise from not clearing mocks may be harder to spot when they happen. Great catch!
Hi @luminousleek thank you very much! This looks like excellent and detailed work.
- If I'm not wrong, the mock logger is shared among all the tests in a file, in other words, the mock logger's data is not resetting for every test and is actually accumulated. I am thinking of adding something like :
afterEach(() => { jest.clearAllMocks(); });
to reset the logger to a clean state after every test. Might not be very useful for now but I think it is safer to do so, because issues might arise in the future (eg. checking mock.calls.length)
Added the afterEach teardown hook. @yucheng11122017, please do take a look (and sorry for doing this after your review).
Edit: after looking at some code on StackOverflow regarding this, it seems that they put the jest.clearAllMocks() call in a beforeEach hook instead of an afterEach hook. I think this makes a bit more sense in the edge case that this test suite runs after another test suite which didn't reset the mocks after its tests were run.
Good catch! I agree with the use of beforeEach, as I feel this would ensure that the tests in a file are guaranteed to have cleared mocks before each test is run, rather than depending on another test file to ensure that the mocks are cleared. Here's an example for clarity.

Assume Test 1 and Test 2 are in two separate files and are run sequentially.

With beforeEach:
Test 1 clears mocks -> Test 1 runs -> Test 2 clears mocks -> Test 2 runs

With afterEach:
Test 1 runs -> Test 1 clears mocks -> Test 2 runs -> Test 2 clears mocks

In the afterEach case, you can see that the clearing of mocks relies on the previous Test 1 to clear them, whereas with beforeEach, if the Test 2 run fails because mocks were not cleared, it is easier to tell why: the Test 2 file is missing the beforeEach setup hook. This makes each test file responsible for clearing the mocks for its own tests.
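The sequencing argument above can be sketched in plain Node (no Jest runtime needed); `clearMocks` stands in for `jest.clearAllMocks()`, and the scenario is the risky one where an earlier test file never cleared its mocks.

```javascript
// Shared mock state that accumulates across test files.
const mock = { calls: [] };
const clearMocks = () => { mock.calls.length = 0; };

// Test file 1 has no clearing hook at all, so its call lingers:
mock.calls.push('test 1 ran');

// Test file 2 clears in beforeEach, so file 1's leftovers cannot leak in:
clearMocks();                    // beforeEach
mock.calls.push('test 2 ran');

console.log(mock.calls.length);  // 1: test 2 sees only its own call
```

Had file 2 relied on an afterEach in file 1 instead, `mock.calls.length` would have been 2 at this point and a `mock.calls.length` assertion in file 2 would fail for a non-obvious reason.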
What is the purpose of this pull request?
Partially addresses #2099. Related to #2053.
Overview of changes:
Anything you'd like to highlight/discuss:
Mocking vs Spying
I chose to mock the logger instead of spying on it, so as to decouple the logger from this test. If I had spied on the logger, the actual implementation of the logger would have been used, which may consume (slightly) more resources. Errors from winston might also interfere with the test (even though that's very unlikely).

One side-effect of mocking the logger is that the logger output will not be printed to the console, as seen in the screenshot below (left is spied, right is mocked).
If the logger output still needs to be printed to the console, then a mock implementation of the warn method could look something like this, and would result in this being printed to the console, which does look pretty clunky. Do let me know which approach you prefer.
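For reference, a hedged sketch of such a mock warn that still echoes to the console: in Jest this would be roughly `jest.fn((msg) => console.warn(msg))`; a plain-Node stand-in is used here so the sketch runs on its own, and the message text is illustrative only, not the project's actual warning.

```javascript
// Record calls manually (like jest.fn() would) while still printing output.
const calls = [];
const mockLogger = {
  warn: (msg) => {
    calls.push(msg);               // still recorded, so input checks work
    console.warn(`warn: ${msg}`);  // and the warning still reaches the console
  },
};

mockLogger.warn("panel has a header slot, 'header' attribute has no effect.");
```

The extra `warn:` prefix in the echoed line is what makes the printed output look clunky compared to the real logger's formatting.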
Inconsistent naming in NodeProcessor.data.ts
The naming conventions of the test inputs in NodeProcessor.data.ts are inconsistent. Many of the elements there have a test to check whether slots override the attributes of the elements, but the tests are named in two separate ways:
PROCESS_[ELEMENT]_ATTRIBUTES_NO_OVERRIDE
PROCESS_[ELEMENT]_[ATTRIBUTE]_SLOT_TAKES_PRIORITY
I think that's why the author of #2053 wrote another constant called PROCESS_PANEL_HEADER_SLOT_TAKES_PRIORITY when the constant PROCESS_PANEL_ATTRIBUTES_NO_OVERRIDE tests for the exact same thing. That's also why I have an expect statement after each of these two tests, because both times the logger is called with the same input.

A PR could be done to make the naming convention consistent, as well as to remove the unnecessary test.
Logger warnings still inconsistent
With the merging of #2053, Panels, Dropdowns, Modals and Popovers print warnings to the console if there are slots overriding the element attributes. However, other elements such as Quizzes and QOptions do not print warnings when this happens. This is because the code in MdAttributeRenderer.ts does not check every attribute for an overriding slot, so the logger will not warn if the unchecked attributes are overridden.

A simple (though untested) fix could be to have a new method as follows
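The elided screenshot showed the proposed method; a rough sketch of what it might look like follows. The names `checkAndWarnOverride`, `hasSlotOverridingAttribute` and `processAttribute` are assumptions, not the actual MdAttributeRenderer API, and minimal stubs are included so the sketch runs on its own.

```javascript
// Hypothetical helper: warn whenever a slot overrides the attribute, for
// every attribute, instead of only the ones currently checked.
function checkAndWarnOverride(node, attribute, logger, renderer) {
  if (renderer.hasSlotOverridingAttribute(node, attribute)) {
    logger.warn(`${node.name} has a ${attribute} slot, '${attribute}' attribute has no effect.`);
    return;
  }
  renderer.processAttribute(node, attribute);
}

// Minimal stubs demonstrating both paths:
const warnings = [];
const logger = { warn: (msg) => warnings.push(msg) };
const renderer = {
  hasSlotOverridingAttribute: (node, attr) => node.slots.includes(attr),
  processAttribute: () => {},
};

checkAndWarnOverride({ name: 'quiz', slots: ['intro'] }, 'intro', logger, renderer); // warns
checkAndWarnOverride({ name: 'quiz', slots: [] }, 'intro', logger, renderer);        // processes
```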
and replace all the processAttributeWithoutOverride calls with this.

Testing instructions:
Proposed commit message: (wrap lines at 72 characters)
Checklist: :ballot_box_with_check:
Reviewer checklist:
Indicate the SEMVER impact of the PR:
At the end of the review, please label the PR with the appropriate label:
r.Major, r.Minor, r.Patch.

Breaking change release note preparation (if applicable):