MarkBind / markbind

MarkBind is a tool for generating content-heavy websites from source files in Markdown format
https://markbind.org/
MIT License

Test logger calls in tests for NodeProcessor #2463

Closed · luminousleek closed this 3 months ago

luminousleek commented 3 months ago

What is the purpose of this pull request?

Partially addresses #2099. Related to #2053.

Overview of changes:

Anything you'd like to highlight/discuss:

Mocking vs Spying

I chose to mock the logger instead of spying on it, so as to decouple the logger from this test. If I had spied on the logger, the actual implementation of the logger would have been used, which may consume (slightly) more resources, and errors from winston might interfere with the test (even though that's very unlikely).
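
For illustration, the two approaches look roughly like this (the mock shape here is simplified, and the spy variant is shown only for comparison, not what this PR does):

// Mocking: replaces the module, so the real winston-backed logger never runs.
jest.mock('../../../src/utils/logger', () => ({
  warn: jest.fn(),
}));

// Spying (alternative): keeps the real implementation but records calls.
// const logger = require('../../../src/utils/logger');
// jest.spyOn(logger, 'warn');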

One side-effect of mocking the logger is that the logger output will not be printed in the console, as seen in the screenshot below (left is spied, right is mocked).

[Screenshot: test console output with the logger spied (left) vs mocked (right)]

If the logger output still needs to be printed to the console, then a mock implementation of the warn method could look something like this:

// forward warn calls to console.log so the warnings still show up in the test output
jest.mock('../../../src/utils/logger', () => ({
  warn: jest.fn(msg => console.log(`warn: ${msg}`)),
}));

and would result in this being printed to the console

[Screenshot: console output showing the mocked warn messages printed via console.log]

which does look pretty clunky. Do let me know which approach you prefer.

Inconsistent naming in NodeProcessor.data.ts

The naming conventions of the test inputs in NodeProcessor.data.ts are inconsistent. Many of the elements there have a test that checks whether slots override the attributes of the element, but the constants are named in two different ways:

  1. PROCESS_[ELEMENT]_ATTRIBUTES_NO_OVERRIDE
  2. PROCESS_[ELEMENT]_[ATTRIBUTE]_SLOT_TAKES_PRIORITY

I think that's why the author of #2053 wrote another constant called PROCESS_PANEL_HEADER_SLOT_TAKES_PRIORITY when the existing constant PROCESS_PANEL_ATTRIBUTES_NO_OVERRIDE tests for the exact same thing. That's also why I have an expect statement after each of these two tests: in both cases the logger is called with the same input.
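
For illustration, the duplicated assertion looks roughly like this (the actual warning string lives in the test data; the name below is just a placeholder):

// both constants exercise the same slot-overrides-attribute case,
// so both tests assert the same logger call
expect(logger.warn).toHaveBeenCalledWith(expectedPanelHeaderOverrideWarning);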

A follow-up PR could make the naming convention consistent and remove the redundant test.

Logger warnings still inconsistent

With #2053 merged, Panels, Dropdowns, Modals and Popovers print warnings to the console when slots override their attributes. However, other elements such as Quizzes and QOptions do not print warnings when this happens. This is because MdAttributeRenderer.ts does not check every attribute for an overriding slot, so the logger will not warn when the unchecked attributes are overridden.

A simple (though untested) fix could be to have a new method as follows

processAttribute(node, attribute, isInline, slotName = attribute) {
  // only process the attribute if no slot overrides it
  if (!this.hasSlotOverridingAttribute(node, attribute, slotName)) {
    this.processAttributeWithoutOverride(node, attribute, isInline, slotName);
  }
}

and replace all the processAttributeWithoutOverride calls with this.
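
For illustration, a call site in MdAttributeRenderer.ts would change roughly like this (the 'header' attribute and argument list are just an example):

// before
this.processAttributeWithoutOverride(node, 'header', true);

// after: the slot-override check now happens inside processAttribute
this.processAttribute(node, 'header', true);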

Testing instructions:

Proposed commit message: (wrap lines at 72 characters)

Test logging calls in NodeProcessor

Previously, calls to the logger were not tested for in testing
NodeProcessor.

Let's mock the logger and test if it is being called correctly.

Checklist: :ballot_box_with_check:


Reviewer checklist:

Indicate the SEMVER impact of the PR:

At the end of the review, please label the PR with the appropriate label: r.Major, r.Minor, r.Patch.

Breaking change release note preparation (if applicable):

Give a brief explanation note about:

  • what was the old feature that was made obsolete
  • any replacement feature (if any), and
  • how the author should modify his website to migrate from the old feature to the replacement feature (if possible).

codecov[bot] commented 3 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 48.97%. Comparing base (8a5688f) to head (cab2eb5). Report is 4 commits behind head on master.

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           master    #2463      +/-   ##
==========================================
+ Coverage   48.90%   48.97%   +0.06%     
==========================================
  Files         124      124              
  Lines        5247     5254       +7     
  Branches     1112     1112              
==========================================
+ Hits         2566     2573       +7     
- Misses       2373     2376       +3     
+ Partials      308      305       -3     
```


LamJiuFong commented 3 months ago

@luminousleek Thanks for the great work! I have a few ideas:

  1. I think it is fine to not print the message to the console; using toHaveBeenCalledWith() to check the input of the logger is enough to ensure the correctness of the method.

  2. Since we are probably going to use the mock logger and the mock functions in other test files, do you think it is a good idea to create a manual mock instead so that we do not have to reimplement the mock functions again in other files? For your reference, jest-manual-mocks

  3. If I'm not wrong, the mock logger is shared among all the tests in a file; in other words, the mock logger's state is not reset between tests and actually accumulates. I am thinking of adding something like:

    afterEach(() => {
      jest.clearAllMocks();
    });

    to reset the logger to a clean state after every test. It might not be very useful for now, but I think it is safer to do so, because issues might arise in the future (e.g. checking mock.calls.length).

Please feel free to add your comments!

luminousleek commented 3 months ago
  2. Since we are probably going to use the mock logger and the mock functions in other test files, do you think it is a good idea to create a manual mock instead so that we do not have to reimplement the mock functions again in other files? For your reference, jest-manual-mocks

Yeah I think this is a good idea, will probably do it in another PR. Thanks for the link.
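
For reference, a minimal sketch of what such a manual mock might look like, assuming Jest's __mocks__ convention (the file location and the set of exported methods are assumptions):

// src/utils/__mocks__/logger.js (hypothetical location, adjacent to the real module)
module.exports = {
  info: jest.fn(),
  warn: jest.fn(),
  error: jest.fn(),
};

Test files would then only need jest.mock('../../../src/utils/logger'); instead of repeating the factory function.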

  3. If I'm not wrong, the mock logger is shared among all the tests in a file; in other words, the mock logger's state is not reset between tests and actually accumulates. I am thinking of adding something like:
afterEach(() => {
    jest.clearAllMocks();
});

to reset the logger to a clean state after every test. It might not be very useful for now, but I think it is safer to do so, because issues might arise in the future (e.g. checking mock.calls.length).

Yup I agree, the length issue didn't occur to me until you mentioned it. Thanks for commenting on this PR!

itsyme commented 3 months ago

@luminousleek Thanks for the great work! I have a few ideas:

  1. I think it is fine to not print the message to the console; using toHaveBeenCalledWith() to check the input of the logger is enough to ensure the correctness of the method.
  2. Since we are probably going to use the mock logger and the mock functions in other test files, do you think it is a good idea to create a manual mock instead so that we do not have to reimplement the mock functions again in other files? For your reference, jest-manual-mocks
  3. If I'm not wrong, the mock logger is shared among all the tests in a file; in other words, the mock logger's state is not reset between tests and actually accumulates. I am thinking of adding something like:
afterEach(() => {
    jest.clearAllMocks();
});

to reset the logger to a clean state after every test. It might not be very useful for now, but I think it is safer to do so, because issues might arise in the future (e.g. checking mock.calls.length).

Please feel free to add your comments!

Great point about clearing mocks! I think that is particularly important to note, as bugs that arise from not clearing mocks may be harder to spot when they happen. Great catch!

yucheng11122017 commented 3 months ago

Hi @luminousleek thank you very much! This looks like excellent and detailed work.

luminousleek commented 3 months ago
  3. If I'm not wrong, the mock logger is shared among all the tests in a file; in other words, the mock logger's state is not reset between tests and actually accumulates. I am thinking of adding something like:
afterEach(() => {
    jest.clearAllMocks();
});

to reset the logger to a clean state after every test. It might not be very useful for now, but I think it is safer to do so, because issues might arise in the future (e.g. checking mock.calls.length).

Added the afterEach teardown hook, @yucheng11122017 please do take a look (and sorry for doing this after your review).

Edit: after looking at some code on StackOverflow regarding this, it seems that they put the jest.clearAllMocks() call in a beforeEach hook instead of an afterEach hook. I think this makes a bit more sense in the edge case where this test suite runs after another test suite that didn't reset its mocks.
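
That variant would look roughly like this:

// clear mock state before each test, so no test depends on a previous
// test (or a previous test file) having cleaned up after itself
beforeEach(() => {
  jest.clearAllMocks();
});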

itsyme commented 3 months ago
  3. If I'm not wrong, the mock logger is shared among all the tests in a file; in other words, the mock logger's state is not reset between tests and actually accumulates. I am thinking of adding something like:
afterEach(() => {
    jest.clearAllMocks();
});

to reset the logger to a clean state after every test. It might not be very useful for now, but I think it is safer to do so, because issues might arise in the future (e.g. checking mock.calls.length).

Added the afterEach teardown hook, @yucheng11122017 please do take a look (and sorry for doing this after your review).

Edit: after looking at some code on StackOverflow regarding this, it seems that they put the jest.clearAllMocks() call in a beforeEach hook instead of an afterEach hook. I think this makes a bit more sense in the edge case where this test suite runs after another test suite that didn't reset its mocks.

Good catch! I agree with using beforeEach, as I feel it guarantees that the tests in the file have cleared mocks before each test runs, rather than depending on another test file to ensure that the mocks are cleared. Here's an example for clarity:

Assume test 1 and test 2 are in two separate files and are run sequentially,

In beforeEach: Test 1 clear mocks -> Test 1 runs -> Test 2 clear mocks -> Test 2 runs

In afterEach: Test 1 runs -> Test 1 clear mocks -> Test 2 runs -> Test 2 clear mocks

In the case of using afterEach, you can see that the clearing of mocks relies on the previous Test 1 to clear them, whereas with beforeEach, if the Test 2 run fails because mocks were not cleared, it is easier to tell that the mocks were not cleared, since the Test 2 file does not have the beforeEach setup hook. This makes each test file responsible for clearing the mocks for its own tests.