Lombiq / Open-Source-Orchard-Core-Extensions

Looking for some useful Orchard Core extensions? Here's a bundle solution of all of Lombiq's open-source Orchard Core extensions (modules and themes). Clone and try them out!

OSOE-770: Add support for structured html-validate output #738

Closed AydinE closed 4 months ago

AydinE commented 5 months ago

Hi @DemeSzabolcs

Before I continue along this path, I wanted to check with you first on my current approach. See also: https://github.com/Lombiq/UI-Testing-Toolbox/pull/354

Currently, to make sure that it's backwards compatible and does not break old tests, I have not changed the default output formatter. Because of this, however, the user will have to manually set the formatter before usage; see this diff on the issue/OSOE-770 branch ([OSOE-770](https://lombiq.atlassian.net/browse/OSOE-770)): https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/compare/issue/OSOE-770?expand=1#diff-2af6fb34d863dcd5368561a878f904cf43ec23c5e9f70feb218a89f36bee2845R26

configuration.HtmlValidationConfiguration.HtmlValidationOptions.ResultFileFormatter =
    HtmlValidateFormatter.Names.Json;

If the JSON formatter is set, the new validationResult.GetParsedErrorsAsync() can now be used instead of the traditional validationResult.GetErrorsAsync(). I've added some checks that throw specific errors to tell the user about the issue if they try to use validationResult.GetParsedErrorsAsync() without setting the formatter to JSON first.

After this, the errors are parsed into the new HtmlValidationError object, see: https://github.com/Lombiq/UI-Testing-Toolbox/pull/354/files/701f9017ecc72947a76501ae9df2f76d7d713b33#diff-befb2930039e072827e787f53aab0f5ae1adda39f5ca02961606e36460738dbc This way, certain RuleIds (or whatever else) can be excluded in a neater way than before.
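Roughly how I picture using it in a test (just a sketch; the exact members of HtmlValidationError come from the linked UI Testing Toolbox PR, so treat the property name and the example rule ID as assumptions):

```csharp
// Sketch only: assumes the JSON formatter is set as shown above, and that
// HtmlValidationError exposes a RuleId property (per the linked PR). Runs inside a UI test.
var errors = await validationResult.GetParsedErrorsAsync();

// Excluding a rule becomes a simple filter on parsed objects instead of string matching.
var remainingErrors = errors
    .Where(error => error.RuleId != "element-required-attributes") // Example html-validate rule ID.
    .ToList();

// Assert however the test suite prefers; a Shouldly-style assertion is assumed here.
remainingErrors.ShouldBeEmpty();
```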

Let me know your thoughts and/or any changes you'd like to see

DemeSzabolcs commented 5 months ago

I see, thanks.

> Currently, to make sure that it's backwards compatible and does not break old tests, I have not changed the default output formatter.

Is there a chance that it would break something somehow?

  1. Because if not, I think it should be the default output formatter, based on this part in the issue description:

> Keep the simple text output too, but this output should from then on be the recommended way. Change all built-in features to use that.

AydinE commented 5 months ago

> I see, thanks.
>
> > Currently, to make sure that it's backwards compatible and does not break old tests, I have not changed the default output formatter.
>
> Is there a chance that it would break something somehow?
>
> 1. Because if not, I think it should be the default output formatter, based on this part in the issue description:
>
> > Keep the simple text output too, but this output should from then on be the recommended way. Change all built-in features to use that.

@DemeSzabolcs I would of course change all the references on our side to use the new formatted output, but I meant it more in the sense of external projects that rely on this for testing; those would break with this update, right?

DemeSzabolcs commented 5 months ago

> I would of course change all the references on our side to use the new formatted output, but I meant it more in the sense of external projects that rely on this for testing; those would break with this update, right?

If you can set it as the default inside the https://github.com/Lombiq/UI-Testing-Toolbox project, then I think not, because these modules are not updated anywhere automatically.

Other projects would still use the old version. And we usually have submodule/Orchard Core update to-dos, where the developer will update the module and also fix the tests based on the breaking changes.

AydinE commented 4 months ago

Alright @DemeSzabolcs, I have set JSON as the default and have changed the tests to use the new parsed output. Let me know if anything else is needed.

DemeSzabolcs commented 4 months ago

Okay, I don't know of anything else right now. Let me know when it's ready for a review.

AydinE commented 4 months ago

@DemeSzabolcs In that case I think it's ready for a review

DemeSzabolcs commented 4 months ago

Okay, thank you! I will do a review in the upcoming days.

DemeSzabolcs commented 4 months ago

This run was flaky: https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/actions/runs/8619854435?pr=738

I downloaded the failure dump.

And it failed with this message: (screenshot)

DemeSzabolcs commented 4 months ago

@AydinE There is a static code analyzer warning: https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/actions/runs/8738561440/job/23977863448?pr=738#step:6:162

AydinE commented 4 months ago

> @AydinE There is a static code analyzer warning: https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/actions/runs/8738561440/job/23977863448?pr=738#step:6:162

Hey @DemeSzabolcs, indeed. I was actually waiting for feedback from you on the notes I added before fixing everything up.

github-actions[bot] commented 4 months ago

@check-spelling-bot Report

:red_circle: Please review

See the :open_file_folder: files view, the :scroll: action log, or :memo: job summary for details.

Unrecognized words (1)

binlog

Previously acknowledged words that are now absent

Bubl listheader mediatheme notlike npmrc PII Portainer prebuild taxonomyfield visualstudioextension vuejs

Some files were automatically ignored :see_no_evil:

These sample patterns would exclude them:

```
^\Qbuild.binlog\E$
```

You should consider adding them to:

```
.github/actions/spelling/excludes.txt
```

File matching is via Perl regular expressions. To check these files, more of their words need to be in the dictionary than not. You can use `patterns.txt` to exclude portions, add items to the dictionary (e.g. by adding them to `allow.txt`), or fix typos.

To accept these unrecognized words as correct and remove the previously acknowledged and now absent words and update file exclusions, you could run the following commands ... in a clone of the [git@github.com:Lombiq/Open-Source-Orchard-Core-Extensions.git](https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions.git) repository on the `issue/OSOE-770` branch ([:information_source: how do I use this?](https://github.com/check-spelling/check-spelling/wiki/Accepting-Suggestions)):

``` sh
curl -s -S -L 'https://raw.githubusercontent.com/Lombiq/GitHub-Actions/dev/apply.pl' | perl - 'https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/actions/runs/8784715177/attempts/1'
```

Available :books: dictionaries could cover words not in the :blue_book: dictionary

| Dictionary | Entries | Covers | Uniquely |
|-|-|-|-|
| [cspell:typescript/dict/typescript.txt](https://raw.githubusercontent.com/check-spelling/cspell-dicts/v20230509/dictionaries/typescript/dict/typescript.txt) | 1098 | 1 | 1 |

Consider adding them (in `.github/workflows/build-and-test.yml`) to `extra_dictionaries`:

``` yml
cspell:typescript/dict/typescript.txt
```

To stop checking additional dictionaries, add (in `.github/workflows/build-and-test.yml`):

``` yml
check_extra_dictionaries: ''
```

Errors (2)

See the [:open_file_folder: files](https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/pull/738/files/) view, the [:scroll: action log](https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/actions/runs/8784715177/job/24103531601#step:5:1), or [:memo: job summary](https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/actions/runs/8784715177/attempts/1#summary-24103531601) for details.

| [:x: Errors](https://github.com/check-spelling/check-spelling/wiki/Event-descriptions) | Count |
|-|-|
| [:x: check-file-path](https://github.com/check-spelling/check-spelling/wiki/Event-descriptions#check-file-path) | 1 |
| [:information_source: large-file](https://github.com/check-spelling/check-spelling/wiki/Event-descriptions#large-file) | 1 |

See [:x: Event descriptions](https://github.com/check-spelling/check-spelling/wiki/Event-descriptions) for more information.
AydinE commented 4 months ago

Hey @DemeSzabolcs, while trying to finish up this task I've run into an issue when running tests locally, and I am wondering if you have the same or if you know how to potentially fix it. When I run the tests, the Chrome instance that was used for the test does not seem to be disposed. This happens both when I run the tests through the IDE and via the CLI with dotnet test (headless); this actually locks up my PC fully if I try to run the full suite of tests. If I run them one by one, I have to run taskkill to get rid of the leftover processes.

This actually shouldn't happen because of the call to _context.Scope?.Dispose(); here, which should under the hood also call _driver?.Dispose();.
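Roughly the chain I'd expect (an illustrative sketch only, not the actual UI Testing Toolbox code; the type and member names are made up):

```csharp
using System;
using OpenQA.Selenium;

// Illustrative sketch of the expected cleanup chain, not the real Toolbox implementation.
public sealed class TestScope : IDisposable
{
    private readonly IWebDriver _driver;

    public TestScope(IWebDriver driver) => _driver = driver;

    public void Dispose() =>
        // Disposing the scope should also dispose the driver, which quits the Chrome and
        // chromedriver processes it started. If this never runs (or fails silently), the
        // browser processes pile up across tests and keep the CPU busy.
        _driver?.Dispose();
}
```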

Because of this, a full test run now takes about 31 minutes on a relatively powerful PC that I have here at home. I am wondering if this issue is also happening for you and the rest of the team, and also perhaps in the pipeline?

Would love to get your thoughts on this

Edit: Just for some extra info, the first couple of groups of tests go smoothly and quickly, but later tests run very slowly because old instances of Chrome keep the CPU at 100% as more tests are run:

(screenshot)

DemeSzabolcs commented 4 months ago

> Hey @DemeSzabolcs, while trying to finish up this task I've run into an issue when running tests locally, and I am wondering if you have the same or if you know how to potentially fix it. When I run the tests, the Chrome instance that was used for the test does not seem to be disposed. This happens both when I run the tests through the IDE and via the CLI with dotnet test (headless); this actually locks up my PC fully if I try to run the full suite of tests. If I run them one by one, I have to run taskkill to get rid of the leftover processes.
>
> This actually shouldn't happen because of the call to _context.Scope?.Dispose(); here, which should under the hood also call _driver?.Dispose();.
>
> Because of this, a full test run now takes about 31 minutes on a relatively powerful PC that I have here at home. I am wondering if this issue is also happening for you and the rest of the team, and also perhaps in the pipeline?
>
> Would love to get your thoughts on this
>
> Edit: Just for some extra info, the first couple of groups of tests go smoothly and quickly, but later tests run very slowly because old instances of Chrome keep the CPU at 100% as more tests are run:
>
> (screenshot)

I see. I don't have this issue, but I recommend checking this out, it might help: https://github.com/Lombiq/UI-Testing-Toolbox/blob/43eba31b50a2273b2c7ae4fe1406e901137877fc/Lombiq.Tests.UI/Docs/Troubleshooting.md?plain=1#L15

github-actions[bot] commented 4 months ago

This pull request has merge conflicts. Please resolve those before requesting a review.

AydinE commented 4 months ago

> I see. I don't have this issue, but I recommend checking this out, it might help: https://github.com/Lombiq/UI-Testing-Toolbox/blob/43eba31b50a2273b2c7ae4fe1406e901137877fc/Lombiq.Tests.UI/Docs/Troubleshooting.md?plain=1#L15

It's not an aborted test; this is happening for successfully run tests. For every single test, on multiple machines, multiple IDEs, and the CLI too.

Piedone commented 4 months ago

While not with that much CPU usage, I also see a leftover Chrome process when running more than one test in headless mode (it doesn't happen in non-headless):

(screenshot)

Piedone commented 4 months ago

I've created an issue for this: https://github.com/Lombiq/UI-Testing-Toolbox/issues/356. For now, I suggest not running many tests in parallel locally, or running them in non-headless mode. You shouldn't need to run all tests locally anyway, since the CI will run them; just run a single one during development to test your code.

github-actions[bot] commented 4 months ago

This pull request has merge conflicts. Please resolve those before requesting a review.

AydinE commented 4 months ago

Hey @DemeSzabolcs ,

I am having a bit of a hard time understanding why builds in the pipeline either fail or, even when they succeed, sometimes say they might be "flaky".

The issue is always with the same test: https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/pull/738/commits/dd8c6e5dfe2736c915b4257c886eb1771e03cd43#diff-2af6fb34d863dcd5368561a878f904cf43ec23c5e9f70feb218a89f36bee2845R20 Weirdly, though, no matter how hard I try, I cannot replicate this issue locally (2 different machines, different IDEs), and I have even tried running the exact command that the pipeline seems to run through the CLI: `dotnet test --configuration Debug --nologo --logger 'trx;LogFileName=test-results.trx' --logger 'console;verbosity=detailed' --verbosity quiet test/Lombiq.OSOCE.Tests.UI/Lombiq.OSOCE.Tests.UI.csproj`. None of these ever gives the same issues as the pipeline.

Is there any way to more closely reproduce the pipeline, or do you have any ideas why this specific test might be failing in the pipeline but not locally?

github-actions[bot] commented 4 months ago

This pull request has merge conflicts. Please resolve those before requesting a review.

github-actions[bot] commented 4 months ago

This pull request has merge conflicts. Please resolve those before requesting a review.

AydinE commented 4 months ago

Hey @DemeSzabolcs ,

Did you see my message? https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/pull/738#issuecomment-2085828386 Any thoughts?

DemeSzabolcs commented 4 months ago

> Hey @DemeSzabolcs,
>
> Did you see my message? #738 (comment) Any thoughts?

For some reason no, thanks for pinging! I will check it out today.

DemeSzabolcs commented 4 months ago

@AydinE So the flakiness indication works this way: a test is retried 3 times on the CI. If it fails 0 times, there will be no warning; if it fails 1-2 times but passes on a later run, there will be a flakiness warning (there will be a generated UI test failure dump too, so you can see what the problem is). If a test fails three times, then the workflow will fail too and the test will be considered failed.

Some flakiness is okay. By default, a test is retried 3 times. If it only fails once, it's not the same test every time, and a test is only flaky in some runs while there are a lot of runs with no flaky tests at all, then it's okay.

Of course, if it fails 3 times in a row, then the workflow will fail, and that indicates that there is a real problem.

If you cannot reproduce it locally, I would recommend looking into the failure dumps and build logs for debugging this.
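In other words, roughly this policy (an illustrative sketch only, not the actual workflow code; RunTestOnce and the other helpers are made up):

```csharp
// Illustrative sketch of the CI retry policy described above; not the real implementation.
const int maxAttempts = 3;
var failures = 0;
var passed = false;

for (var attempt = 1; attempt <= maxAttempts && !passed; attempt++)
{
    passed = RunTestOnce(); // Hypothetical helper that runs the test a single time.
    if (!passed) failures++;
}

if (passed && failures > 0)
{
    // Failed once or twice but eventually passed: mark the run as flaky and keep the failure dump.
    ReportFlaky(failures);
}
else if (!passed)
{
    // Failed on every attempt: the test is considered failed and the workflow fails too.
    FailWorkflow();
}
```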

I checked out the failed run of UIKitShowcasePageShouldBeCorrect and this was in the test output log:

2024-05-01 08:48:24.5374 - The test failed with the following exception: Lombiq.Tests.UI.Exceptions.HtmlValidationAssertionException: Invalid JSON, make sure to set the OutputFormatter to JSON. Check the HTML validation report in the failure dump for details.
 ---> System.Text.Json.JsonException: Invalid JSON, make sure to set the OutputFormatter to JSON.

You can download it from here: https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/actions/runs/8907281677/artifacts/1463356716

So it's the same problem as I mentioned here: https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/pull/738#issuecomment-2048680723

AydinE commented 4 months ago

> I checked out the failed run of UIKitShowcasePageShouldBeCorrect and this was in the test output log:

@DemeSzabolcs Yeah, that is the weird part; the error is: Invalid JSON, make sure to set the OutputFormatter to JSON. However, as mentioned, that makes no sense, because the default output is now set to JSON, and in one of the latest commits I even specifically set it to JSON just for that specific test; same issue.

Weirdly enough, though, this is not the only test that uses this parsed error method, and none of the other tests seem to have the same issue. So I am just wondering if there is anything different about the tests running in the pipeline compared to running them locally? Because it's hard to debug if I can't replicate the issue.

EDIT: In the last run, the test that has the issue is disabled, just to see if any of the other tests were having the same problems, but as we can see they run fine.

DemeSzabolcs commented 4 months ago

> > I checked out the failed run of UIKitShowcasePageShouldBeCorrect and this was in the test output log:
>
> @DemeSzabolcs Yeah, that is the weird part; the error is: Invalid JSON, make sure to set the OutputFormatter to JSON. However, as mentioned, that makes no sense, because the default output is now set to JSON, and in one of the latest commits I even specifically set it to JSON just for that specific test; same issue.
>
> Weirdly enough, though, this is not the only test that uses this parsed error method, and none of the other tests seem to have the same issue. So I am just wondering if there is anything different about the tests running in the pipeline compared to running them locally? Because it's hard to debug if I can't replicate the issue.
>
> EDIT: In the last run, the test that has the issue is disabled, just to see if any of the other tests were having the same problems, but as we can see they run fine.

The difference between the local and CI run could be that the CI run uses Ubuntu: https://github.com/Lombiq/Open-Source-Orchard-Core-Extensions/blob/dev/.github/workflows/build-and-test.yml#L17

I would also try to compare the failing test to the other ones where you needed to change the configuration, to see if there is any difference between this one and the others.

Also, the failure dump says that the error is here: https://github.com/Lombiq/UI-Testing-Toolbox/pull/354/files#diff-67847fe58db929ae56cd24e4ba914c28af919cdf0272fd3cbf193c995d2666aeR37-R40

So I guess there is a special case in this test on the CI where it enters the if. Maybe that if should be modified to avoid this flakiness. You could also debug it on the CI, by the way: instead of throwing inside the if, you could try printing out the output value to see what the problem is.
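Something like this, for example (a sketch against the code in the linked diff; TryParseErrors and _testOutputHelper are just placeholders for whatever is actually there):

```csharp
using System.Text.Json;

// Sketch of the debugging idea: log the raw html-validate output before throwing, so the
// CI logs show exactly what the parser received. Member names here are placeholders.
if (!TryParseErrors(output, out var parsedErrors))
{
    _testOutputHelper.WriteLine("Raw html-validate output: " + output);

    throw new JsonException("Invalid JSON, make sure to set the OutputFormatter to JSON.");
}
```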

Other than this, I don't have any idea.

github-actions[bot] commented 4 months ago

This pull request has merge conflicts. Please resolve those before requesting a review.