Closed: simojo closed this issue 10 months ago
Hi @simojo, yes, I was aware of this issue. Thanks for surfacing it! Basically, it would be nice if we could use chasten during its own testing so that we can confirm the tool works. Is there a way in which we could "loosen" the assertions so that these tests will pass? Or, alternatively, might it be possible to remove the counts and simply confirm that the tool runs without crashing? Please let me know what you think!
@gkapfham We could leave in checks that we know will always be a part of chasten (e.g., function definitions, and maybe class definitions), but I feel that we need to remove the maximum value so that this never becomes a problem again. I'm not sure whether we can remove the counts entirely; I will have to look into that.
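As a sketch of what "loosening" could look like, a check could keep only a minimum count and drop the ceiling. Note that the YAML keys below are assumptions inferred from the failing output (id, name, pattern, min, max), not chasten's verified configuration schema:

```yaml
# Hypothetical sketch of a loosened check; key names are inferred from
# the failing output above and may not match chasten's actual schema.
checks:
  - id: "F002"
    name: "non-test-function-definition"
    pattern: './/FunctionDef[not(contains(@name, "test_"))]'
    count:
      min: 40   # the codebase should always define at least this many functions
      # max intentionally omitted so that codebase growth cannot fail CI
```

Keeping a floor while removing the ceiling preserves a sanity check (the tool found a plausible number of matches) without turning normal development into a test failure.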
Additionally, it appears that master is now suffering from the same problem:
✗ id: 'F002', name: 'non-test-function-definition', pattern: './/FunctionDef[not(contains(@name, "test_"))]', min=40,
max=70
• /home/runner/work/chasten/chasten/tests/test_main.py - 1 matches
• /home/runner/work/chasten/chasten/chasten/util.py - 5 matches
• /home/runner/work/chasten/chasten/chasten/main.py - 14 matches
• /home/runner/work/chasten/chasten/chasten/checks.py - 9 matches
• /home/runner/work/chasten/chasten/chasten/output.py - 10 matches
• /home/runner/work/chasten/chasten/chasten/server.py - 2 matches
• /home/runner/work/chasten/chasten/chasten/process.py - 4 matches
• /home/runner/work/chasten/chasten/chasten/filesystem.py - 12 matches
• /home/runner/work/chasten/chasten/chasten/database.py - 6 matches
• /home/runner/work/chasten/chasten/chasten/validate.py - 3 matches
• /home/runner/work/chasten/chasten/chasten/configuration.py - 5 matches
These matches total 71, which is just above the specified maximum of 70. This adds weight to the importance of this issue.
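For readers unfamiliar with how these XPath patterns behave, the F002 check matches every function definition whose name does not contain "test_". A rough stdlib illustration using Python's ast module (chasten's actual matching runs XPath against an XML serialization of the AST, so this mirrors the counting semantics only, not the implementation):

```python
import ast

# A tiny source sample to demonstrate what the pattern counts.
SOURCE = """
def helper():
    pass

def test_helper():
    pass

class Widget:
    def render(self):
        pass
"""

tree = ast.parse(SOURCE)

# Mirror of .//FunctionDef[not(contains(@name, "test_"))]:
# count FunctionDef nodes whose name does not contain "test_".
non_test_functions = [
    node.name
    for node in ast.walk(tree)
    if isinstance(node, ast.FunctionDef) and "test_" not in node.name
]

print(non_test_functions)  # -> ['helper', 'render']
```

Note that methods such as `render` are also FunctionDef nodes, which is why per-file counts like the 14 matches in main.py add up quickly as a codebase grows.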
Describe the bug
When on a development branch, after having made some changes to the source code, I was running the tests and realized that chasten is being called to analyze itself:
chasten analyze testing --search-path /path/to/chasten/repo --config /path/to/chasten/repo/.chasten --verbose
The problem with this is that chasten is using a hard-coded config located in the
.chasten
directory, and because chasten's code changes, these checks will not always pass. This hard-coded config was being read by the test case
tests/test_main.py::test_cli_analyze_correct_arguments_analyze_chasten_codebase
. Chasten is essentially shooting itself in the foot, and will not pass its test cases when significant changes are made that break the following checks. In my case, you can read the output below to see that the
non-test-function-definition
and double-nested-if
patterns were not being matched. There is no need to enforce these arbitrary checks on our own code.
To Reproduce
Steps to reproduce the behavior:
git checkout e264ae97c4dfa3f408695158bc7fb867ea2cd769
poetry lock; poetry install
poetry run task test-not-randomly
(this ensures that no other failing test cases clutter the output)
Observe the failure in tests/test_main.py:118 test_cli_analyze_correct_arguments_analyze_chasten_codebase
Expected behavior
This test case should pass.
Proposed Solution
We simply need to remove this hard-coded config and rely on something more stable, such as a small sample Python program located in
tests
that has code that will never change, rather than analyzing our own codebase. It is very meta that we are learning how our tool works by using our tool to analyze our tool, isn't it?
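To illustrate the proposed fix, a frozen fixture file gives the self-analysis test fully deterministic counts, so the check bounds never drift. The fixture path and contents below are hypothetical, not an existing file in the repository:

```python
import ast
import textwrap

# Hypothetical contents of a frozen fixture, e.g. tests/fixtures/sample.py.
# Because this file never changes, the expected match counts never drift.
FIXTURE = textwrap.dedent("""
    class Calculator:
        def add(self, a, b):
            return a + b

        def sub(self, a, b):
            return a - b

    def test_add():
        assert Calculator().add(1, 2) == 3
""")

tree = ast.parse(FIXTURE)
function_defs = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
class_defs = [n for n in ast.walk(tree) if isinstance(n, ast.ClassDef)]

# With a frozen fixture, the test can assert exact counts (min == max)
# instead of a fragile min/max window over a moving codebase.
print(len(function_defs), len(class_defs))  # -> 3 1
```

With exact counts, the test still exercises chasten end to end, but a refactor of chasten's own source can never break it.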