dkogan opened 1 year ago
This is with the latest release: I'm at the v3.3.0 tag
I don't have those failures. What version of Python are you using? Is there a reason you aren't using pytest to run the tests? It will give more details on the failure if you do. There's a tox.ini in the repo. The README says:
To run the tests::
$ pip install -r requirements.pip
$ tox
Hi. This is Debian: it manages the dependencies and the environment, not tox. I believe I'm running the correct tests, yes?
This is the Python we're about to ship in the new Debian release:
dima@shorty:~$ python3 --version
Python 3.11.2
It looks like the test is trying to match up error reports, but these are meant for humans, so the exact text is allowed to change. The test is failing because this:
Traceback (most recent call last):
File "test.cog", line 9, in <module>
func()
File "test.cog", line 4, in func
mycode.boom()
File "/tmp/testcog_tempdir_9291911423047634/mycode.py", line 2, in boom
[][0]
IndexError: list index out of range
Does not look like this:
Traceback (most recent call last):
File "test.cog", line 9, in <module>
File "test.cog", line 4, in func
//[[[end]]]
File "/tmp/testcog_tempdir_9291911423047634/mycode.py", line 2, in boom
[][0]
IndexError: list index out of range
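A guess at the mechanism (not verified against cog's internals): Python fills in traceback source lines via linecache, keyed on the filename stored in the code object, so whatever file happens to exist under that name at format time is what gets echoed. A self-contained sketch of that behavior, with made-up file names:

```python
# Hypothetical demonstration, not cog code: the source lines shown in a
# traceback come from whatever file currently exists under the compiled
# filename -- which could explain why one run echoes "//[[[end]]]" where
# another echoes the real source, or nothing at all.
import pathlib
import tempfile
import traceback

tmp = pathlib.Path(tempfile.mkdtemp())

# A file whose on-disk contents do NOT match the compiled source:
fake = tmp / "test.cog"
fake.write_text("// some cog markup\n//[[[end]]]\n")
code = compile("x = 1\nraise ValueError('boom')\n", str(fake), "exec")
try:
    exec(code)
except ValueError:
    tb_with_file = traceback.format_exc()

# The same failure compiled under a filename that doesn't exist on disk:
code = compile("raise ValueError('boom')\n", str(tmp / "gone.cog"), "exec")
try:
    exec(code)
except ValueError:
    tb_without_file = traceback.format_exc()

# tb_with_file echoes the on-disk line "//[[[end]]]" as the "source" of
# line 2; tb_without_file carries no source-line echo for that frame.
```

This would make the echoed source lines depend on the working directory and on which temp files exist when the traceback is formatted, not on the Python code that actually ran.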
Thanks
I don't understand why the traceback would be different than expected. This test passes in many environments, including Ubuntu, which is based on Debian. Is Debian somehow interfering with error reports?
Since cog is a tool for developers, and since it changes how Python is executed, I wanted to make sure the error reports were useful. That's why I'm checking the entire traceback.
Hi Ned.
Ned Batchelder writes:
I don't understand why the traceback would be different than expected. This test passes in many environments, including Ubuntu, which is based on Debian. Is Debian somehow interfering with error reports?
The distro doesn't matter. This is almost certainly something that changes in different versions of Python, and I'm guessing you've only looked at older Python builds. You're looking at a message intended for humans, and expect exact text. This was never guaranteed to be stable, and it looks like it has changed. Can you convert your test to regex-search for the strings you actually care about? Or better yet, the test should look only at output intended for machines.
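A sketch of the looser check being suggested (the function name and shape here are illustrative, not cog's actual test code): assert on the stable parts of the traceback, the file/line locations and the final error line, and ignore the source-line echoes that vary across Python versions and test runners.

```python
# Illustrative only: match the parts of a traceback that are contractual
# (where the error happened, and what it was) rather than the full text.
import re

def traceback_matches(stderr_text, filename, lineno, error_line):
    """True if the traceback mentions filename at lineno and ends with
    the expected error line."""
    location = r'File "{}", line {}'.format(re.escape(filename), lineno)
    return (re.search(location, stderr_text) is not None
            and stderr_text.rstrip().endswith(error_line))

sample = (
    "Traceback (most recent call last):\n"
    '  File "test.cog", line 9, in <module>\n'
    '  File "test.cog", line 4, in func\n'
    "IndexError: list index out of range\n"
)
assert traceback_matches(sample, "test.cog", 4,
                         "IndexError: list index out of range")
```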
Also, the warnings fixed by the patch in the previous email were legitimate. Would you consider merging the patch?
Thanks
Thanks, I've changed all the file openings to use with
(including a few more than you found).
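A generic illustration of that change (the path here is made up, this is not cog's actual code): a with-statement closes the file deterministically, which silences ResourceWarning complaints about unclosed files.

```python
# Sketch of the "use with" fix: the context manager closes the file as
# soon as the block exits, even if an exception is raised inside it.
import pathlib
import tempfile

path = pathlib.Path(tempfile.mkdtemp()) / "data.txt"
path.write_text("hello\n")

# Before: f = open(path); text = f.read()   # f leaks until GC collects it
# After:
with open(path) as f:
    text = f.read()

assert f.closed   # closed deterministically, no ResourceWarning
```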
I would still like to understand why your traceback looks different. I understand your point about tracebacks meant for humans, but they are also supposed to accurately report the locations causing errors, and yours does not. Is there some way I can reproduce what you are seeing? Is there a Docker image?
Hi. Thanks for making the changes.
To reproduce, install Debian/bookworm (the latest release), and build there. I don't recall how exactly I was trying to run the tests, but currently I simply disable the whole test suite:
You can replicate the package build:
You can remove the "override_dh_auto_test" business from debian/rules, and rebuild the package to see what it says. Let me know if you need help. I'm pretty sure I was doing this without tox, but I don't recall the details at the moment. If you want to try to nail this down, I'm happy to work on it again.
I don't think I will be able to install bookworm, and I don't know Debian well enough to update debian/rules. The more details you can provide, the more likely we can get to the bottom of it.
I think Debian (Bookworm or otherwise) is a red herring.
I'm on macOS, running Python 3.11.6 and cog commit 201ad15d95fd.
pytest
works fine:
% pytest
........................................................................................................................................... [100%]
139 passed in 0.16s
unittest
doesn't:
% python3 -m unittest discover -q
Checking changed.cog (changed)
Check failed
Checking unchanged.cog
Check failed
Checking changed.cog (changed)
Check failed
Checking unchanged.cog
Checking changed.cog (changed)
Check failed
Checking hello.txt
Checking bad.txt
bad.txt(9): Output has been edited! Delete old checksum to unprotect.
Checking good.txt
Cogging f0.cog
Cogging f1.cog
Cogging f2.cog
Cogging f3.cog
Cogging f4.cog
Cogging f5.cog
Cogging f6.cog
Cogging f7.cog
Cogging f8.cog
Cogging f9.cog
Cogging f10.cog
Cogging f11.cog
Cogging f12.cog
Cogging f13.cog
Cogging f14.cog
Cogging f15.cog
Cogging f16.cog
Cogging f17.cog
Cogging f18.cog
Cogging f19.cog
/private/var/folders/fd/nvmqqh7n28560k59v7bm2lwh0000gn/T/testcog_tempdir_46002766080147506/code/test.out
Cogging test.cog
Cogging test.cog
Cogging test.cog
======================================================================
FAIL: test_error_report (cogapp.test_cogapp.TestMain.test_error_report)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/kuba/projects/cog/cogapp/test_cogapp.py", line 935, in test_error_report
self.check_error_report()
File "/Users/kuba/projects/cog/cogapp/test_cogapp.py", line 956, in check_error_report
assert expected == sys.stderr.getvalue()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
======================================================================
FAIL: test_error_report_with_prologue (cogapp.test_cogapp.TestMain.test_error_report_with_prologue)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/kuba/projects/cog/cogapp/test_cogapp.py", line 938, in test_error_report_with_prologue
self.check_error_report("-p", "#1\n#2")
File "/Users/kuba/projects/cog/cogapp/test_cogapp.py", line 956, in check_error_report
assert expected == sys.stderr.getvalue()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
----------------------------------------------------------------------
Ran 139 tests in 0.085s
FAILED (failures=2)
I'm not saying whether it should or shouldn't work but it seems OS-independent. When I print the tracebacks I see what's already posted above (https://github.com/nedbat/cog/issues/24#issuecomment-1465255541).
Digging into the difference between pytest and unittest, I see these values when run under unittest:
== expected =======
Traceback (most recent call last):
File "test.cog", line 9, in <module>
func()
File "test.cog", line 4, in func
mycode.boom()
File "/private/var/folders/6j/khn0mcrj35d1k3yylpl8zl080000gn/T/testcog_tempdir_42158782322914723/mycode.py", line 2, in boom
[][0]
IndexError: list index out of range
== stderr =========
Traceback (most recent call last):
File "test.cog", line 9, in <module>
File "test.cog", line 4, in func
//[[[end]]]
File "/private/var/folders/6j/khn0mcrj35d1k3yylpl8zl080000gn/T/testcog_tempdir_42158782322914723/mycode.py", line 2, in boom
[][0]
IndexError: list index out of range
I don't know why the stack trace is different between the two test runners, but it doesn't matter. The tests pass when run with pytest. Don't use unittest here.
Hi. I'm making a cogapp package for Debian. I'm on a recent Debian system, and there are a bunch of failures of the test suite. Does everything pass for you?
I do this:
There are many loud warnings about unclosed files. This patch makes the complaints go away:
But this still leaves two test failures:
Thanks