Closed: gadomski closed this 1 year ago
Hi @gadomski !
Thanks for the PR, could you, please, take a look at the failing tests?
So I'm getting similar errors locally when I test against master:
$ git branch --show-current
master
$ tox -e py310
--- 8< ---
============================================================================================================================================= short test summary info ==============================================================================================================================================
FAILED tests/test_blocking_network.py::test_blocked_network_recording_cli_arg - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_blocking_network.py::test_blocked_network_recording_vcr_config - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_blocking_network.py::test_blocked_network_recording_vcr_mark - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_blocking_network.py::test_other_socket - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_blocking_network.py::test_block_network - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_blocking_network.py::test_block_network_via_cmd - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_blocking_network.py::test_block_network_via_cmd_with_recording - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_recording.py::test_cassette_recording - AssertionError: assert {'errors': 3,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_recording.py::test_record_mode_in_mark - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_recording.py::test_override_default_cassette - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_recording.py::test_record_mode_in_config - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_recording.py::test_cassette_recording_rewrite - AssertionError: assert {'errors': 2,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_recording.py::test_custom_cassette_name - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_recording.py::test_custom_cassette_name_rewrite - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_recording.py::test_forbidden_characters - AssertionError: assert {'errors': 4,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_recording.py::test_json_serializer - AssertionError: assert {'errors': 2,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_recording.py::test_multiple_marks[\nimport pytest\nimport requests\n\n@pytest.mark.vcr("{}")\n@pytest.mark.vcr("{}")\ndef test_with_network(httpbin):\n assert requests.get(httpbin.url + "/get").status_code == 200\n] - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_recording.py::test_multiple_marks[\nimport pytest\nimport requests\n\npytestmark = pytest.mark.vcr("{}")\n\n@pytest.mark.vcr("{}")\ndef test_with_network(httpbin):\n assert requests.get(httpbin.url + "/get").status_code == 200\n] - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_recording.py::test_kwargs_overriding - AssertionError: assert {'errors': 2,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_replaying.py::test_no_cassette - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_replaying.py::test_merged_kwargs - AssertionError: assert {'errors': 2,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_replaying.py::test_single_kwargs - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_replaying.py::test_global_config[function] - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_replaying.py::test_global_config[module] - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_replaying.py::test_global_config[session] - AssertionError: assert {'errors': 1,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_replaying.py::test_assertions_rewrite - Failed: nomatch: "*assert 'POST' == 'GET'"
========================================================================================================================================== 26 failed, 39 passed in 20.14s ==========================================================================================================================================
py310: exit 1 (22.83 seconds) /Users/gadomski/Code/pytest-recording> coverage run --source=pytest_recording -m pytest tests pid=29886
.pkg: _exit> python /Users/gadomski/Code/pytest-recording/.venv/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta
py310: FAIL code 1 (32.36=setup[3.65]+cmd[5.88,22.83] seconds)
evaluation failed :( (32.43 seconds)
This is true even with vcrpy<5 installed. I'd be curious whether current master passes CI; the last run was three weeks ago, so maybe an upstream dependency changed. I just pushed a revert commit so we can see if current master passes.
@gadomski I guess that some jobs were skipped because of the *.py filter we have.
PS: Here's a quick hack to obtain the VCR.py major version at runtime:

```python
import vcr

vcr_major_version = int(vcr.__version__.split(".")[0])
```
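For comparison, a hedged alternative sketch that avoids importing `vcr` at module load by reading the installed distribution's version via the standard library's `importlib.metadata` (the helper name `vcrpy_major_version` is mine, not from this thread):

```python
from importlib import metadata


def vcrpy_major_version(default: int = 0) -> int:
    """Return the installed vcrpy major version, or `default` when vcrpy is absent."""
    try:
        # metadata.version() returns e.g. "5.1.0"; keep only the major component.
        return int(metadata.version("vcrpy").split(".")[0])
    except metadata.PackageNotFoundError:
        return default
```

This works on Python >=3.8 and degrades gracefully when vcrpy is not installed at all.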
Per https://github.com/kiwicom/pytest-recording/issues/111#issuecomment-1609805856, would there be an appetite for just bumping the min version of vcrpy? This would raise the minimum Python version of this repo, so isn't trivial, but would make the code simpler.
Side note: 100% agree that `>=` is the way to go for a library (not `~=`).
> Per #111 (comment), would there be an appetite for just bumping the min version of vcrpy? This would raise the minimum Python version of this repo, so isn't trivial, but would make the code simpler.
@gadomski I think it's important to note here that the support being dropped is for Python <=3.7, which reached end of life on 2023-06-27 (i.e. yesterday). That means no users in healthy environments will be affected, and dropping support for Python <=3.7, if desired, is probably mostly deleting things and not super hard.
For the other approach (supporting both old and new VCR.py), have you tried:

```python
try:
    from vcr.cassette import CassetteNotFoundError
except ImportError:  # i.e. with VCR.py <5.0.0
    CassetteNotFoundError = ValueError
```

?
> have you tried
Just did, and looks like it also failed CI.
@gadomski
The first failing test was introduced by VCRpy==4.4.0. #112 made the current builds work, and then we can gradually add support for newer VCRpy versions.
I hope this helps
I fixed one test that failed on 4.4.0, the changes are merged now.
I think the self-made exception will not work because it's two different exception classes that just happen to have the same name.
(Quick note that I made a logic mistake here^^: the new exception class would be raised against VCR.py <5.0.0, which only matches against (subclasses of) ValueError, so besides making linters happy, the approach of making a new class that inherits from ValueError should also work in theory.)
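To make that concrete, here is a minimal sketch of the subclass approach described above (a sketch under the thread's assumptions, not the code that was eventually merged):

```python
try:
    # VCR.py >=5.0.0 ships a dedicated exception class.
    from vcr.cassette import CassetteNotFoundError
except ImportError:
    # Fallback for VCR.py <5.0.0 (or when vcrpy is not installed at all):
    # inheriting from ValueError keeps old `except ValueError` code paths
    # working, while giving linters a real class instead of a bare alias.
    class CassetteNotFoundError(ValueError):
        pass
```

Either branch yields a class named `CassetteNotFoundError` that can be raised and caught uniformly by calling code.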
I've figured out what's causing the test failures. E.g. in test_replaying.py::test_global_config, before_record_request is called twice: once during @pytest.mark.vcr and once during the actual requests.get call. Previously, the ValueError from the first call was getting caught in _load: https://github.com/kevin1024/vcrpy/blob/d2281ab646e368e6b0b5d50fd6757adfc7d0a050/vcr/cassette.py#L352-L356. But with the change to CassetteNotFoundError, the first error is no longer silently swallowed.
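A minimal, self-contained illustration of that behavior change (the stand-in class and the loader names are mine for illustration, not vcrpy's actual code):

```python
class CassetteNotFoundError(Exception):
    """Stand-in for the new VCR.py exception; deliberately NOT a ValueError."""


def load_or_ignore(loader):
    # Mimics a `try/except ValueError: pass` guard like the linked _load code.
    try:
        loader()
    except ValueError:
        pass  # old behavior: a missing cassette is silently ignored


def old_loader():
    raise ValueError("cassette not found")


def new_loader():
    raise CassetteNotFoundError("cassette not found")


load_or_ignore(old_loader)  # swallowed: nothing escapes the guard
try:
    load_or_ignore(new_loader)
    escaped = False
except CassetteNotFoundError:
    escaped = True  # the new type sails straight past the ValueError guard
```

This is exactly why tests that previously passed began erroring: the guard that used to absorb the "no cassette" signal no longer matches the new exception type.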
I'll try to find a change to the tests that keeps the expected assertions while, you know, not failing.
Converting this to draft, if someone else wants to pick this up please feel free. The juice isn't worth the squeeze for me at the moment.
@hartwork It appears some version of this is now in master. Would it be possible to tag and release it?
@estahn that's pull request #118. I have no permissions here, but @Stranger6667 seems to have plans in that direction, see https://github.com/kiwicom/pytest-recording/issues/111#issuecomment-1652024553 .
I will close this as I've merged a PR that fixes the remaining tests. I hope to release it in a few minutes, after a few more cleanups.
And, last but not least, thank you all for your contribution!
@Stranger6667 thank you! :+1:
The new version is released! :tada: Thank you for your patience
Description

VCR.py v5 requires a CassetteNotFoundError to be raised instead of a ValueError in the custom persister. We just create our own version of the error if the vcrpy version is below v5.

Checklist