spyder-ide / spyder-unittest

A plugin for Spyder to run tests and view the results
MIT License

PR: Use the Python interpreter set in Preferences to run tests #174

Closed stevetracvc closed 2 years ago

stevetracvc commented 2 years ago

This PR fixes #65, allowing the user to change which Python executable is used to run the tests.

This is useful, e.g., if you install Spyder in its own conda environment and only install spyder-kernels in the env you want to run the tests in.

Note: you'll need to install qtpy and pyqt in the target env if it doesn't have Spyder installed.

stevetracvc commented 2 years ago

hmm that's odd, I don't remember installing zmq specifically

stevetracvc commented 2 years ago

@jitseniesen OK tests pass on my machine now, sorry about that. Oddly, I have to have one import to make the tests work and one import to make the plugin work (check the comments in that last commit, 7ae8b3b)

codecov-commenter commented 2 years ago

Codecov Report

Merging #174 (c2bbe2b) into master (b788ad9) will decrease coverage by 0.62%. The diff coverage is 77.77%.

| Impacted Files | Coverage Δ | |
| --- | --- | --- |
| spyder_unittest/backend/pytestworker.py | 92.30% <50.00%> (-2.57%) | :arrow_down: |
| spyder_unittest/backend/pytestrunner.py | 94.44% <100.00%> (-5.56%) | :arrow_down: |
| spyder_unittest/backend/runnerbase.py | 94.04% <100.00%> (-0.14%) | :arrow_down: |
| spyder_unittest/widgets/unittestgui.py | 83.87% <100.00%> (+0.07%) | :arrow_up: |

jitseniesen commented 2 years ago

Thanks for figuring out how to make the imports work. It took me a while to convince myself that it is now fine.

Regarding translations and the _(...) function, I think it is best to get rid of it. I was never certain whether to include translations in the pytestworker.py file or how that would work; now it will almost certainly not work as intended. So please remove all references to the _(...) function.

stevetracvc commented 2 years ago

Yeah, it was weird, I don't know. Clearly that isn't the best way, but it seemed to work.

I removed the translation from pytestworker and also added a couple tests for the new input in the dialog box. Let me know if you think I should change anything else!

jitseniesen commented 2 years ago

I added a commit so that the tests now use the Python interpreter that is specified in the Spyder preferences if the user does not specify one in the test configuration. If you have time and feel up to it, can you please test whether that works for you?

I wonder whether it is now necessary to have a separate option to set the interpreter for tests. In practice, how often would people want to have one interpreter to run the code and another interpreter to run the tests? What do you think?

Apart from deciding on that, I want to have a good look at all the code in this pull request and then it should be good to go in.

stevetracvc commented 2 years ago

@jitseniesen you figured out the thing I couldn't :cry: Your solution is much more elegant :) I agree, we probably don't need to manually select the python executable in the config modal.

However, honestly, I wish I could toggle a setting so that when I create a new console, it asks which Python executable I want to use (or maybe it could just be a menu option under the special console menu). Maybe I'm an odd duck in how I use Spyder, but sometimes I want to have two consoles open, each using a different Python (it's been a little while since I did that, though). I guess the better way would be to have each as a separate project, and then open two copies of Spyder so that each project would have its own Python too. But, again, it would be nice if that were stored as a config for the project. Currently, I have to go to settings and change it for each project.

Another option is that I could hijack part of your addition and have it auto-fill the selection in the unittest config. But when should it change? Probably only when first opening Spyder, right? Load the default python executable when starting Spyder if the project doesn't already have one set (which would then actually set the project's pyexec config option). Oddly, your addition (decorated with on_conf_change) is called twice when starting up Spyder. I wouldn't have expected it to get called at all.

Your code now sets the unittest executable when the user changes the Spyder config (if a custom one is not set), but there could still be a console open that uses the old executable. When should the unittest plugin switch to the new executable? As it stands right now, if the user did NOT select a custom exec, then it'll switch; but if the user DID select a custom exec, then the main Spyder config change won't affect the unittest exec. That is probably the behavior someone would expect, right? Keep the manual override if set.
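
For illustration only, here is a rough sketch of that "keep the manual override if set" behaviour, assuming a widget that is mixed into Spyder's configuration observer so the on_conf_change decorator actually fires; none of the names below are the PR's actual code.

    # Illustrative sketch, not the PR's code: follow Spyder's main
    # interpreter setting only while no test-specific interpreter is set.
    from spyder.api.config.decorators import on_conf_change


    class InterpreterTrackerSketch:
        """Assumes it is mixed into a Spyder config-observing widget."""

        def __init__(self):
            self.custom_executable = None   # set when the user picks one in the config dialog
            self.test_executable = None     # interpreter the test runner will use

        @on_conf_change(option='executable', section='main_interpreter')
        def _main_interpreter_changed(self, value):
            # Keep the manual override if set; otherwise track Spyder's setting.
            if self.custom_executable is None:
                self.test_executable = value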

stevetracvc commented 2 years ago

@ccordoba12 @jitseniesen So I removed the plugin-specific python executable config and cleaned things up, mostly putting it back the way it started :laughing: Thanks Jitse for figuring out how to get the info from spyder itself, I think most of this PR is now yours :smile: .

However, this makes me think that the custom Python interpreter should be a project-level setting rather than an application-level setting. Here's my specific use case; let me know what you think:

While working on this plugin, I needed to use my spyder-env conda environment for things like running the tests. But that is NOT the environment I use for most of my projects. Thus, with a unittest-specific Python interpreter, it was pretty easy for me to run the tests for spyder-unittest using the unittest plugin. Meanwhile, in a separately running instance of Spyder, I'm working on one of my other projects and can easily run its unit tests too (it uses the custom interpreter my Spyder config is set to), since its unittest config was pointing at the correct Python.

Now, using just the application-level custom (or default) python, if I want to run the spyder-unittest tests in the unittest plugin, I have to go change the interpreter for the whole program, then set it back when I'm done.

This would have to be a change at the Spyder level, but my earlier work on this seems to indicate it should work. Carlos, it looks like you discussed this a little back in September; has anyone started on it? I don't see anything in v5.3.0: https://github.com/spyder-ide/spyder/issues/16299#issuecomment-911713545

jitseniesen commented 2 years ago

PR spyder-ide/spyder#17788 got merged and will be part of Spyder 5.3.1, so we can use it here if we depend on that version.

As I see it, project-level settings were never properly implemented in Spyder even though we have talked about it for a long time (see spyder-ide/spyder#9804). It is a pity, because it makes projects a lot less useful, and associating interpreters with projects in particular is an often-requested feature going way back to 2015 (spyder-ide/spyder#1014). However, the basic functionality in Spyder is there, and this plugin uses it in a very hacky way; I am actually surprised it still works.

maurerle commented 2 years ago

Hi @stevetracvc, you can look at the code I added to my fork here: https://github.com/maurerle/spyder-unittest/commit/ab3429acc98d540e357a9ac627f3b3fe536bb140#diff-3896c4e6d6471bfd70003914882842f06d3318322aeade9cfb2dc5175259714cR343

I think you can remove the on_conf_change by directly accessing the current Python interpreter used for code execution: `executable = self.get_conf('executable', section='main_interpreter')`. This makes the code a lot cleaner, as you don't have to get the value from `custom_interpreter` when needed.
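
For context, a minimal sketch of how a widget with access to Spyder's configuration could resolve the interpreter this way; the helper name and the sys.executable fallback are illustrative, not code from this PR.

    # Minimal sketch, assuming `widget` provides Spyder's `get_conf`
    # (e.g. via the SpyderConfigurationAccessor mixin in Spyder >= 5.3.1).
    import sys


    def resolve_test_interpreter(widget):
        """Return the interpreter set in Preferences > Python interpreter,
        falling back to the interpreter running Spyder itself."""
        return widget.get_conf(
            'executable',                 # option holding the interpreter path
            section='main_interpreter',   # section used by Spyder's preferences
            default=sys.executable)       # fallback: Spyder's own interpreter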

You should also update spyder in REQUIREMENTS in setup.py to >=5.3.1 - then this can be released after the release of Spyder 5.3.1?

ccordoba12 commented 2 years ago

I agree with @maurerle's suggestion.

stevetracvc commented 2 years ago

OK neat, thanks @maurerle, I'll look at this tomorrow. I need to update to 5.3.1.

If what you're saying is really that simple, then maybe some other plugins can be fixed too (like the pylint one)

ccordoba12 commented 2 years ago

I need to update to 5.3.1.

5.3.1 is not released yet, but I'll do it during the weekend.

If what you're saying is really that simple, then maybe some other plugins can be fixed too (like the pylint one)

Yep, that's what @maurerle actually did for the Profiler plugin in Spyder.

stevetracvc commented 2 years ago

@maurerle thanks for the help. I bumped the spyder min version requirements and added your get_conf. Looks like it works!

Also, I can confirm that if you want to use a conda environment that doesn't have Spyder installed, you will need both qtpy and pyqt (which installs several other things too) in order to get the unittest plugin to work.
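
As an aside, a quick way to sanity-check a target env is to probe it with its own interpreter; the path below is a placeholder, not a real env on my machine.

    # Example probe (placeholder path): verify the target env can import the
    # pieces the plugin needs before pointing the test runner at it.
    import subprocess

    target_python = "/path/to/envs/myenv/bin/python"   # placeholder interpreter path
    probe = "import qtpy, spyder_kernels; print('Qt binding:', qtpy.API_NAME)"
    result = subprocess.run([target_python, "-c", probe],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)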

stevetracvc commented 2 years ago

@ccordoba12 what's up with that pytest>=5.0 requirements issue?

ccordoba12 commented 2 years ago

This looks pretty good @stevetracvc! To fix the error on Mac and Windows, please add to this fixture

https://github.com/spyder-ide/spyder-unittest/blob/8f22122e0ae155191348df244d45f1bd82325025/spyder_unittest/widgets/tests/test_unittestgui.py#L26-L31

the following

unittest_widget.get_conf(
    'executable',
    section='main_interpreter',
    default=sys.executable)

before the setup line.

In addition, please try to skip the tests that are segfaulting on Linux (hopefully there aren't too many) to see if our test suite passes there.

ccordoba12 commented 2 years ago

what's up with that pytest>=5.0 requirements issue?

Where is that?

stevetracvc commented 2 years ago

what's up with that pytest>=5.0 requirements issue?

Where is that?

I don't know, I don't see it anywhere in the code, but all the Python 3.7 tests just failed with that as the error.

ccordoba12 commented 2 years ago

Ok, I understand what's happening now. The problem is that pytest 3.10.1 ends up being installed on Python 3.7, which is not supported by pytest-mock.

I'll try to fix that in another PR. For now, please focus on the fixes I mentioned above.

stevetracvc commented 2 years ago

~OK I pushed a new commit with the test fixes. I can't replicate that test crashing on my machine, using Python 3.9~ Wait, I got confused about which PR I was working on.

stevetracvc commented 2 years ago

OK @ccordoba12 ready for a retest. Is there a better way that I can run all those different OS tests locally?

ccordoba12 commented 2 years ago

Is there a better way that I can run all those different OS tests locally?

Meaning?

stevetracvc commented 2 years ago

Is there a better way that I can run all those different OS tests locally?

Meaning?

So I could run the Windows and OSX tests from my Linux laptop, rather than having to push, wait for GitHub, etc.? I'm guessing I'd have to set up Docker containers or something. (Sorry, I'm not a developer, I'm a chemical engineer :laughing:)

~Also, I can't figure out why all these pytestrunner tests are failing; they don't fail for me. I think they popped up when I updated the minimum Spyder version?~ I think it's a pytest-qt issue.

stevetracvc commented 2 years ago

@ccordoba12 all of the failing tests use qtbot, so I just pushed an update to the requirements tests.txt file to require a newer version; let's see if that fixes the test crashes.

stevetracvc commented 2 years ago

Well that didn't work. Any suggestions other than just skipping all tests using qtbot?

dalthviz commented 2 years ago

Just in case, maybe changing the Linux system packages setup step in the workflow could help with the segfaults on Linux. So instead of:

https://github.com/spyder-ide/spyder-unittest/blob/b788ad9bfe37cfcdb6600c973d96f228df8cc208/.github/workflows/linux-tests.yml#L27-L29

Running something like this:

https://github.com/spyder-ide/spyder-line-profiler/blob/3b2bfb83497e0f49abb912eeea5da3d061502aca/.github/workflows/linux-tests.yml#L27-L29

stevetracvc commented 2 years ago

This isn't tenable; here are all the tests using qtbot:

backend/tests/test_zmqstream.py:def test_zmqstream(qtbot):
backend/tests/test_pytestrunner.py:def test_pytestrunner_process_output_with_collected(qtbot):
backend/tests/test_pytestrunner.py:def test_pytestrunner_process_output_with_collecterror(qtbot):
backend/tests/test_pytestrunner.py:def test_pytestrunner_process_output_with_starttest(qtbot):
backend/tests/test_pytestrunner.py:def test_pytestrunner_finished(qtbot, output, results):
backend/tests/test_pytestrunner.py:def test_pytestrunner_process_output_with_logreport_passed(qtbot):
tests/test_unittestplugin.py:def plugin(qtbot):
tests/test_unittestplugin.py:def test_plugin_config(plugin, tmpdir, qtbot):
tests/test_unittestplugin.py:def test_plugin_goto_in_editor(plugin, qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_uses_frameworks(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_indicates_unvailable_frameworks(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_disables_unavailable_frameworks(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_sets_initial_config(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_click_ham(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_ok_initially_disabled(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_ok_setting_framework_initially_enables_ok(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_clicking_pytest_enables_ok(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_wdir_lineedit(qtbot):
widgets/tests/test_configdialog.py:def test_configdialog_wdir_button(qtbot, monkeypatch):
widgets/tests/test_unittestgui.py:def widget(qtbot):
widgets/tests/test_unittestgui.py:def test_unittestwidget_forwards_sig_edit_goto(qtbot, widget):
widgets/tests/test_unittestgui.py:def test_unittestwidget_set_config_emits_newconfig(qtbot, widget):
widgets/tests/test_unittestgui.py:def test_unittestwidget_set_config_does_not_emit_when_invalid(qtbot, widget):
widgets/tests/test_unittestgui.py:def test_run_tests_and_display_results(qtbot, widget, tmpdir, monkeypatch, framework):
widgets/tests/test_unittestgui.py:def test_stop_running_tests_before_testresult_is_received(qtbot, widget, tmpdir):
widgets/tests/test_datatree.py:def view_and_model(qtbot):
widgets/tests/test_datatree.py:def test_go_to_test_definition_with_invalid_target(view_and_model, qtbot):
widgets/tests/test_datatree.py:def test_go_to_test_definition_with_valid_target(view_and_model, qtbot):
widgets/tests/test_datatree.py:def test_go_to_test_definition_with_lineno_none(view_and_model, qtbot):
widgets/tests/test_datatree.py:def test_testdatamodel_shows_abbreviated_name_in_table(qtbot):
widgets/tests/test_datatree.py:def test_testdatamodel_shows_full_name_in_tooltip(qtbot):
widgets/tests/test_datatree.py:def test_testdatamodel_add_tests(qtbot):
widgets/tests/test_datatree.py:def test_testdatamodel_replace_tests(qtbot):
widgets/tests/test_datatree.py:def test_testdatamodel_sort_by_status_ascending(qtbot):

stevetracvc commented 2 years ago

OK, just pushed that, but this is way outside my knowledge base. I do remember seeing something about xinerama in some SO thread.

stevetracvc commented 2 years ago

OK, looks like that fixed it; I'm going to remove the skips.

stevetracvc commented 2 years ago

OK I reverted all of those skip commits, so I think it's back to normal now. Thanks @dalthviz

I also bumped pytest to v5, since I think that's why the Python 3.7 tests were all failing.
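
For reference, the bump could look roughly like this in setup.py; the real REQUIREMENTS list has more entries and may be organised differently (and the pytest pin might belong in the test requirements file instead).

    # Rough illustration only, not the repo's actual setup.py contents.
    REQUIREMENTS = [
        'pytest>=5.0',     # avoids the old pytest 3.10.1 resolution on Python 3.7
        'spyder>=5.3.1',   # includes spyder-ide/spyder#17788, which this PR relies on
    ]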

stevetracvc commented 2 years ago

@ccordoba12 looks like you were right about the pytest version for python3.7. All tests are now passing :smile: Do you want me to remove it from this PR and do it as a separate PR? Or just leave it in here?