pganssle closed this issue 5 years ago.
The pytest-runner project does subclass the test command, and there are projects actively depending on it. I do agree we want to deprecate each, and probably both together.
A bit off-topic, but do we have an overarching issue for deprecating/replacing the various incantations of `setup.py`? If not, it would probably be good to have one, as well as some documentation about migrating on https://packaging.python.org/.
(I can create these issues if they don't already exist)
> do we have an overarching issue for deprecating/replacing the various incantations of `setup.py`?
Not to my knowledge.
I think there's at least some agreement in #931 that we want to remove the test command. I think we should start by raising deprecation warnings pointing people at `tox`.
Is there, really? The feedback there is basically that `tox` is a large dependency and is designed for a different use case than `setup.py test`.
This is a great way to kill sdists. (Why have sdists if they have the same contents as wheels?) How quickly the lesson of sourceforge.net has been forgotten. Or more recently, Oracle changing the Terms of Use of the Java downloads. IMO setuptools should be moving in the opposite direction, automatically including in sdists the test modules found using `tests_require` or identified using a pytest dry-run.
Anyway, unless setuptools also kills its plugin mechanism, someone will recreate the `test` command as soon as it is removed from setuptools. Maybe that is a good thing. (But if the goal is only to get that stuff out of the setuptools repo because it is hindering setuptools, why not split it off to a separate project now, rather than pretend that tox is a drop-in solution?)
> Is there, really? The feedback there is basically that tox is a large dependency and is designed for a different use case than `setup.py test`.
Let me see if I can clarify/illustrate. There are two main concerns of `setup.py test`: declaring the test dependencies and actually running the tests.
Tox happens to do both of these in a user-friendly, declarative way, which is what makes it a recommended replacement. The user experience of "run tox" is comparable to "run setup.py test". `tox` does bring more sophistication (and thus friction) to the process, which is why it's not an exact match.
Users are welcome to seek out other solutions. I wrote pip-run as one such attempt. It has a different behavior in that it always installs dependencies, so it doesn't have the second-run speed improvements seen with `setup.py test`.
> why not split it off to a separate project now
You're welcome to consider that. The biggest issue with `setup.py test` now is that it relies on `easy_install` and `setup_requires`, both of which are superseded by pip and PEP 517. Another issue is that `setup.py test` has no mechanism to specify the version of setuptools under which to run: dependencies need to be resolved before `setup.py` is invoked. Any tool that attempts to use setuptools plugins to run tests will forever be responsible for ensuring the requisite version of setuptools is installed in advance (it's too late by the time setuptools is imported).
> This is a great way to kill sdists. (Why have sdists if they have the same contents as wheels?) How quickly the lesson of sourceforge.net has been forgotten.
I'm not sure I follow. Can you elaborate on what the risks are?
> IMO setuptools should be moving in the opposite direction, automatically including in sdists the test modules found using `tests_require` or identified using a pytest dry-run.
I don't think you want to bundle any dependencies, even those for tests, for the same reasons you don't want to bundle runtime dependencies (you don't want to have to re-release your sdists every time a dependency updates). You want a lightweight, declarative means to declare dependencies. If bundled dependencies are valuable, a higher-level tool should build on the fundamental building blocks (independent packages) to produce and publish those bundles (similar to how Docker Hub hosts materialized environments).
> This is a great way to kill sdists. (Why have sdists if they have the same contents as wheels?)
To be clear, this would change nothing about the distinction between sdists and wheels. I personally still recommend that you include your test code in your sdist, but not in the wheel. I also recommend that your `tox.ini` and any other test configuration go into the sdist. `python setup.py test` has raised an error in dateutil for a few years now, but it has not affected the content or utility of the source distributions.
> To be clear, this would change nothing about the distinction between sdists and wheels.
The reason to include tests in sdists, or even create sdists, becomes less obvious if the existing declarative syntax to describe them causes a deprecation warning. This obviously means packages with existing working metadata, which describes their tests to sdist recipients, will need to remove that metadata to be compatible with the latest setuptools. And the latest setuptools won't run the tests any more...? So again, why would those package maintainers bother with including tests in the sdist?
Add to that there are members of PyPA who were already pushing projects to not include tests in sdists (because GitHub!), who will use this deprecation as additional justification for being more vocal and aggressive in their stance.
These are not direct outcomes of this issue. However, they are fairly predictable side-effects.
> I also recommend that your `tox.ini` and any other test configuration should also go into the sdist.
Great, but you are not the typical package maintainer. Unfortunately most sdists these days are not testable, usually missing test configuration or test data, until someone from a distro tries to get the sdist working, and then has to try to convince the package maintainer, because the Python Packaging Guide says almost nothing about tests. Rather than trying to use @pypa resources and products to address that growing problem, it seems the developers are washing their hands of it, and writing so many conflicting PEPs that replace each other before a significant segment of the user base has adopted the previous one, usually because the implementations were only partially completed before the authors started writing a different PEP. Compare with the Perl ecosystem, where running the tests is an automatic default part of the package installation experience, and buildbots can (and do!) run the tests of almost every package because they all use consistent metadata for finding and running the tests; packages unmodified for 10 years still build. Tox manages creating virtualenvs and running (OS-agnostic) commands in them. It does not declaratively describe the test modules nor the test runner (I don't consider the testenv `commands =` to be declarative). If setuptools isn't going to allow its metadata to describe `unittest` test modules, maybe `pytest` needs to become the recommended test runner, as its metadata in setup.cfg/pytest.ini/etc. can at least hold the globs needed to know which modules are test modules and which are not.
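For what it's worth, pytest can already carry that discovery metadata declaratively; a sketch of it in `setup.cfg` (the values shown are pytest's documented defaults, written out explicitly):

```ini
# pytest discovery configuration embedded in setup.cfg.
# These globs declare which modules, classes, and functions are tests.
[tool:pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
```

An sdist recipient (or a distro buildbot) could read this section to locate and run the tests without any project-specific knowledge.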
It would be good to drop sdists entirely for projects that ship pure Python wheels.
> The biggest issue with `setup.py test` now is that it relies on `easy_install` and `setup_requires`, both of which are superseded by pip and PEP 517.
If you are going to drastically reduce setuptools' scope from a complete package management solution to just a build tool, please say that explicitly rather than trying to disguise it as a minor change.
(That said, then you probably want to deprecate `setup.py install` as well, since it relies on the same "superseded" `easy_install` and `bdist_egg`. And the other `*_requires` arguments as well.)
> The biggest issue with `setup.py test` now is that it relies on `easy_install` and `setup_requires`, both of which are superseded by pip and PEP 517
Likewise, if the real reason to deprecate it is that for all practical purposes, it has long been broken (which is true), and you have no desire to fix it, stating that openly is going to remove a whole lot of misunderstanding.
A fix would be to delegate "superseded" stuff to whatever it was superseded by: to `pip` for installation and `virtualenv` (or whatever would work, this is just off the top of my head) to add a custom `site-packages` location.
@graingert
> It would be good to drop sdists entirely for projects that ship pure Python wheels
How would I get `setup.py` from a built wheel?
@jayvdb
> Compare with the Perl ecosystem, where running the tests is an automatic default part of the package installation experience
Perl is much less scalable than Python and as a result, CPAN modules tend to be very small in comparison. I can't name a single Perl module whose tests consume a few gigabytes and take a few hours to run, making it impractical to run them as a part of every installation.
> If you are going to drastically reduce setuptools' scope from a complete package management solution to just a build tool, please say that explicitly rather than trying to disguise it as a minor change.
I don't think we've ever tried to disguise this. For several years now we have been actively warning people not to invoke `setup.py` directly at every opportunity, and we've been quite explicit that the end goal is for setuptools to lose as many of its extraneous features as possible and become a standard build tool.
> (That said, then you probably want to deprecate `setup.py install` as well, since it relies on the same "superseded" `easy_install` and `bdist_egg`. And the other `*_requires` arguments as well.)
Yes, we fully intend to remove `setup.py install`. At the moment `bdist_wheel` relies on the `install` command, which is the only reason we have not started raising warnings.
> I don't think we've ever tried to disguise this. For several years now we have been actively warning people
Could you reference some overarching issue on this initiative in the top post then? That would work much better as justification than "some agreement" in some close-circle discussion.
> At the moment `bdist_wheel` relies on the `install` command, which is the only reason we have not started raising warnings.
Raising a warning only if it's invoked via the public interface should address that.
> I can't name a single Perl module whose tests [compare with opencv] consume a few gigabytes and take a few hours to run, making it impractical to run them as a part of every installation.
I am not suggesting that Python packages should run tests by default when installing packages (or that anyone should use Perl). Sure, PyPI holds entries for much larger codebases than are typically in CPAN modules. Large Perl codebases are often not in CPAN at all, and while there are noble attempts like PDL and BioPerl, most people working on large datasets have moved from Perl to R or Python, so there is more Python development occurring now, when data sizes considered almost impossible 10 years ago are commonplace. FWIW, R also runs tests during installation, CRAN has similar buildbots, and there are some large R test suites (I'm not sure if any are as big as opencv's). So it is likely more useful to contrast with R instead of Perl.
It would be nice if there was a new "standard" way to discover tests for any Python packages before setuptools deprecates the existing mechanism that packages are exposing that metadata. Deprecation in software engineering usually means there is a documented replacement which is (roughly) fit for purpose.
> It would be nice if there was a new "standard" way to discover tests for any Python packages before setuptools deprecates the existing mechanism that packages are exposing that metadata. Deprecation in software engineering usually means there is a documented replacement which is (roughly) fit for purpose.
The problem that I think most people in this thread are having is that there currently is no standard way to execute tests in Python. To the extent that `setup.py test` was ever that way, it has dramatically declined in popularity, partially due to the rise in popularity of `pytest` and `tox`, and partially due to the fact that `setup.py test` never really worked all that well to start with. Some projects use `make test` and some use a custom shell script.
To the extent that we could fix the problems with `setup.py test`, we would essentially just be re-inventing `tox`, because the requirements of a non-broken test target are that it has a declarative configuration format (`tox.ini`), a clear way to declare dependencies (`deps=`), and can build and install those dependencies in an isolated environment (this is what `tox` does).
In any case, pretending that `setup.py test` is supported or non-broken is not helping anyone. There are multiple good replacements for `setup.py test`; I think it's time that we start warning people that they should migrate to one of those.
> The problem that I think most people in this thread are having is that there currently is no standard way to execute tests in Python.
`setup.py test` is that standard way to execute tests. It has been like that for one and a half decades. It has been baked into uni courses and people's scripts. It is the basis of Django's `manage.py test`.
Failing that, `unittest` test discovery is also standard: https://docs.python.org/3/library/unittest.html#unittest-test-discovery
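To illustrate that standard mechanism, the sketch below creates a throwaway `tests` directory and runs the same discovery that `python -m unittest discover -s tests` performs (the module and test names are placeholders):

```python
import os
import tempfile
import textwrap
import unittest

# Sketch of stdlib test discovery: write one test module into a
# temporary "tests" directory, then let TestLoader.discover() find it
# exactly the way "python -m unittest discover -s tests" would.
with tempfile.TemporaryDirectory() as tmp:
    tests_dir = os.path.join(tmp, "tests")
    os.mkdir(tests_dir)
    with open(os.path.join(tests_dir, "test_sample.py"), "w") as f:
        f.write(textwrap.dedent("""
            import unittest

            class TestSample(unittest.TestCase):
                def test_truth(self):
                    self.assertTrue(True)
        """))

    # Default pattern "test*.py" matches test_sample.py.
    suite = unittest.TestLoader().discover(start_dir=tests_dir)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    print("tests run:", result.testsRun, "ok:", result.wasSuccessful())
```

Because the conventions (directory name, `test*.py` file pattern, `TestCase` subclasses) are standardized, any tool can locate a package's tests without project-specific configuration.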
`pytest` is the very large elephant in the room. A lot of Python direction problems would be solved by declaring that `pytest` is the new standard (in addition to `unittest`). Wrap it in a PEP. ;-)
> `setup.py test` never really worked all that well to start with.
It works very well for lots of packages.
It doesn't work well for all packages, of course, but that isn't a reason to break it for all packages before there is a good replacement. PEP 517 didn't cover test running, but it has laid the groundwork for it, and it won't be long before PEP 517/518 build systems incorporate solutions for that problem and/or a PEP is accepted which standardises the hooks for test running.
setuptools should keep supporting its existing user-base, keep them working correctly, with all of the features they rely on.
With PEP 517/518, new build systems can flourish. The goal should be for setuptools to deprecate itself, and for the deprecation notice to provide a list of all the high-quality replacement PEP 517/518 build systems which have implemented drop-in solutions for the many varied types of projects which setuptools has supported for the last decade or more. That would be in the spirit of PEP 517, and would avoid setuptools' existing near-monopoly implicitly preventing other build systems from being used and supported.
status quo is setuptools maintaining legacy things with a really thinly stretched set of maintainers, all while nobody actually steps up to make things better
from my pov a "working" solution is the main impediment for having someone step up and do a better/good solution
> from my pov a "working" solution is the main impediment for having someone step up and do a better/good solution
> A fix would be to delegate "superseded" stuff to whatever it was superseded by: to `pip` for installation and `virtualenv` (or whatever would work, this is just off the top of my head) to add a custom `site-packages` location.
Any volunteers?
> A fix would be to delegate "superseded" stuff to whatever it was superseded by: to `pip` for installation and `virtualenv` (or whatever would work, this is just off the top of my head) to add a custom `site-packages` location.
This is also something that we considered, but at the end of the day I think it would be much better to just deprecate these functions, because we want to train people to use the right tool for the job, and invoking `setup.py` is essentially never the right tool for the job; at best it will end up being a thin wrapper around our preferred tool. What's worse is that if you have an old version of setuptools, it will end up not being a pass-through to a supported tool, which can screw up your environment.
If we continue supporting these things instead of just warning and eventually erroring out with a message that tells you about the migration path, people will continue with the erroneous belief that executing `setup.py` is the standard and/or recommended way to do things. They will put it in their documentation and teach it in classes. Instead, it's better for us to start communicating clearly that these things are and have been unsupported for a long time and to start migrating to the new way of doing things.
The test command is now breaking in common cases due to the fact that setuptools is not honoring `python_requires`, and running under Python 2 with py.test's recipe (now removed from their site) installs the py3-only 5.0 version.
All of this is a complete surprise to me, and on Twitter I'm told it's been "deprecated" for as long as four years.
I use the latest version of setuptools; why isn't a warning being emitted for all of these deprecations?
> I use the latest version of setuptools, why isn't a warning being emitted for all of these deprecations?
@zzzeek That is the purpose of this ticket, which you may notice is open awaiting implementation.
There was a decent amount of resistance to this idea, as you can see from this ticket and #931, but I think that at the end of the day `setup.py test` is out of scope for what we are interested in maintaining as part of setuptools, so if someone is willing to go through and add the deprecation warnings and update the documentation, I'd be happy to merge the PR.
OK, the problem is, "python setup.py test" is now partially broken for the whole world that used this formerly prominently documented recipe, due to pytest becoming Python 3 only, thus making this a lot more urgent. Is there an open bug for pypa/setuptools regarding the fact that it doesn't honor `python_requires`? Would such an issue be considered something that has to be fixed?
> Is there an open bug for pypa/setuptools regarding that it doesn't honor `python_requires`?
@zzzeek , https://github.com/pypa/setuptools/issues/1633 and https://github.com/pypa/setuptools/issues/1787 look relevant.
Done in #1878, thanks @jdufresne!
The most often repeated complaint about Python is that Python packaging plainly sucks.
I was often defending that this is not true. There's a lot of confusion about what to do, because there are many guides (most contradicting each other), but if you spent some time researching, you could actually arrive at a solution that works.
This is what I had; I used a `setup.cfg` that looks like this: https://github.com/takeda/example_python_project/blob/master/setup.cfg
Because it provided a declarative way to specify a project, I could define:

- runtime dependencies (`install_requires`)
- test dependencies (`tests_require`)
- build dependencies (`setup_requires`)

If I added `setuptools_scm` to `setup_requires`, I could remove the `version` field and have it fetched automatically from git tags.
I could use `pip-compile` to generate a lock file (`requirements.txt`); then during development I could call `./setup.py test` to run tests, `./setup.py sdist` to make a tarball, and `./setup.py bdist_wheel` to make a wheel. I don't care that `easy_install` is used for `setup_requires` and `tests_require`; those are just developer packages, and it doesn't matter that they are not installed through `pip`, since they are not really part of the application.
Anyway, going back to my point. It was possible to have a decent dev environment setup. Yes, it wasn't perfect, but it was pretty good. Today I realized that the existence of the PyPA group is what is really harming Python packaging. First, the introduction of PEP 518. Why was a new file, `pyproject.toml`, introduced? It has arguments for it, but the issues with `ini` it lists are not convincing; it's obvious that someone just desperately wanted to use TOML, or what's worse, to ditch something that was already working. So instead of removing `setup.py`, another file was added. So that was botched, but until now that failure could be ignored. I even tried to use `pyproject.toml`, but then when I wanted to install the app in devel mode it told me I still need to have `setup.py`.
Now, this step of deprecating `./setup.py test`: I'm all for it, but at least provide an alternative for it. And no, `tox` is not an alternative; it is intended for a different purpose and not for running tests locally. I don't want it to create another virtualenv and re-download dependencies once again when I just want to re-run unit tests for my project.
But OK, I can just use `pytest` instead of `./setup.py test`, except now I don't have a good way to define test dependencies (like pytest and its plugins), so what should I do now? Should I create another `requirements.txt` for development? Add a `Makefile` with standardized commands? These are exactly the issues people are complaining about when they talk about Python packaging. Please don't deprecate functionality when there is no alternative; even broken functionality is better than no functionality.
You could provide a much better experience and solve all these issues by simply:

- removing `setup.py` and using `setup.cfg`
- providing a command (`setuptools` or `python -m setup`) that would use `setup.cfg` in the current directory

It would solve the "chicken and the egg" problem `setup.py` has, and also solve the problem of programmatically modifying dependencies.
Anyway, there's still a chance that `pyproject.toml` could shine if things standardize and one could define everything in that file. I still believe choosing TOML was a mistake; INI is built into almost every language.
Hi takeda. Thanks for the detailed user journey. It was only about 8 years ago that I was in the same place as you are now, championing setuptools and setup.py test.
The problem with the `test` command was twofold: it depends on `easy_install`, which is deprecated and can be insecure, and it was too entangled with other behaviors (builder, test runner, installer). The PyPA is working hard to create decoupled tools that can specialize in a domain and do that thing well, under possibly independent maintenance, enabling competing tools to fill the same need, perhaps in a more specialized way. The PyPA has made strides toward this goal, but you're right that there are still gaps.
I do think there are some answers to your questions:
> Why [was] a new file `pyproject.toml` introduced?
This section briefly covers the motivation. Perhaps you weren't convinced, but it's the format on which packaging has rallied, so I'd either accept the standard or propose a change. It'll be a tough battle, though.
> Provide an alternative for `./setup.py test` [with] a good way to define test dependencies. Should I create another `requirements.txt`?
Yes, perhaps. In my projects for some time, I used `tests/requirements.txt` to define the test dependencies. That process worked okay, but it had one major drawback: the test dependencies weren't readily discoverable as package metadata the way `tests_require` presented them.
I have found another pattern that works rather well: define a `tests` extra. I use `testing`, but a more common idiom is `tests`. Then, when you wish to install the test requirements, simply `pip install .[tests]` (or `pip install -e .[tests]`). This approach also works with `tox`, because tox allows specifying extras to be installed. I've migrated to tox because it answers the question, "where should these dependencies be installed if not in my system or user site-packages?"
If you want the dependencies transiently installed, the way you get with `setup_requires` and `tests_require`, you may want to try pip-run. It allows you to `pip-run .[tests] -- -m pytest` and get behavior very similar to what `setup.py test` did, but using pip. The main difference is that the setup time is slower than with `setup.py test` because it doesn't save the installed artifacts anywhere. Also, editable installs aren't supported. I rarely use this approach, though, because tox is pretty great, and way better than a Makefile. Others like nox instead, which is Python-based, but I've not gotten to know it well.
There is some progress on PEP 582, which promises to provide a standardized place to store a project's packages, and that may provide some relief in the future.
> Remove `setup.py` and use `setup.cfg`
That's underway and mostly complete. All that's left is for there to be a standard for Python projects to have editable installs, which is an effort underway but with real challenges.
After that, any tool can support PEP 517/518 installs, and setuptools-specific behaviors become implementation details and cease to be public interfaces.
Hope that helps.
> PyPA is working hard to create decoupled tools that can specialize in a domain and do that thing well under possibly independent maintenance and enable competing tools to fill the same need perhaps in a more specialized way.
The good things about Python at one point were that it was batteries-included, that "There should be one-- and preferably only one --obvious way to do it," and that "Simple is better than complex." Specialized micromanagement tools with competing implementations are the opposite of these philosophies. They make it impossible to know what the current best practice for Python packaging is.
Here are some of the fears, uncertainties and doubts anyone I've introduced to Python packaging has come across eventually:
- Do I make requirements-test.txt, requirements.txt and requirements-dev.txt, or do I use `test_requires`, `install_requires`, `build_requires`, or setup.py's extras?
- Is setup.py getting deprecated? Should we be throwing it out already? Marking our setup.py's with deprecation warnings?
- virtualenv vs conda?
- pip-run vs pipenv vs virtualenv vs virtualenvwrapper?
- scripts vs entry_points?
, or do I use setup.py's extras?https://packaging.python.org has a mixture of old and new information on these subjects, and little clear guidance on what we're supposed to be using now or how it's all supposed to fit together. setup.py was nice -- a single, declarative place for all the important bits about making a package -- now there are 5 files you need -- 6 if you use type annotations -- just to write metadata about a package.
I appreciate that you took the time to explain the flaws in the old ways and some of your best practices, and I appreciate your decade (decades, counting everyone's time in PyPA) of work on corralling the python packaging issue; I know it's too late now but I wish the old system could have been refined into a golden standard instead of being balkanized into a long period of limbo and uncertainty.
> - Do I make requirements-test.txt, requirements.txt and requirements-dev.txt or do I use `test_requires`, `install_requires`, `build_requires`, or do I use setup.py's extras?
The `requirements.txt` file and its variations are generally useful for applications; the `*_requires`/`extras` for libraries. Pick either depending on what you're developing.
> - is setup.py getting deprecated? Should we be throwing it out already? Marking our setup.py's with deprecation warnings?
Deprecated? I don't think so. It's recommended to use the declarative `setup.cfg` for content that can be defined declaratively. It's setuptools-specific though, allowing other backend implementations to not need it. Setuptools is fairly advanced in general, which causes steep on-boarding for users; e.g. flit is much simpler for new users to the language.
> - virtualenv vs conda?
conda is mostly aimed at data science users and is a proprietary package manager, not developed/endorsed/controlled by the Python ecosystem. Again, they optimize for different target groups IMHO and are not really comparable.
> - pip-run vs pipenv vs virtualenv vs virtualenvwrapper?
Each of these optimizes for a different target group. virtualenv/venv is the low-level abstraction. pipenv is a tool that improves the development of applications, but is not adequate for libraries.
> - scripts vs entry_points?
Nowadays entry points are the way to go.
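As a sketch (the tool name and module path below are placeholders, not from this thread), an entry point declared in `setup.cfg` looks like:

```ini
# On install, pip generates a platform-appropriate "mytool" launcher
# script that imports mypackage.cli and calls its main() function.
[options.entry_points]
console_scripts =
    mytool = mypackage.cli:main
```

Unlike the older `scripts` keyword, this works uniformly across platforms and keeps the executable logic in an importable module.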
> setup.py was nice -- a single, declarative place
This is inaccurate. `setup.py` is a Python file that allows running arbitrary Python code, and as such is imperative, not declarative.
> Again they optimize for different target groups IMHO and not really comparable.
This is my point though. IMO the division between application and library, and between scientist and user, shouldn't be so crystal-sharp. It means we can't benefit from each other's knowledge and work, and it makes the whole ecosystem scary and overwhelming to new learners, or even to old learners who have to come back and forget everything they thought they knew about it.
> > setup.py was nice -- a single, declarative place
>
> This is inaccurate. `setup.py` is a python file allowing to run arbitrary Python code and as such imperative, not declarative.
This is a fair point. The arguments to `setup()` itself are declarative, but the whole file is not.
I don't think that's a problem, though. Another thing I liked about Python was that it gave you structure but trusted you to use it, instead of hounding you. The slogan is "preferably one obvious way to do it", not just "only one way to do it". Lots of people used this power to generate parts of the declarative data (the official docs even recommend this). But I try to avoid that power whenever possible.
> > PyPA is working hard to create decoupled tools that can specialize in a domain and do that thing well under possibly independent maintenance and enable competing tools to fill the same need perhaps in a more specialized way.
>
> The good things about python at one point were that it was batteries-included, that "There should be one-- and preferably only one --obvious way to do it." and "Simple is better than complex.". Specialized micromanagement tools with competing implementations is the opposite of these philosophies. They make it impossible to know what the current best practice is for python packaging is.
>
> Here's some of the fears, uncertainties and doubts anyone I've introduced to python packaging has come across eventually:
>
> - Do I make requirements-test.txt, requirements.txt and requirements-dev.txt or do I use `test_requires`, `install_requires`, `build_requires`, or do I use setup.py's extras?
> - is setup.py getting deprecated? Should we be throwing it out already? Marking our setup.py's with deprecation warnings?
> - virtualenv vs conda?
> - pip-run vs pipenv vs virtualenv vs virtualenvwrapper?
> - scripts vs entry_points?
>
> https://packaging.python.org has a mixture of old and new information on these subjects, and little clear guidance on what we're supposed to be using now or how it's all supposed to fit together. setup.py was nice -- a single, declarative place for all the important bits about making a package -- now there are 5 files you need -- 6 if you use type annotations -- just to write metadata about a package.
>
> I appreciate that you took the time to explain the flaws in the old ways and some of your best practices, and I appreciate your decade (decades, counting everyone's time in PyPA) of work on corralling the python packaging issue; I know it's too late now but I wish the old system could have been refined into a golden standard instead of being balkanized into a long period of limbo and uncertainty.
I think discuss.python.org may be a better location for this discussion
https://discuss.python.org/t/proposal-for-tests-entry-point-in-pyproject-toml/2077/9
I want to warn about promoting a single test package like tox; that would be a great way to kill innovation. You should at least list the major test drivers. WARNING: highly opinionated statement follows: especially when there are superior alternatives like nox.
> WARNING: Highly opinionated statement follows: Especially when there are superior alternatives like nox
It's a moot point to debate the strengths of one over the other; please refrain from doing so. setuptools is not in the business of choosing winners; here we just mentioned the most popular choice as of today. Everyone is free to use whatever tools they prefer: nox/tox/poetry/pyflow/etc. We should probably standardize the interface to test runners at some point, to be fair. I personally prefer the ones that are declarative over imperative, just going by the lessons learned from `setup.py` vs `setup.cfg`; e.g. we're moving away from `setup.py` for a very good reason.
@gaborbernat Seeing the strong reaction from you, I think I kind of got my point through. I think setuptools (being a core component) should not advocate a specific test tool, which I felt was suggested with "pointing people at tox". I did not mean to start a debate over which tool is better. I have used tox earlier and it is fine, probably the first to solve the problems that it addresses, so lots of credit for that.
@pganssle @jaraco `tests_require` is still not deprecated and is causing problems due to being broken.
To be more precise, `tests_require` isn't broken, but rather it doesn't implement newer features (PEP 503) that honor downloading requirements based on Python version. It still works as well as it always has, where developers were required to restrict their test dependencies to versions that were compatible with the relevant Python versions. And actually, in late versions of setuptools, if pip is present, it's preferred for installing packages, so it should honor PEP 503 Requires-Python directives.
And to be sure, the `test` command is deprecated and the pytest-runner project is deprecated, and as far as I know those two projects are the only two that invoke `tests_require`, so I'm unsure of the value of deprecating the install_dists method.
If there's something more than needs to be done here, please open a new issue describing the missed expectation and possibly proposing an approach to address it.
I think there's at least some agreement in #931 that we want to remove the test command. I think we should start by raising deprecation warnings pointing people at `tox`, the same way we've done for the `upload` and `register` commands.

The most likely stumbling block that I see is that I think a huge number of people have created their own `test` command that invokes their preferred test runner, `pytest` or whatever. Ideally we'd want to get the deprecation warning to them as well. Hopefully most of them are using `TestCommand` as their base class, though if we want to get really aggressive about it we could try parsing `sys.argv` directly.

I think we need to warn in the following situations:

- the `setup.py test` command is executed
- `tests_require` is specified
- `aliases.test` is specified in `setup.cfg`

It's likely that at least two of these will be specified, but I think two separate warnings would be useful.
CC: @gaborbernat @RonnyPfannschmidt @nicoddemus
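As a rough sketch of the warning being proposed above (illustrative only, not the actual setuptools implementation; the class name and message are hypothetical):

```python
import warnings

class DeprecatedTestCommand:
    """Hypothetical stand-in for a setuptools command class; the real
    implementation would subclass setuptools.Command/TestCommand."""

    def run(self):
        # Emitting the warning from run() catches both direct
        # invocations of `setup.py test` and subclasses that call
        # super().run(), covering the TestCommand-subclass case above.
        warnings.warn(
            "The test command is deprecated and will be removed; "
            "use a dedicated test runner such as tox or pytest instead.",
            DeprecationWarning,
            stacklevel=2,
        )

# Demonstrate that invoking the command surfaces the warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    DeprecatedTestCommand().run()

print(caught[0].category.__name__)  # prints: DeprecationWarning
```

A similar warning could be emitted at `setup()` time when `tests_require` or an `aliases.test` entry in `setup.cfg` is detected, matching the three situations listed above.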