Open dstufft opened 9 years ago
I like the move from pip to _pip; that way pip's implementation goes to a more "private" namespace. The expense is breaking tools that reach into pip's internals.
The other expense is that if we ever want to make a public API we're either limited to having a single namespace (whatever is in pip.py) or we need to change it back to a package (and possibly break Python 2.6 again, unless we've deprecated it by then).
Of course, we may never make a public API in which case, the point is moot.
I can't help but think that @warsaw and @ncoghlan probably have some opinions on this too.
Maybe @bkabrda too! and @tdsmith
Don't underestimate the power of that first point. The inertia is high: lots of tools will assume pip, and lots of documentation will be wrong. Having a long deprecation cycle is basically mandatory here. Otherwise, I think this is a good idea: +1.
-1 on removing pip; I would have to change all of my deployment scripts. +1 on removing pipX and pipX.Y.
Don't underestimate the power of that first point. The inertia is high: lots of tools will assume pip, and lots of documentation will be wrong. Having a long deprecation cycle is basically mandatory here.
Yea, completely agree. I essentially assume that we should not have a defined removal date (and possibly never) and just have it log a message to stderr.
I essentially assume that we should not have a defined removal date (and possibly never) and just have it log a message to stderr.
I don't think that will work. Just printing annoying warnings with no defined "your shit will break no later than X" date doesn't really help: people will just ignore the warnings. I think if you want to do this you should decide a date (possibly one far away, but still). One possible date to start the discussion: when the last RHEL LTS release with Python 2.6 stops being supported (that's very far away still, but worth discussing).
-1 on removing pip I would have to change all of my deployment scripts. +1 on removing pipX and pipX.Y
I don't think it makes sense to deprecate (not talking about removal any time soon) pipX and pipX.Y without also doing it for pip, since that is arguably the worst offender of them all.
One possible date to start the discussion: when the last RHEL LTS release with Python 2.6 stops being supported (that's very far away still, but worth discussing).
This is a good idea.
One possible date to start the discussion: when the last RHEL LTS release with Python 2.6 stops being supported (that's very far away still, but worth discussing).
That's November 30, 2020 for the end of RHEL 6 Production 3 phase. They have a super special extended life cycle beyond that, but perhaps we could just target 2020 and if we roll around to 2020 and Python 2.6 is still somehow in wide support, we push it back further.
Another alternative would be to stop requiring that pip execute in the same Python environment that it is installing things into...
To be honest, I'm not sure how pip install -p python2.7 is any better than python2.7 -m pip install. We have to inspect the Python we're installing into to get information from it, so either we're going to subshell into that Python to shuffle data back and forth (like my half done virtualenv rewrite does) or we'll need to continue to be executed by the same Python environment. Feels like shuffling deck chairs more than anything else.
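The "subshell into that Python" approach mentioned here can be sketched as follows. This is an illustration of the idea, not pip's actual code: the installer asks the target interpreter to describe itself, so the installer itself need not run under that interpreter.

```python
# Sketch of subshelling into a target Python to gather the details
# an installer would need (version, site-packages location).
import json
import subprocess
import sys

def inspect_python(python=sys.executable):
    """Return the version and purelib path reported by `python` itself."""
    probe = (
        "import json, sys, sysconfig; "
        "print(json.dumps({'version': list(sys.version_info[:3]), "
        "'purelib': sysconfig.get_paths()['purelib']}))"
    )
    out = subprocess.check_output([python, "-c", probe])
    return json.loads(out.decode("utf-8"))
```

Any interpreter path could be passed in place of `sys.executable`, which is the point of the `-p python2.7` idea: one pip, many target Pythons.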
For that particular idea the main benefit would be that you would only have to upgrade pip once.
That much is true. The flip side is that it makes it (somewhat) harder to support versions of Python older than what pip itself supports, since the installs are no longer independent (or you'll need to keep around an older copy installed somewhere else). On the other hand, pip could more easily drop support for running the main pip binary command in a particular Python, while keeping compatibility for installing into that version of Python. It would continue to enable bundling all of pip into a single zip file that can be executed somewhat independently of actually having it installed.
It doesn't address the fact that pip install --upgrade pip on Windows blows up because the OS has a handle open to pip.exe though, which I don't think can be solved without using python -m, at least if I understand @pfmoore correctly.
I think @sYnfo wants to comment on this more than I do, since I no longer maintain Python in Fedora/RHEL.
Ah, that's right, I forgot. Sorry!
(But I personally think that even if these are removed, we'll still provide them on distribution level in form that is best for the set of Python interpreters that we ship; at least that was my first idea when I read the proposal... We have a general policy for Python executables that mandates this in Fedora.)
py -m pip :-)

For Python 2.6, I'd be fine with either keeping pip or advising python2.6 -m pip.__main__. I don't think it's worth making changes to pip to give them any other solution.

The inertia issue is huge, and I don't think we should fight it directly. Rather, we should switch the documentation to use the "python -m pip" form, and make that form official PyPA policy (by which I mean we take pains to use that form consistently in whatever we post, etc). Maybe offer PRs for the install documentation of well-known projects to switch them to the new form. We can worry about formally deprecating and/or removing the scripts once the python -m pip form starts to gain a bit of traction in common usage.
I think my success criterion is making this the incantation that people get from StackOverflow. Shoot for the moon, etc etc.
I think publishing a linkable intent-to-deprecate message with a rationale may help convince third-party maintainers to accept documentation PRs.
The messaging here is tricky, because deprecating pip doesn't mean deprecating pip...
I think the only real alternative is Daniel's suggestion. I think the current situation sucks and I can't really think of a way to save the attempt to manage pip versions using a suffix or prefix or anything that doesn't end up actually specifying which interpreter you want to run under.
I'm +1 for deprecation—it seems to make a whole lot of sense from upstream point of view and I don't see any issue this could cause downstream.
In Fedora this would mean shipping all the binaries during the deprecation period, as we do now; and not shipping them at all afterward. Perfect sync with upstream, hopefully. :) I'll make sure all the Fedora docs get updated, when this goes official.
( @bkabrda If I understand the guidelines correctly, they only mandate shipping all the MAJOR.MINOR executables iff there are any executables in the first place, this issue seems to be about removing the pip executables entirely, right? )
Also +1 to either keeping pip or using python -m pip.main with python 2.6.
A couple of points that haven't come up:

- Doesn't this just shift the problem to python? Why is python (of various versions and in various virtualenvs) any easier to keep track of than pip?
- In an activated virtualenv, pip unambiguously refers to the active one. An alternative solution would be to encourage people to always use venvs, which solves other problems as well, e.g. dependency conflicts from dropping everything into a global environment.

I'm not big on the idea, but I don't think I've ever run pip outside of a virtualenv, which is usually activated into my current shell, so the extra typing doesn't gain me anything (I'm selfish!). Of course, if it's not activated, I probably have to type a lot more than that anyway...
How would people feel about making an exception for virtualenvs? There's no mystery for which python it is using, but then again it might be too confusing to have it work differently between for virtualenv and the system install, so I'm feeling pretty humble about its quality as a suggestion. I'm also not sure there's a good solution for checking that.
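Checking whether you're in a virtualenv is actually fairly tractable. A minimal sketch (an illustration, not an official pip API), relying on the markers that venv and virtualenv leave on the sys module:

```python
# Detect a virtual environment from interpreter state.
import sys

def in_virtual_environment():
    # venv (3.3+): sys.base_prefix differs from sys.prefix inside an env.
    # virtualenv: additionally sets sys.real_prefix on older versions.
    return (getattr(sys, "base_prefix", sys.prefix) != sys.prefix
            or hasattr(sys, "real_prefix"))
```

A tool that wanted to special-case virtualenvs could branch on a check like this.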
Also,
pip3.4 isn't specific enough, you might have 3.4.0 and 3.4.1 installed.
Does this really come up often? I'm curious how you're distinguishing between them if so, since I didn't think python would normally install itself any more specific than by the pythonX.Y name (and unless they have different prefixes, sounds like it would also try to share site-packages anyway).
Oh, I guess that does apply having multiple 3.4.0 installs too, but then it seems like you'd definitely be using full paths to distinguish whether you're using "pip" or "python -m pip", but since you're using full paths anyway, the argument against having extra wordiness goes away.
I may be a little disconnected here, but why not replace pip with an alias that was effectively something like:

alias pip='/usr/bin/env python -m pip'

Or a script that was akin to this, to be installed into /usr/local/bin/pip.

That way we don't lose pip, and it universally works errywhere.
Also not sure if that solves the problems you're facing, just my $0.02. :rage3:
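A script along those lines might look like the following. This is a hypothetical sketch, not something pip ships; its only behavior is to re-invoke "python -m pip" with whichever interpreter ran the script, so python and pip can never disagree.

```python
#!/usr/bin/env python
# Hypothetical /usr/local/bin/pip shim: defer to "python -m pip".
import subprocess
import sys

def run_module(module, argv):
    """Run `python -m <module> <argv...>` and return its exit code."""
    return subprocess.call([sys.executable, "-m", module] + list(argv))

if __name__ == "__main__":
    sys.exit(run_module("pip", sys.argv[1:]))
```

Note this still inherits the ambiguity discussed below: the script belongs to whichever python's bin directory it sits in.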
@erikrose
It does sort of shift the problem to python, but typically the confusion that I've seen arise stems from when python and pip disagree. Like, you might have /usr/bin/python and /usr/local/bin/python, and if python points to /usr/local/bin/python and pip points to /usr/bin/python, it's a recipe for confusion. So we completely eliminate the confusion caused by having the two disagree, by removing any ability for them to disagree. You're still left with confusion about what python means, but I don't think there's anything at all that can be done about that, and particularly not by us.
A virtual environment makes the issue harder to hit, but I think that it's still possible. My gut tells me that new users (the ones most likely to get tripped up by this) are probably not going to be religiously using virtual environments (if they're using them at all).
@rschoon
We could possibly do that; it'd require a permanent special case inside of pip for when pip installs itself (because there's no setup.py in a virtual environment), but it's certainly a possibility. My main fear with that is it feels like it'd just always be better to use python -m pip anyway, because it works inside and outside of a virtual environment, instead of having to remember to switch commands based on whether you're in one or not.
I'm not sure exactly how often the 3.4.0 and 3.4.3 thing comes up. I know that it's not super unusual on OS X, since you have the system-provided Python sitting in /usr/bin/python and you might also install the Python.org installer or Homebrew or MacPorts or pyenv (or some combination of the above).
@mattrobenolt
Basically, because it's super confusing if you're doing something like running myvenv/bin/pip without activating the virtual environment first, or if you have a copy of Python installed to a non-standard location like /opt/my-app and you run /opt/my-app/bin/pip and expect to manage /opt/my-app/bin/python.
It feels like a loss for virtual environments to lose a basic pip script when they don't suffer from this problem.

As for real pythons, I guess I wouldn't mind seeing a more exact pip-<python binary path> type console script, and let the distro packagers tack on simpler scripts for system-managed pips.
-1 on deprecating pip, +1 on the others.
I want to expand on some framing thoughts I have, that might help for discussing what is such a large issue. Skip to the break if you just want to read my thoughts on solutions.
For a start, you want to look at why we have this problem, and whether it is analogous to anyone else's. It comes from having and allowing multiple pythons on the one running system. System package managers do not have this problem because, for instance, you don't (can't) run debian jessie and debian wheezy live at the same time on one system, so a system package manager doesn't need to manage a libreoffice3.5 and a libreoffice4.3, for example. However, many other language package managers start having to deal with the same problem as pip's when they too have multiple versions installed. As @erikrose mentions, even python itself already runs into this issue of deciding what python now is when more than one is installed.
I also want to look at the issue from the POV of the majority of python users. Note that this has become a real pain point mostly (or more so) for people with more than 2 pythons installed. Otherwise pip would work, or simply pip2 and pip3 (and there probably wouldn't be enough inertia for everyone to start discussing). Beyond that it starts getting really complicated. But I'd believe most python users are happy using just one python. Even if their system somehow gives them 2 at some point, or they manage to install multiple instead of upgrading (removing the previous one and installing a new one), if they got things right they'd be fine with only one python going, and by extension that python's pip. For all of these users, suddenly taking away pip makes absolutely no sense and is just painful.
The other big source of pain is when someone merely tries to install a new python over a previously existing one, but the story for the entire environment being migrated to the new one (or "taking over the old one") isn't there. In that case I want to make the distinction that this is the install story's fault, not the existence of multiple pythons. For instance, someone hoping to "install a new python!" but not getting it on their path. Even uninstalling the old python might do nothing about giving them the environment they want (the new python on their path).
Also note that while the number of users collecting problems with managing pips may be small, their complaints are the only ones heard. I'd venture that their opinions are probably the majority in this discussion as well (because they're the ones with the issue). The silent majority doesn't care until things get changed for them, but we should still look to represent their use case fairly. Not, of course, that those complaints therefore can't be valid.
Now with that in mind, here are the solutions I like:

- pip<versionstuff> doesn't scale well at all as a solution, so I'm in agreement on removing it. Most of the time it's better to wait for a decent solution (if it exists) than to keep going with one that creates as many issues as it solves.
- pip should not overwrite an existing pip (or other pip2s or pip3s, even) without asking. Even system package managers do this: they ask beforehand. So should we. This way, at the very least, the user sees straight away that there's an issue and can perhaps decide for themselves what they want pip to be. One can do a lot with this: look at whose python installation the existing pip comes from; if it's the same one as the current pip trying to install itself, then this could be fine. Otherwise make the user say --yes or --replace or answer Y to a prompt. Note that this could help with the same problem with other userland-programs-that-come-from-pypi: make sure the user wants the executable script replaced. Even if this means we have to wait a long time to tell people that they might have to interact after calling pip install by default (for scripted uses of pip), and give them time to add --replace-scripts (or w/e) to their callouts, so be it.
- Start outputting some information about where things come from in an install! This could solve a lot of issues straight away. If I install a package with pip, pip doesn't say much about any of this.

This would all be super useful information to know. It will immediately show me if the pip I'm calling on the command line is not the actual pip that I want, which is what trips up a LOT of users. It will also show me I'm installing for the right python and into the right site-packages.

I especially believe implementing the last two points above would remove a lot of average user problems in relation to this issue. In many cases users would be empowered to know the problem and the solution themselves.
One option I just thought of, once we actually remove the scripts (if that's what we do) we could just make a pip-cli package which restores them. This would both make it trivial for people to get the old behavior back (just install pip-cli) and make it easy to keep the scripts inside of virtual environments (just have them install pip-cli too).
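A minimal sketch of what such a pip-cli package's entry point might look like. The names here (pipcli, main) are hypothetical, not a published package; the console script simply defers to the owning interpreter's pip module.

```python
# Hypothetical pipcli module: restore the "pip" script as a thin
# entry point that runs "python -m pip" for its own interpreter.
import subprocess
import sys

def main():
    """Entry point for a hypothetical `pip = pipcli:main` console script."""
    return subprocess.call([sys.executable, "-m", "pip"] + sys.argv[1:])
```

The package's setup.py would then declare something like entry_points={"console_scripts": ["pip = pipcli:main"]}, so installing pip-cli into a virtualenv restores the familiar script there.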
This would both make it trivial for people to get the old behavior back (just install pip-cli) and make it easy to keep the scripts inside of virtual environments (just have them install pip-cli too).
As a point of reference, if you're not aware, this is how grunt works in the node world. Not sure if it's the best example, but it's a thing.

The tl;dr is you $ npm install grunt into your project for a local install, then $ npm install -g grunt-cli to give yourself a global $ grunt, which ends up just using the locally installed package.

From my understanding, that sounds similar to what you're proposing?
@mattrobenolt in a sense it's not helpful, because that is still only talking about one install of / one version of nodejs on your system. If you have two node installs, which one does the global grunt script belong to?
Node also has the "advantage" in this sense in that its package installs are location-local by default, which is the opposite of python. Even in a virtualenv, you are installing packages "globally", but inside in an isolated environment (instead of your system one).
If you have two node installs, which one does the global grunt script belong to?
It shouldn't technically matter, since it's just a shim into the real pip that's installed into your virtualenv or whatever you're doing.

In our case, it could install into the system python; whatever that python is, it's an implementation detail. It just needs to shell out.
because that is still only talking about one install of / one version of nodejs on your system
You're assuming people don't use nvm or any other virtualenv-like tools? Same idea.
But again, my context is mostly limited to being a user, so I'm sure there are many contexts that I'm not taking into account. Not claiming to have the solution, just citing examples of other things in the wild that do similar things.
Even though, since the grunt-cli code is fairly stable, it indeed "shouldn't much matter" who put it there, the point of contention is which grunt it calls. As I said, in node-land 99% of the time this will be a path-local grunt, and everything is decided for you. You already know what version of nodejs your project on your current path is using. However, unfortunately that's not the case with python.
If I have python 2 and python 3 installed, and I call pip, even though this pip was provided by a pip-cli (equivalent of install -g grunt-cli) from one of the two pythons, which python's pip should the global pip script call? Here we no longer have a path-local system to guide us.
+1 for Ivoz's take, but in particular:
Start outputting some information about where things come from in an install! This could solve a lot of issues straight away.
This would be very useful (both to novice users and to experienced users from other platforms) no matter what the decision regarding invocation syntax.
+1 on deprecating pip.+ (pipX, pipX.Y) without deprecating pip itself. It also shouldn't be too much work to fix automation scripts, and in fact I think automation would benefit more from knowing exactly which python to run pip from. I haven't read all of the arguments yet, but I probably will and then return to give more opinions.
My first ever GitHub +1. I think it's fair to consider docs that don't recommend python -m pip flawed, since this invocation is much less prone to failure on a hand-bodged Python install like the typical novice developer's laptop.
From my perspective: I'm in agreement about the motivations for this change. Django has plenty of analogous problems with people using the wrong python version to run django-admin; moving to the python -m approach for both pip and django-admin would be an elegant way to address this issue.
My only concern is bullet point 2: 10 more characters to write. From a UX perspective, I'm concerned about introducing boilerplate that needs to be typed in order to run anything. Especially when dealing with new users, having a "Just trust me" magic incantation format isn't ideal.
One suggestion (although it requires a change to Python, rather than PyPA): make py (or some other shorthand) a shortcut for python -m. Yes, this means having 2 ways to invoke python, but you could defend it as "py runs modules, python runs code". The other downside would be that it would only benefit new python versions, unless it was backported to 2.7/3.[345].
@freakboy3742 yeah, I mean this also would mean that flake8, pep8, etc. should all move to this convention too (which, given the fact that pyflakes is very dependent on the version of Python, makes a bit more sense). py could also be distributed as a package for people looking to opt in early, but it conflicts with py.test's py module too, if I remember correctly.
+1 on advocating for python -m pip as default and preferred, rather than just pip.
I've been teaching beginners (kids even) and introducing newbies to Python for quite a while now. In addition to the points by @dstufft, everything gets complicated when you get to virtualenv territory, because the Python being used is not the default Python on the system path. In particular, these complexities get worse if using other Python distributions; e.g. with Anaconda Python, you can create a conda env without pip (e.g. you forget to conda install pip) and then the pip on the path happily continues to install in the root env and not your conda env. In the scientific space, many people are using Anaconda Python as their first, default Python.
In this scenario, using python -m pip ... will tell you if pip is not present in the active python.
As for perceptions of discomfort, it also exactly mirrors other very common invocations, e.g. python -m pdb, python -m ipdb, python -m cProfile, python -m timeit, python -m pstats, python -m SimpleHTTPServer, python -m json.tool, python -m gzip, python -m filecmp, python -m zipfile, python -m encodings.*, python -m mimetypes, python -m tabnanny, python -m pydoc, python -m unittest, python -m calendar, and probably a bunch of others I don't know about.
I'm quite sure making this change would make pip easier to explain to beginners. I have to explain the -m switch anyway for pdb and cProfile, so this change would be a net simplification for my students. (Venv would become much easier to teach too if all you had to do was call the correct python executable and not mess with paths and "activation", but that's a grumble for another time.)
It would be useful to at least keep the pip command for the case in which it is intuitive and (mostly?) unambiguous: inside a virtualenv. Doing that would also help with documentation inertia.
I quite like 'Move pip/ to _pip/ and make pip.py.' as an option. It seems viable, and while disruptive to the folk poking around in pip/ today, worth it to improve the user experience, particularly since we don't offer a public API today.
I'm a +0 on deprecating since it's not a usecase I've ever run into. Using python -m would be a lot of extra typing for users not familiar with aliases.
Since we're looking at a significantly long deprecation path is it necessary to come up with a hack for Python 2.6? Assuming a long enough deprecation path could we not assume that Python 2.6 will be of such a low usage that a hack is not necessary?
I think I agree with @audreyr: in a virtualenv, it's unambiguous, unadorned (no pip2/pip3/etc), and the Python layer of tooling (activate, et al.) makes it so that you get the "right" binary without resorting to telling users to configure their shells.
I have an even more radical proposal though: what if you didn't even run pip as an installer? Increasingly often, what I want is virtualenv --requirement project/requirements.txt project; if I want to "install" a new thing, it's time for a new virtualenv. Of course I break this rule all the time, upgrading existing venvs, removing dependencies and such, but this is mostly a bad habit that I think I should get rid of, especially now that wheels make new-virtualenv-creation fairly fast.
pip as a command line tool is really nice to have. Nobody knows about virtualenv yet, and there are some utilities that you want to use outside a virtualenv. What if I want to install the latest Mercurial or Stackless Python with pip? I can do this right now.
I can never get the npm/bower incantations right on the first try. Even if you remove the executable, somebody will still look at some old docs hanging around on the net, and will use the OS-provided python-pip and/or python3-pip, and it will fail. Then that somebody will still try sudo pip install -U someoldpackageyoudontwanttoupgrade and oops, there goes the OS yelling at you. Yes, I've done that. Just replacing this with sudo python -m pip will not avoid this situation.
Wouldn't it be easier to focus on making pip's behaviour for packages (and itself) --user-installable by default, then get rid of / replace pip2/pip3/whatever with links to the latest pip, so that it's found first in the user's path (I already have mine in ~/.local/bin/), working from its own wheel, and safely tucked away from whatever the OS "thinks is best"? I submitted a workaround to allow pip to work with virtualenv when it's set to user installs by default (https://github.com/pypa/virtualenv/issues/802) that also works with pyenv.
This is for people who don't really care which python version they are working with, and want something to "just work". Then some admin will still want to use "sudo pip install for everyone" and it just works. The user just tries "pip install for me" and it just works.
I'd rather not have to worry about "/whereismypython/python -m pip install -g --userDev --Ireallymeanit --pleaselistentome blah". And going back to "python setup.py install blah" seems to me like a step backwards.
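For what it's worth, where the user scheme puts things can be introspected from the stdlib. A small sketch (stdlib behavior only, nothing pip-specific) of why ~/.local/bin is the directory to have on $PATH in the setup described above:

```python
# Inspect the per-user install locations for this interpreter.
import os
import site
import sysconfig

# Packages, e.g. ~/.local/lib/pythonX.Y/site-packages on POSIX.
user_site = site.getusersitepackages()

# Scripts, e.g. ~/.local/bin on POSIX; the scheme name depends on the OS.
user_bin = sysconfig.get_path("scripts", scheme=os.name + "_user")
```

Because user_bin is shared across Python versions on POSIX, a single link there really would shadow the system scripts for whichever pip was installed last.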
-1 from me on dropping pip and its version-specific symlinks, as the stick of deprecation needs to be wielded very lightly.
We already have a significant ongoing problem with change fatigue in the Python ecosystem, as there are three major low level tech transitions (Python 2 -> Python 3, easy_install/eggs -> pip/wheels, unsafe by default -> secure by default) still in progress, and a lot of work remaining in propagating those out through the redistributor channels (direct upstream consumers that actually come talk to us online are the tip of the iceberg when it comes to Python's user base). Adding a "pip" -> "python -m pip" transition on top of that isn't worth the pain right now (as a rough guesstimate, I'd say my opinion on that might change by 2017 or so).
However, I do think it's worth emitting a message whenever pip is run globally, stating where it is installing and how to target a different runtime. For example:
"RuntimeWarning: no venv detected, installing into '/usr/lib/python2.7/site-packages'. Pass '--user' for user-specific installation, or run '<other_python> -m pip' to target a different runtime".
I also see an opportunity to tie the python -> py transition into the Python 2->3 migration, but that's a topic for python-ideas rather than here.
@cjrh +1 for advocating python -m pip, especially in documentation of a PyPA-preferred install for new deployments (esp. science and data science as well as education). conda and brew complicate things, as Caleb mentions.
Agree with @audreyr in a virtualenv.
The transition can initially be documentation. @ncoghlan I think there is already a significant amount of pip/conda/brew troubleshooting being done by maintainers of third party projects in data science/science to walk end users through the many permutations. I agree emitting better warnings would be helpful too.
So, +1 for:
- python -m pip install ... in general
- pip install in a virtualenv
I also really like @glyph's suggested virtualenv --requirement project/requirements.txt project
Actually, all three approaches deemphasize version numbers in execution. This may actually be an unexpected benefit toward moving some projects to Python 3.
-1 on removing pip I would have to change all of my deployment scripts. +1 on removing pipX and pipX.Y
My reasons are that I have basically 2 use cases for using pip (which have somewhat already been mentioned by @audreyr and others here but I'll repeat):
I realize these are just my personal use-cases but I would believe they are pretty common among people who use pip.
I've only used pip3 a couple of times, to install some python3-only packages system-wide, which is a use-case that should slowly disappear (probably faster than the proposed deprecation warning though) since distributions are slowly moving to python3 by default.
And to close this comment, I'm not for using a different syntax inside and outside virtualenv's, this will just be confusing for a lot of people (cf. @Samureus comment on how confusing npm is)
How about providing pip via a new command that only defers to python -m pip?
Currently, people are regularly running into problems over confusion about which particular Python a particular invocation of pip is going to manage. There are many cases where what Python a particular invocation of pip* is going to invoke is unclear:

- pip3 install --upgrade pip will overwrite pip, and possibly switch it from pointing to 2.7 to 3.5.
- pip3 isn't specific enough, you might have 3.4 and 3.5 installed.
- pip3.4 isn't specific enough, you might have 3.4.0 and 3.4.1 installed.
- pip3.4.0 isn't specific enough, you might have multiple copies of 3.4.0 installed in various locations.
- What do we call it on alternative implementations? pip-pypy? What if we have two versions of PyPy? pip-pypy2.6? What if we have PyPy and PyPy3? (pip-pypy-2.6 and pip-pypy3-2.6?)

Overall, it's become increasingly clear to me that this is super confusing to people. Python has a built-in mechanism for executing a module via python -m. I think we should switch to using this as our preferred method of invocation. This should completely eliminate the confusion that is caused when python and pip don't point to the same version of Python, as well as solve the problem of what to call the binary on alternative implementations.

In addition to the confusion, we also have the fact that pip install --upgrade pip doesn't actually work on Windows because of problems with the .exe file being open. However, python -m pip install --upgrade pip does work there.

I see only three real downsides:

- Existing documentation and tooling assume pip, and this will be churn for those.
- python -m pip is 10 extra characters to type.
- Python 2.6 only supports -m when the target is a module, not a package.

For the first of these, I think the answer is to just have a very long deprecation cycle, on the order of years. I wouldn't even put a specific date on its removal; I'd just add the warnings and re-evaluate in the future. Luckily we've shipped support for python -m pip for quite some time, so it won't be something where people need to deal with version differences (mostly).

The second of these I don't really have a great answer for. I think that 10 extra letters probably isn't that big of a cost to pay for the reduced confusion and for the default answer working on Windows. We could possibly offer a recipe in the docs to restore pip, pipX, and pipX.Y via shell aliases.

The last item is the biggest sticking point for me. As far as I know, Python 2.6 still has far too many users for us to drop it since, as of 6 months ago, it was still ~10% of the traffic on PyPI (source). The problem with Python 2.6 is that it only supports -m when the target is a module, not a package. I see four possible solutions to this:

- Don't deprecate pip* on Python 2.6.
- Add a pipcli.py that people can invoke like python -m pipcli instead of python -m pip.
- Move pip/ to _pip/ and make pip.py.
- Document that the invocation on Python 2.6 is python -m pip.__main__.

I don't really like the pipcli idea; the other three all have pros and cons, but I think I could personally live with either not deprecating pip* on Python 2.6 and/or documenting that it needs to be invoked as python -m pip.__main__ on Python 2.6.

What do the @pypa/pip-developers think?