Open tacaswell opened 2 months ago
I'm a bit confused. If pip downgraded numpy, it's because one of the packages you installed declared that it depended on numpy < 2.0. So an environment with that package and numpy 2.0 in it would be broken. Pip won't create a broken environment containing incompatible packages[^1], so what you seem to be asking for simply isn't possible. (And to be clear, if you're asking for an option that asks pip to create a known-broken environment, then we're not going to agree to that.)
[^1]: Actually, you can make it do so if you know how, but it's not advised, and isn't supported behaviour...
I believe the standard way to handle this workflow is to resolve your packages ahead of install time. Using `pip-compile` or `uv pip compile`, you have a `requirements.in` that specifies your requirements; your requirements are then compiled into a `requirements.txt`, which is everything you need for your environment; then you use `pip-sync` or `uv pip sync`, which gives you exactly what you explicitly requested.
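A minimal sketch of that workflow (the requirement listed here is just an example):

```sh
# declare only your direct requirements
echo "numpy>=2.0" > requirements.in

# compile them into a fully pinned lock file
pip-compile requirements.in              # writes requirements.txt
# or: uv pip compile requirements.in -o requirements.txt

# make the environment match the lock file exactly,
# removing anything not listed in it
pip-sync requirements.txt
# or: uv pip sync requirements.txt
```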
And if you need to override requirements of packages, uv explicitly supports that: https://docs.astral.sh/uv/concepts/resolution/#dependency-overrides
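For instance, something along these lines should let the resolver keep numpy 2.x even when another package caps it (see the uv docs linked above for the exact current syntax):

```sh
# any declared requirement on numpy is replaced by the override
echo "numpy>=2.0" > overrides.txt
uv pip compile requirements.in --override overrides.txt
```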
Or am I missing something about what you need?
Sorry, I provided too much context, which was a distraction!
> I'm a bit confused. If pip downgraded numpy, it's because one of the packages you installed declared that it depended on numpy < 2.0.
Correct, but I would like it to fail the install and tell me why, rather than do the downgrade.
> So an environment with that package and numpy 2.0 in it would be broken.
This is not necessarily true: there are a lot of packages out there that pessimistically put upper caps on dependencies[^1], and it is trivial to have an environment whose packages are incompatible in practice (as in the code does not run properly) even though the metadata says everything should be OK.
It is cool that `uv` has dependency overrides (reinforcing the point that the pinning can be disconnected from whether things actually work); I hope that also makes its way back to `pip`! However, that only works if you already know you need an override. I would like a flag on pip (and uv and pixi, but I figured I'd start with pip for political sequencing issues) that tells you when you need to use an override (or check out the source and remove the pin to see if it is there for a good reason).
This flag would also be useful for other packaging ecosystems to replace things like https://github.com/conda-forge/h5py-feedstock/blob/d742663333407a835745e0160b613e63a3feb095/recipe/build.sh#L23
"${PYTHON}" -m pip install . --no-deps --ignore-installed --no-cache-dir -vv
If h5py were ever to gain an extra dependency[^2] and we missed it when updating the runtime dependencies, this command would happily work and we would distribute broken binary artifacts. If we had `--explicit-only`, then packagers could use it to leverage the pip/wheel metadata automatically.
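Concretely, the build step above could then become something like the following, where `--explicit-only` is the hypothetical flag proposed in this issue (it does not exist in pip today):

```sh
# same install, but fail if the wheel metadata declares any requirement
# that is not already satisfied in the build environment
"${PYTHON}" -m pip install . --explicit-only --ignore-installed --no-cache-dir -vv
```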
I'm sure the compile workflow works for some people, but I do not think everyone can be shoe-horned into it (for example, when doing cross-library development, or if you use some other packaging ecosystem for your base environment).
[^1]: In this particular case numpy advised the hard upper pin to work around ABI compatibility issues with wheels (wheels built with numpy < 2 would not work with numpy >= 2, but wheels built with numpy >= 2 work with older numpy), and because there is currently no other metadata to track ABI, the only option was an artificial upper cap on the version. If you built from source there was no (ABI) problem, and what I was trying to test was whether there were real problems! However, that is an entirely different discussion that should not derail this one.

[^2]: Which is very unlikely, but for argument's sake.
> Correct, but I would like it to fail the install and tell me why, rather than do the downgrade.
OK, that makes more sense. I think we'd need to see more evidence that this is a common requirement, though. We can't just add a new option to pip whenever someone comes up with something that would be useful to them - we need to consider the costs vs the benefits.
> This is not necessarily true, there are a lot of packages out there that pessimistically put upper caps on dependencies
I'm sorry, and I know you won't like this answer, but that's a bug in the package dependency metadata, then. If A says it doesn't work with B, but it does, then A's metadata is at fault, not pip for believing what A said.
> I would like a flag on pip (and uv and pixi, but I figured I'd start with pip for political sequencing issues) that tells you when you need to use an override (or check out the source and remove the pin to see if it is there for a good reason).
I don't think it's reasonable to assume that all installers will implement the same feature sets. And I definitely don't think you should "start with pip for political sequencing issues" - much better to start with the installer that is the best match for your use case (which may well be `uv`, if they already have dependency overrides). If you want all installers to behave in a certain way, that's about standardisation and would need to be handled through the PEP process. But there's a broad understanding that PEPs don't dictate tool user-experience issues, so I don't actually think that's a reasonable option. So start with the one that's most likely to support your workflow (which may well be `uv`, as they are more of a "workflow manager" than pip is).
Moving away from broad matters of policy, I don't think this suggestion is a good fit for pip. We don't have dependency overrides, we have a policy of expecting package metadata to be correct and reliable, and the use cases seem to me to be more about "manually managing lists of dependencies" than "installing packages". I'm going to leave this issue open to give the other maintainers a chance to give their views (which may well differ from mine), but personally, I'm against this idea.
> OK, that makes more sense. I think we'd need to see more evidence that this is a common requirement, though. We can't just add a new option to pip whenever someone comes up with something that would be useful to them - we need to consider the costs vs the benefits.
One way of looking at this could be: `--no-deps` should still resolve dependencies. After all, the option describes itself as "Don't install package dependencies". It doesn't say anything about permitting the creation of a broken install -- and it does indeed create a broken install if dependencies aren't available and it installs one wheel.
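To illustrate the current behavior (`h5py` is just an example of a package with runtime dependencies, and the output is paraphrased):

```sh
python -m pip install --no-deps h5py   # succeeds even in an empty environment
python -m pip check                    # only now is the inconsistency reported,
                                       # e.g. "h5py 3.11.0 requires numpy, which is not installed."
```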
Maybe changing that would be too big of a breaking change for existing workflows. Still, it's an interesting thing to think about...
> OK, that makes more sense. I think we'd need to see more evidence that this is a common requirement, though. We can't just add a new option to pip whenever someone comes up with something that would be useful to them - we need to consider the costs vs the benefits.
I think this would also go a long way toward preventing issues when using `pip` on top of conda (or any other packaging tool). Asking `pip` to install only what you explicitly requested, and to error if there are missing dependencies, gives the user the chance to choose which tool they want to use to satisfy that dependency.
> One way of looking at this could be: `--no-deps` should still resolve dependencies. After all, the option describes itself as "Don't install package dependencies". It doesn't say anything about permitting the creation of a broken install -- and it does indeed create a broken install if dependencies aren't available and it installs one wheel.
I think `pip install --no-deps --but-do-check-consistency foo` or `pip install --only-check-deps foo` are also reasonable ways to spell this. I think we need to keep the escape hatch for "trust me, it is fine, just unpack the zip file please".
What's the problem this feature will solve?
When installing a package with `pip install foo`, any required dependencies will also be installed, possibly downgrading or upgrading already-installed packages. In some contexts (such as when using `pip` to install into an externally managed environment, or when using a virtual environment but wanting to keep careful control of what is installed) it is preferable for the install to fail with an error.

Describe the solution you'd like

I think something like `pip install --explicit-only foo`, with an entry in the `pip install` help describing the flag. Ideally the error message on failure would also tell you what packages/versions the resolver requested and which of the explicitly requested packages is the source of that dependency.
I have a small patch that implements the requested behavior (without any attempt to thread through the configuration or a nice error message).
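As a rough illustration (not that patch itself), the same check can be approximated from outside pip using the installation report (`pip install --dry-run --report`, available since pip 22.2), whose `requested` field marks the packages named on the command line. This sketch assumes `jq` is available, and the messages are illustrative only:

```sh
#!/bin/sh
# resolve without installing; the JSON report goes to stdout
extras=$(python -m pip install --quiet --dry-run --report - "$@" \
  | jq -r '.install[] | select(.requested != true) | .metadata.name')
if [ -n "$extras" ]; then
  echo "error: the resolver wants to install/downgrade packages you did not request:" >&2
  echo "$extras" >&2
  exit 1
fi
# everything resolved was explicitly requested; do the real install
exec python -m pip install "$@"
```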
Alternative Solutions

`--no-deps` prevents any extra packages from being installed, but throws the baby out with the bathwater by doing no checking at all. It is preferable to catch issues with changing or conflicting dependencies at install time rather than at runtime.

Additional context
I've been using a version of the patch above locally for about a year and it has been very useful.
How I originally discovered the need for this was in the lead-up to numpy 2.0. I wanted to test which projects were actually broken with numpy 2, so I built an environment with it and started installing things that depend on numpy and had followed numpy's pinning guidance, and pip helpfully downgraded my numpy to a 1.x version, defeating my goal (I went a good while before I noticed this and was very happy with how few issues there were 🤦🏻).
The proposed feature is what I use to catch it when things like this happen again (mostly due to projects that, in my view, cap too hard); for my use case I would rather know and deal with it.