Open leileigong opened 5 years ago
True, I am also facing this problem, which is annoying. The only way to make it work is to write the missing packages into the Pipfile with system markers.
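For instance, a marker-qualified entry in the Pipfile might look like this (package names here are purely illustrative; the `markers` key is pipenv's documented way to attach a PEP 508 marker):

```toml
[packages]
# Hypothetical platform-specific dependencies. The marker keeps the entry
# in the Pipfile even when the lock is generated on a different platform.
pywin32 = {version = "*", markers = "sys_platform == 'win32'"}
uvloop = {version = "*", markers = "sys_platform == 'linux'"}
```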
The problem is that the resolver can't know all dependencies when the resolution is performed on a particular platform, which makes the whole environment reproducible only on the same platform. I will mark it as an enhancement for the future.
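For what it's worth, the markers themselves can be evaluated for a foreign platform: the `packaging` library (vendored by pip) accepts an explicit environment. A small sketch:

```python
from packaging.markers import Marker

marker = Marker('sys_platform == "win32"')

# Evaluate against the current interpreter's environment...
print(marker.evaluate())

# ...or against an explicitly supplied (foreign) environment.
print(marker.evaluate({"sys_platform": "win32"}))   # True
print(marker.evaluate({"sys_platform": "linux"}))   # False
```

This only covers markers that are already declared, though; it does not help when a package's setup.py computes its dependency list dynamically at build time on the target platform, which is the root of the problem described above.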
Can pipenv generate a differently named Pipfile.lock per platform? Such as Pipfile_win.lock for the Windows platform and Pipfile_mac.lock for the macOS platform.
There is also a similar use case for GPU/non-GPU systems even within the same platform, and I'm not sure if it fits here. At the moment, we install GPU-targeted packages manually over the installation from Pipfile.lock. There are a lot of scientific packages which have different builds depending on whether you want them to use the GPU or not. So we end up with a CPU-targeted container image and a hacky GPU-targeted one.
Can pipenv generate a differently named Pipfile.lock per platform? Such as Pipfile_win.lock for the Windows platform and Pipfile_mac.lock for the macOS platform.
The way pipenv insists on selecting the latest versions all the time makes that solution problematic. If the lock files aren't updated for all platforms at the same time, developers on different platforms will likely end up with different versions, which makes the whole idea of using Pipenv in the first place questionable.
By the way, I have suggested support for minimal version selection in pipenv, which might help increase the predictability of installs (https://github.com/pypa/pipenv/issues/3701), even though there are still no guarantees when dependencies are selected dynamically during setup.
I have only written Python packages with simple dependencies that can be declared as a list in setup.py/.cfg, so I'm not really aware of what the detailed problems are here...
The only reliable way I could see this working is if Pipenv can know that all dependencies are fully declared in setup.cfg. Something along the lines of named "gpu"/"nogpu" sections, like the way extras_require sections work (https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras), which could then be selected during install. This would allow pipenv to also record the gpu/nogpu sections into its lock file and understand how to merge/update/read parts of the lock file. This is of course only viable as a long-term solution, which I believe would require adding some features to setuptools and forcing packages to declare that they are specifying a fully declarative list of dependencies. I cannot really tell if this already exists or not, because finding and reading the correct PEPs quickly is also not easy.
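Concretely, the extras mechanism linked above already lets a package declare such named dependency groups in setup.cfg (the package and extra names below are just illustrative):

```ini
[options.extras_require]
# Two mutually exclusive build flavours, selected at install time,
# e.g. "pip install mypackage[gpu]" (hypothetical package name).
gpu =
    tensorflow-gpu
nogpu =
    tensorflow
```

What is missing for the locking story sketched here is a way for pipenv to record which extra was chosen and to resolve the other branches without installing them.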
I've had loads of problems with the unpredictable dynamic behavior of Python dependency trees over the years, and the problems become even more pronounced when pipenv is used.
I'm not sure what message pipenv is sending when it chooses to vendor dependencies instead of using regular dependencies or source packages. It feels like pipenv itself doesn't trust the state of Python packaging enough to use it, which is a slightly troublesome fact on its own. In many ways pipenv seems like a project that was rushed a little too fast instead of first getting to the bottom of the root causes of problems with Python packages and dependencies, and as a result we now have a large and complicated system that still only almost works.
I have a Python application that runs on multiple platforms (Windows, Linux, macOS). I only want to manage one Pipfile and add it to git control, but the Pipfile.lock generated by pipenv install may be different on different platforms. So it seems that Pipfile.lock should not be controlled by git! For example, Pipfile content:
I use pipenv graph to get insight into my dependencies. Dependencies on Windows:
Dependencies on Linux:
So the Pipfile.lock files on Windows and Linux are different. How could I solve this?