ionelmc / cookiecutter-pylibrary

Enhanced cookiecutter template for Python libraries.
BSD 2-Clause "Simplified" License

Simpler fix for the cffi version mismatch and backend problems #209

Closed: ionelmc closed this 4 years ago

ionelmc commented 4 years ago

So it turns out that I enabled system site packages (for speed) and that caused problems when building the PEP 517 wheels (since pip doesn't account for that option, it reinstalls cffi and you get the version mismatch). The UnavailableBackend issue is gone too (not sure what actually caused it, but I suspect it's a hidden cffi mismatch error).
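
For anyone hitting this later, a quick way to confirm that an environment has the mismatch is to compare the two versions directly. A small diagnostic sketch (on PyPy, _cffi_backend is built into the interpreter, so it may have no __file__):

```python
# Diagnostic sketch: cffi and its compiled _cffi_backend extension must agree
# on their version; with system site packages visible, the two imports can
# resolve to different installations.
import cffi
import _cffi_backend

print("cffi:          ", cffi.__version__, cffi.__file__)
print("_cffi_backend: ", _cffi_backend.__version__,
      getattr(_cffi_backend, "__file__", "built into the interpreter"))
assert cffi.__version__ == _cffi_backend.__version__, "cffi version mismatch"
```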

I've switched the virtualization to LXD to get fast builds again.

Passing builds with publishing disabled here: https://travis-ci.org/github/ionelmc/cookiecutter-pylibrary/builds/740417786 https://travis-ci.org/github/ionelmc/cookiecutter-pylibrary/builds/740360626

Closes #206. Closes #203. Closes #202. Closes #204. Closes #207. Closes #208.

dHannasch commented 4 years ago

--sitepackages caused problems when building the PEP 517 wheels (since pip doesn't account for that option, it reinstalls cffi and you get the version mismatch). The UnavailableBackend issue is gone too (not sure what actually caused it but I suspect it's a hidden cffi mismatch error).

I'd like to figure out a minimal example at https://github.com/dHannasch/PyPy-CFFI to raise this as an issue, but I'm not really following what the problem is.

I mean, the bit where BackendUnavailable might appear due to a cffi version mismatch isn't too surprising; there's a known issue where BackendUnavailable can appear as a catch-all error: https://github.com/pypa/pep517/issues/45
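
The mechanics behind that catch-all behavior are easy to see in a simplified sketch of how a PEP 517 front end loads the backend named in pyproject.toml (illustrative names, not the actual pep517 code): any ImportError raised while importing the backend, whatever its real cause, gets reported as the backend being unavailable.

```python
import importlib

class BackendUnavailable(Exception):
    """Raised when the named build backend cannot be imported."""

def load_backend(spec):
    # spec comes from pyproject.toml, e.g. "setuptools.build_meta"
    module_name, _, object_path = spec.partition(":")
    try:
        obj = importlib.import_module(module_name)
    except ImportError as exc:
        # Any import failure inside the backend (a stale setuptools, a cffi
        # version mismatch raised at import time, ...) lands here, so the
        # reported error hides the real cause.
        raise BackendUnavailable(str(exc))
    for attr in filter(None, object_path.split(".")):
        obj = getattr(obj, attr)
    return obj
```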

The thing with https://github.com/ionelmc/cookiecutter-pylibrary/issues/203 was (I thought) that import cffi got the new version of cffi but import _cffi_backend got the old version.

(I previously thought the backend needed to be installed separately, but now that I see you installing cffi, I went and looked, and I see that "CPython includes its own copy to avoid relying on external packages" (https://cffi.readthedocs.io/en/latest/installation.html). So I shouldn't have been so confused about the python-cffi package not being installed; what's confusing now is that the python-cffi package even exists... it must be old.)

I'm not sure what it would mean for pip to not account for --sitepackages. As far as I know, --sitepackages just means the environment isn't isolated from the existing installed packages, so if pip tries to install cffi, pip can notice normally that cffi is already installed... but pip should still notice if the version specifier requires something higher than the installed version of cffi.
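
(The version check itself is straightforward; the vendored 'packaging' library that pip uses decides it roughly like this, with made-up versions for illustration:)

```python
from packaging.requirements import Requirement
from packaging.version import Version

req = Requirement("cffi>=1.14")    # what the build declares it needs
installed = Version("1.13.2")      # hypothetical older copy from the system site
print(installed in req.specifier)  # False, so pip should install a newer cffi
```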

Based on what I'm seeing trying to pip-install manually (ModuleNotFoundError: No module named 'setuptools.build_meta'), I suspect it has nothing directly to do with cffi or _cffi_backend. I think having any nontrivial pyproject.toml causes the crash. In fact, at https://github.com/dHannasch/PyPy-CFFI I just replaced cffi with the package 'minimal' and it still crashes.
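
That would fit the trigger being pyproject.toml itself: pip picks the isolated PEP 517 code path based (roughly) on the presence of a [build-system] table, no cffi required. A simplified sketch of that decision, not pip's actual code:

```python
from pathlib import Path

import tomllib  # Python 3.11+; older interpreters can use a third-party TOML parser

def uses_pep517(project_dir):
    # The mere presence of a build backend in pyproject.toml routes the
    # build through pip's isolated PEP 517 path, whatever the backend is.
    pyproject = Path(project_dir) / "pyproject.toml"
    if not pyproject.exists():
        return False  # legacy setup.py code path
    data = tomllib.loads(pyproject.read_text())
    return "build-backend" in data.get("build-system", {})
```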

Another curious thing: When I manually create an env with virtualenv --system-site-packages, it crashes. When I manually create an env with venv --system-site-packages, it works fine.
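
(If anyone wants to chase down that difference, a quick start is to dump what each environment actually exposes; run this inside both env types and diff the output:)

```python
import sys

# virtualenv and venv wire up site handling differently, which changes what a
# PEP 517 build subprocess ends up seeing; comparing sys.path across the two
# environment types usually shows where a stray package is coming from.
print("executable: ", sys.executable)
print("prefix:     ", sys.prefix)
print("base prefix:", getattr(sys, "base_prefix", sys.prefix))
for entry in sys.path:
    print("path:", entry)
```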

I think we're actually running into https://github.com/pypa/pip/issues/6264, though that issue is hard to parse, so I'm not sure.

ionelmc commented 4 years ago

So I guess you can look at it like this:

~1 year ago: the previous setup worked fine.

Now: pip has a new way of building the package using "overlays" - essentially a system-site-package'd virtualenv over whatever you had. That had some implications:

--system-site-packages was only good because it made things fast. But now, with wheel caching and fast installs of wheels, it's not such a necessary thing anymore.
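
(To make "overlay" concrete: roughly the shape of what pip does for a PEP 517 build, sketched here with made-up helpers rather than pip's real internals. The build requirements go into a private directory that is prepended to the build subprocess's import path, while anything importable from the outer interpreter still shines through underneath.)

```python
import os
import subprocess
import sys
import tempfile

def build_in_overlay(srcdir, build_requires):
    # Install the declared build requirements into a throwaway prefix...
    overlay = tempfile.mkdtemp(prefix="pip-build-overlay-")
    subprocess.check_call([sys.executable, "-m", "pip", "install",
                           "--target", overlay] + list(build_requires))
    # ...and expose it to the build subprocess ahead of everything else.
    # With system site packages enabled, the outer cffi/_cffi_backend stay
    # importable underneath the overlay, which is how two copies collide.
    env = dict(os.environ, PYTHONPATH=overlay)
    subprocess.check_call([sys.executable, "setup.py", "bdist_wheel"],
                          cwd=srcdir, env=env)
```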

ionelmc commented 4 years ago

Or maybe Travis changed something that affected import paths in the last year, who knows. I don't think it matters that much. What matters is to avoid adding too many workarounds in the project template, since those will become a burden later on :-)

ionelmc commented 4 years ago

Regarding Graviton, I'd like to try it first on python-hunter (I already have ARM builds there). ARM is a surprising platform (e.g., my Raspberry Pi is way slower than virtualized ARM on my laptop).

dHannasch commented 4 years ago

tox --sitepackages obviously doesn't speed up python-nameless, since python-nameless doesn't have any dependencies in the first place.

But for scientific packages with deep trees of dependencies on differential-equations packages, machine learning, visualization, and so forth, tox --sitepackages can make an enormous difference in testing time. (Especially if you need to test on more diverse architectures that don't always have pre-built wheels. I'm sure you remember the not-long-ago days when your Raspberry Pi didn't have pre-built wheels, for example. People are going to want to keep experimenting with exotica like Graviton.) This is easiest to do with CI that lets you load a custom image, such as CircleCI, but it can also be done on Travis with manual cache-hacking. (Incidentally, why did https://github.com/ionelmc/cookiecutter-pylibrary/commit/55b78fe1e38a469c7a27bd47e03eecf88f8b183f remove the cache by default?) (Of course, messing with caches across builds can be fiddly, so it's still most useful on runners that let you specify an image directly, like CircleCI.)

https://circleci.com/docs/2.0/circleci-images/ https://circleci.com/docs/2.0/custom-images/ https://circleci.com/blog/creating-a-custom-docker-image-to-run-your-ci-builds/

Obviously not all packages created with the cookiecutter will want or need to use --sitepackages, but what I don't want is for some seemingly-minor change to the cookiecutter to silently break --sitepackages later. That's why I think it would be preferable to attempt to test tox --sitepackages.

(Obviously sometimes things do break --sitepackages; the current situation is an example. If the underlying pip bug doesn't get fixed for a long time, then I'd like to put in some kind of warning or something that tox --sitepackages can fail on PyPy on packages with a pyproject.toml. But most importantly I want to continue testing --sitepackages on the cases that aren't currently broken, so if some future change does break them, we'll be alerted.)

(I'm not too worried about warning people right this moment, because if I understand the problem correctly, it doesn't even have anything to do with PyPy directly. The problem is simply that pip's overlay handling reaches outside Travis's virtualenv, and Travis's PyPy installation has an old setuptools; so the actual crash will spontaneously vanish whenever Travis next upgrades their underlying PyPy installation, even if the pip build-isolation bug isn't fixed first, and it will never be a problem unless someone has an old version of setuptools in their underlying installation for some reason. But if I do figure out exactly what is causing the problem, I do want to put in a warning for people in the same situation.)
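
(For anyone in that situation, a quick probe tells you whether the outer setuptools is the culprit; setuptools only grew the PEP 517 backend module around version 40:)

```python
# Probe for the suspected failure mode: an outer installation whose
# setuptools predates setuptools.build_meta (added around setuptools 40).
import setuptools

print("setuptools", setuptools.__version__)
try:
    import setuptools.build_meta  # noqa: F401
    print("build_meta importable: this setuptools can serve as a PEP 517 backend")
except ImportError:
    print("too old: PEP 517 builds that reach this setuptools will crash")
```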

pip has a new way of building the package using "overlays" - essentially a system-site-package'd virtualenv over whatever you had.

Random aside: where did you read about overlays being essentially a system-site-package'd virtualenv? I've been looking for solid information on how overlays work and haven't been able to find it. It seems that, however they work, overlays are in fact the source of the problem. (--system-site-packages appears to work in every situation except when pip-building in an "isolated" overlay.)

ionelmc commented 4 years ago

Ok, fine. We can keep the system-site-packages option. At least now we'll know where to look first for problems.

Not sure if I ever read about pip's overlays specifically. I did read pip's source code in the past, and I reimplemented virtualenv at one point, though. Unfortunately that rewrite wasn't successful (no one was interested in reviewing it or ceding maintenance of the project); not sure how Bernat made it with his way more complicated rewrite.

ionelmc commented 4 years ago

AppVeyor is taking ages again; I'll just pull the trigger and fix later.