ivsanro1 opened this issue 2 years ago
I am facing the same issue. Python version is 3.10 and I am running Arch Linux. I can't downgrade to lower versions of setuptools since I am facing different issues with them - the package name comes as UNKNOWN (see this issue). Currently I am managing by manually adding the Python source directory to path. Would really appreciate a fix for this issue. Thanks!
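For illustration, the kind of manual workaround I mean is roughly the following (the path and package name are just placeholders):
export PYTHONPATH="$HOME/repos/myproject/src:$PYTHONPATH"
python -c 'import mypkg; print(mypkg.__file__)'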
Hi guys, thank you for reporting this. Could you try again with the latest version of pip/setuptools? I did the following test and everything seems to be working fine:
cd /tmp
git clone https://github.com/ivsanro1/gft-ner
cd gft-ner
virtualenv -p py37 .venv
.venv/bin/python -m pip install -U pip setuptools
.venv/bin/python -m pip install -e .
cd ..
gft-ner/.venv/bin/python -c 'import ner; print(ner.__version__)' # => 1.0.0
I also can run this without a virtual environment (in a container):
> docker run --rm -it python:3.7.13-alpine ash
apk add --update git
python -m pip install -U pip setuptools
cd /tmp
git clone https://github.com/ivsanro1/gft-ner
cd gft-ner
pip install -e .
cd ..
python -c 'import ner; print(ner.__version__)' # => 1.0.0
Here I am purposefully avoiding the global pip installation on a Debian-derived system, because pip might have some problems selecting versions of setuptools there (see pip's issue 6264). The latest version of setuptools, 62.3.2, seems to do the trick. Thanks!
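If it helps, here is a quick sanity check of which setuptools version an interpreter actually sees (for a plain setup.py project, a legacy editable install typically uses the setuptools that is already installed in the environment):
python -m pip show setuptools | head -n 2   # Name / Version as seen by this interpreter's pip
python -c 'import setuptools; print(setuptools.__version__)'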
It still fails in my Ubuntu 18.04 environment. Maybe it's the way in which I'm installing Python3?
Here's my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update
RUN apt install -y git
################## INSTALL PYTHON & PIP ##################
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get install -y python3.7 python3.7-dev
# Must be run before installing pip, so that pip is installed for Python 3.7
RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1
RUN apt-get install -y python3-pip
RUN ln -s /usr/bin/pip3 /usr/bin/pip
##########################################################
RUN python3 -c 'import sys; print(f"\n\n\nPYTHON VERSION: {sys.version}\n\n\n")'
# From here it's basically @abravalheri's steps adapted to the environment
RUN python3 -m pip install -U pip setuptools
WORKDIR /tmp
RUN git clone https://github.com/ivsanro1/gft-ner
WORKDIR /tmp/gft-ner
RUN pip install -e .
WORKDIR /
RUN echo `python3 -c 'import ner; print(f"\n\n\nLIBRARY IS INSTALLED - VERSION: {ner.__version__}\n\n\n")'`
Output (only error):
#19 [16/17] RUN echo `python3 -c 'import ner; print(f"\n\n\nLIBRARY IS INSTALLED - VERSION: {ner.__version__}\n\n\n")'`
#19 sha256:90ba3264713d1a6c2a43cb2819aeadc3edce8e9a85c7767238c832a740fdb80c
#19 0.273 Traceback (most recent call last):
#19 0.273   File "<string>", line 1, in <module>
#19 0.273 ModuleNotFoundError: No module named 'ner'
#19 0.274
#19 DONE 0.3s
But still, even if my Python 3 installation is the issue, I did not have this problem with setuptools<60. This can be checked very easily by changing this line in the Dockerfile, from:
RUN python3 -m pip install -U pip setuptools
to:
RUN python3 -m pip install -U pip "setuptools<60"
The output with setuptools<60:
#20 [16/17] RUN echo `python3 -c 'import ner; print(f"\n\n\nLIBRARY IS INSTALLED - VERSION: {ner.__version__}\n\n\n")'`
#20 sha256:4ba3cc4faffeddd9175e6a1c21466917b409ec493eda764084b69f0b41983ab7
#20 0.269 LIBRARY IS INSTALLED - VERSION: 1.0.0
#20 DONE 0.3s
Maybe it's the way in which I'm installing Python3?
@ivsanro1, that is a possibility... To be completely sincere with you, I don't know if the workaround you are using for installing pip works 100%. The deadsnakes repository does not seem to provide any deb package for pip... You can however use the ensurepip module (which gets installed with python3.7-venv) to install pip...
I managed to get things working on the selected version of Ubuntu with the following commands:
> docker run --rm -it ubuntu:18.04 bash
apt-get update
apt-get install -y software-properties-common
add-apt-repository -y ppa:deadsnakes/ppa
apt-get update
apt-get install -y git python3.7 python3.7-dev
apt-get install -y python3.7-venv
# python3.7-venv will provide the 'ensurepip' package as a side effect,
# which in turn is used to install pip
python3.7 -m ensurepip
# In Debian-based systems, there is a well-known issue with pip
# which makes old versions of setuptools leak into the build
# environment (see pypa/pip#6264), so let's uninstall and reinstall
# setuptools just in case...
python3.7 -m pip uninstall -y setuptools
python3.7 -m pip install -U setuptools
git clone https://github.com/ivsanro1/gft-ner /tmp/gft-ner
cd /tmp/gft-ner
python3.7 -m pip install -e .
python3.7 -c 'import ner; print(ner.__version__)'
The output is 1.0.0, and python3.7 -m pip list results in:
python3.7 -m pip list
Package Version
------------------- ---------------
pip 22.0.4
PyGObject 3.26.1
python-apt 1.6.5+ubuntu0.7
setuptools 62.3.2
unattended-upgrades 0.1
Thank you @abravalheri. This way it works. It definitely looks like it's some issue with my installation, because it works in every other environment I've tried. I will close the issue.
Thank you for your time
@ivsanro1 the Debian patches for Python (also present in Ubuntu and deadsnakes) are very complicated and they heavily customize the installation directories for setuptools... That is why I demonstrated first with a virtual environment and then with Alpine.
I am not 100% sure that I am not forgetting anything in the example above, but just to be on the safe side I always recommend using a virtual environment, even if you are using containers (there is some debate in the community about whether that is really necessary, but I believe everyone agrees that using a virtual environment inside a container does not hurt).
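For reference, a rough sketch of that recommendation on the same Ubuntu image (using the deadsnakes packages as before; details may need adjusting):
> docker run --rm -it ubuntu:18.04 bash
apt-get update
apt-get install -y software-properties-common
add-apt-repository -y ppa:deadsnakes/ppa
apt-get update
apt-get install -y git python3.7 python3.7-venv
python3.7 -m venv /opt/venv          # the venv brings its own pip via ensurepip
/opt/venv/bin/python -m pip install -U pip setuptools
git clone https://github.com/ivsanro1/gft-ner /tmp/gft-ner
/opt/venv/bin/python -m pip install -e /tmp/gft-ner
cd / && /opt/venv/bin/python -c 'import ner; print(ner.__version__)'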
@abravalheri I think you're right and it's best to use a virtual env even inside of a docker container. I will be using a virtual environment for now.
By the way, I think I rushed to close the issue yesterday: in your example it works because you're importing the library while your working directory contains the ner folder, but if you leave that directory (e.g. go to /), it will still report that it cannot find the installed library. In other words, in the last example you provided it would work even if you did not install the library, because the package is in the working directory when you call Python from inside gft-ner.
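A quick way to double-check that point (it just repeats the import from a directory that does not contain the source tree):
cd /tmp/gft-ner && python3 -c 'import ner'   # may succeed just because ./ner is importable from the CWD
cd / && python3 -c 'import ner'              # succeeds only if the editable install really works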
Since I found that the problem probably lies in the way I'm installing Python and pip, and installing from the deadsnakes repo is quite common to find on Google (in fact, it is the only way I've read about), I thought there would be more people in the future who could run into this issue.
Also, the fact that this issue only appears from version 60 of setuptools onwards indicates that something changed in setuptools, and maybe the contributors would want to take a look at it.
For now my problem is solved because I will be using virtual environments, but in that environment made of ubuntu:18 with Python installed from the deadsnakes repository, the problem persists.
Thank you very much @ivsanro1 for re-checking that. I forgot to jump outside of the directory in the last example 😝 .
Between v60 and v61 there have been a series of changes imported from pypa/distutils, which might not be compatible with the patches provided by Debian/deadsnakes.
I tried these steps:
> docker run --rm -it ubuntu:18.04 bash
apt-get update
apt-get install -y git wget python3.7 python3.7-dev python3.7-distutils
# I am using the official Debian distribution of python3.7
# to reduce the "surface" for the bug (minimise the moving parts).
# The deadsnakes distribution should not be very different...
wget https://bootstrap.pypa.io/get-pip.py -P /tmp
python3.7 /tmp/get-pip.py
# In Debian-based systems, there is a well-known issue with pip
# which makes old versions of setuptools leak into the build
# environment (see pypa/pip#6264), so let's uninstall and reinstall
# setuptools just in case...
python3.7 -m pip uninstall -y setuptools
python3.7 -m pip install -U pip setuptools
git clone https://github.com/ivsanro1/gft-ner /tmp/gft-ner
cd /tmp/gft-ner
python3.7 -m pip install -e .
Using the following commands we can see that the ner package is installed to /usr/lib/python3.7/site-packages/.
ls /usr/lib/python3.7/site-packages/
# easy-install.pth ner.egg-link # <-- installed in this folder
python3.7 -m site
# sys.path = [
# '/tmp',
# '/usr/lib/python37.zip',
# '/usr/lib/python3.7',
# '/usr/lib/python3.7/lib-dynload',
# '/usr/local/lib/python3.7/dist-packages', # <- `site-packages` is missing
# '/usr/lib/python3/dist-packages',
# ]
# USER_BASE: '/root/.local' (doesn't exist)
# USER_SITE: '/root/.local/lib/python3.7/site-packages' (doesn't exist)
# ENABLE_USER_SITE: True
What seems to be happening here is the following:
1. Debian patches Python's distutils so that packages are installed to dist-packages. We can see that they introduce deb_system and unix_local installation schemes. ~However it seems to be necessary to explicitly pass the --install-layout=deb parameter to tap into this behaviour. This parameter is not available for editable installs and not exactly meant for usage with pip.~ The equivalent site-packages folder is not added to sys.path. ~Maybe this is done on purpose to prevent users installing packages globally (which could be a way Debian maintainers can prevent users accidentally crashing the system beyond repair).~ UPDATE: The patch seems to select the unix_local installation scheme by default, which should map to /usr/local/lib/python3.7/dist-packages and is therefore available on sys.path (there is no attempt to prevent users from installing packages globally).
2. Since version 60, setuptools uses its bundled setuptools/_distutils instead of Python's standard library distutils, by default. Therefore, no patch is applied to the setuptools/_distutils code, and, as a consequence, the packages are installed by default to site-packages.
3. pypa/distutils and pypa/setuptools created a mechanism for Debian to customize the installation layout without the need of patches: they need to add a _distutils_system_mod.py file to the standard library folder.
I don't think there is much we can do from the pypa/distutils/pypa/setuptools side in this issue.
The Debian maintainers seem to be aware that _distutils_system_mod.py is required to interoperate with global installations using the latest versions of setuptools (as they do implement it for Python 3.10).
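One way to see the difference between the patched and the vendored distutils is to compare the installation schemes each copy knows about. This is only a diagnostic sketch (it assumes setuptools >= 60 is installed so that the SETUPTOOLS_USE_DISTUTILS hook is active); the deb_system/unix_local entries should only appear in the Debian-patched copy:
SETUPTOOLS_USE_DISTUTILS=stdlib python3.7 -c 'from distutils.command.install import INSTALL_SCHEMES; print(sorted(INSTALL_SCHEMES))'  # stdlib copy (Debian-patched)
SETUPTOOLS_USE_DISTUTILS=local python3.7 -c 'from distutils.command.install import INSTALL_SCHEMES; print(sorted(INSTALL_SCHEMES))'   # copy vendored by setuptools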
Meanwhile, users can do the following to work around this issue:
SETUPTOOLS_USE_DISTUTILS=stdlib python3.7 -m pip install -e .
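If several commands need the workaround (for example a sequence of RUN steps in a Dockerfile, or a provisioning script), exporting the variable once is equivalent; a sketch, reusing the paths from the example above:
export SETUPTOOLS_USE_DISTUTILS=stdlib
python3.7 -m pip install -e /tmp/gft-ner
python3.7 -m site   # check that the directory used for the install now appears on sys.path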
I believe I have the same issue. I'm on a Mac.
$ pip --version
pip 22.1.2
$ python3 -c 'import site; print(site.USER_SITE)'
~/Library/Python/3.9/lib/python/site-packages
With setuptools 59.8.0, pip install --user -e . installs the egg-link into ~/Library/Python/3.9/lib/python/site-packages (= USER_SITE, OK).
With the latest setuptools (62.3.2), pip install --user -e . installs the egg-link into ~/Library/Python/3.9/lib/python3.9/site-packages (NOT OK).
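For what it's worth, a small diagnostic sketch that prints USER_SITE next to the purelib path of the two candidate user schemes (the second scheme matches the unwanted lib/python3.9 location):
python3 -c 'import site; print("USER_SITE:         ", site.USER_SITE)'
python3 -c 'import sysconfig; print("osx_framework_user:", sysconfig.get_path("purelib", "osx_framework_user"))'
python3 -c 'import sysconfig; print("posix_user:        ", sysconfig.get_path("purelib", "posix_user"))'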
Adding SETUPTOOLS_USE_DISTUTILS=stdlib (from https://github.com/pypa/setuptools/issues/3301#issuecomment-1135646040) fixes it.
Hi @thomie, this issue is Debian specific.
Maybe the issue you are facing is related to the existence of a distutils/distutils.cfg file that is not being read by setuptools? Would that be the case? If so, it is related to https://github.com/pypa/distutils/issues/152
Hello @abravalheri,
Thank you for your response.
I don't think my problem is related to (the not reading of) distutils.cfg, because that file only contains absolute paths (install.prefix, build_ext.include_dirs, build_ext.library_dirs).
The problem I'm facing is that the egg-link gets installed into "{userbase}/lib/python{py_version_short}/site-packages" instead of "{userbase}/lib/python/site-packages".
I can reproduce the problem with a fresh brew install python@3.9. I cannot reproduce the problem with python@3.10 (and upgrading setuptools to the latest 62.3.2). On a whim I tried copying the 3.10 sysconfig.py to the 3.9 installation, and that indeed fixed the issue for 3.9. There are a lot of changes in that file, and I'm not going to dig any deeper.
So I think the problem has been fixed, and it was most likely not an issue with setuptools, but rather something to do with sysconfig.py or the brew python@3.9 formula.
Thank you very much @thomie for the update.
So I think the problem has been fixed, and it was most likely not an issue with setuptools, but rather something to do with sysconfig.py or the brew python@3.9 formula.
Unfortunately it is a bit complicated to trace back and isolate what is actually causing the problem... I believe that the maintainers of the python@3.10 formula changed the way they customize the installation locations and no longer rely on distutils.cfg.
Given the following observation:
I don't think my problem is related to (the not reading of) distutils.cfg, because that file only contains absolute paths (install.prefix, build_ext.include_dirs, build_ext.library_dirs).
I am thinking that this can actually be related to setuptools not being able to read the install.prefix static path in the distutils.cfg file (setuptools will try to read .../site-packages/setuptools/_distutils/distutils.cfg while the real file exists in .../site-packages/distutils/distutils.cfg). This would be exactly the same problem as the one described in pypa/distutils#152.
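To check that hypothesis, one could print where each copy of distutils would look for a global distutils.cfg (a sketch; distutils searches for that file next to the distutils package itself, and the first command forces the stdlib copy so that __file__ points at it):
SETUPTOOLS_USE_DISTUTILS=stdlib python3 -c 'import distutils, os; print(os.path.join(os.path.dirname(distutils.__file__), "distutils.cfg"))'
python3 -c 'import setuptools._distutils as d, os; print(os.path.join(os.path.dirname(d.__file__), "distutils.cfg"))'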
It is now April of 2023 and I am running into this issue. I am using the workaround of creating a venv inside my container, which is undesirable in my case.
In my case I've tracked that my package with the new setuptools was added to:
Whereas the old one added it to:
But sys.path does not load the paths from /usr/lib, so it gets lost. So setuptools does not act in parallel with whatever is starting Python.
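One way to confirm that kind of mismatch (mypkg is just a placeholder for the actual package name):
pip show -f mypkg | grep -i '^location'               # where the new setuptools put the package
python3 -c 'import sys; print("\n".join(sys.path))'   # what the interpreter actually searches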
Is there any timeline on when this might be resolved? Thanks!
Hi @sei-amellinger, have you considered the information available in https://github.com/pypa/setuptools/issues/3301#issuecomment-1135624073 and in https://github.com/pypa/setuptools/issues/3301#issuecomment-1135646040 ?
Is there any timeline on when this might be resolved?
If I understood what you are describing correctly, there seems to be no action point on the setuptools side.
Thanks for the quick response!
When reading this bug report I saw issue 3625 as deprecating the use of "SETUPTOOLS_USE_DISTUTILS=stdlib" and interpreted that as a reason not to try it. So I didn't try it.
So on your suggestion I went back and tried it (#3301) and it DOES work for me. (Woot!) How long is this going to be a viable workaround if the plan is to deprecate that switch? Any advice?
Thanks!
How long is this going to be a viable workaround if the plan is to deprecate that switch? Any advice?
Hi @sei-amellinger, setuptools is driven by volunteer efforts, which means that we are limited in terms of commitments. Realistically speaking, we are going to support SETUPTOOLS_USE_DISTUTILS in a best-effort manner.
We do, however, bump the "major" version in a release every time we know there is a breaking change or a "potential" breaking change. This allows users to "freeze" their setup if they want to be absolutely sure about backward compatibility. If you use pypa/pip or pypa/build, you can achieve that with pyproject.toml:
[build-system]
requires = ["setuptools==67.6.1"] # or less dramatically "setuptools~=67.6"
build-backend = "setuptools.build_meta"
Please consider however that a virtual environment is undoubtedly the safest approach. It reduces the influence of Debian's heavy patching and at the same time avoids the "dependency-hell problems" that might appear with "frozen/capped dependencies".
Please consider however that a virtual environment is undoubtedly the safest approach. It reduces the influence of Debian's heavy patching and at the same time avoids the "dependency-hell problems" that might appear with "frozen/capped dependencies".
That's not always feasible; at the least, I see these use cases:
In both cases, Debian-specific patches will interfere, and it would really be a wrong move to ignore such a widespread OS instead of just keeping things backward compatible...
python -m pip install -e is basic and it just must work as expected...
I seem to have a similar problem under macOS. It seems to only create the *.dist-info folder when installing via pip install -e; building and installing the wheel afterwards works fine.
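A rough way to compare the two code paths (mypkg is a placeholder; pip show -f just lists whatever each install registered):
pip install -e . && pip show -f mypkg                                 # editable route
pip uninstall -y mypkg
pip wheel . -w dist && pip install dist/*.whl && pip show -f mypkg    # wheel route, for comparison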
setuptools version
Since setuptools==60.0.0
Python version
3.7.13
OS
Ubuntu 18.04
Additional environment information
No response
Description
I have a Python library locally, which I'm developing. Therefore, I'm interested in installing it with pip install -e, as I've always done, to develop it while it's installed.
However, today I updated setuptools from 46.1.3 to the newest (62.1.0) and I realized that even though pip install -e /path/to/my/lib reports no error in its output, I cannot import the library in Python.
I've tracked down the version that broke this and it looks like it's 60.0.0.
Install setuptools==59.8.0, install the library with pip install -e, and try to import the library in Python: it works with setuptools==59.8.0.
Now let's do the same with setuptools==60.0.0: install the library with pip install -e and try to import the library in Python (this time the import fails).
Expected behavior
The library installed via pip install -e should be able to be imported, like with setuptools < 60.
How to Reproduce
1. pip install setuptools==60.0.0
2. git clone https://github.com/ivsanro1/gft-ner
3. pip install -e ., e.g. pip install -e ~/repos/gft-ner
4. python3 -c "import ner"
Output