pypa / pip

The Python package installer
https://pip.pypa.io/
MIT License

Add an opt-out for the “running as root” warning #10556

Closed hholst80 closed 2 years ago

hholst80 commented 3 years ago

What's the problem this feature will solve?

I want to be able to suppress the warning pip prints during package installation in a root environment:

Running pip as the 'root' user can result in broken permissions and conflicting behaviour ..

Describe the solution you'd like

I want to be able to disable this warning through an environment variable like

env PIP_DISABLE_ROOT_WARNING=1 pip install flask

Alternative Solutions

No in-tool workaround is known to me.

Additional context

We are all adults here, I know what I am doing, and I do not want to see a warning every time I run my build system. Let me disable the warning by setting an environment variable. I do not want my users to think there is anything wrong with my system just because the pip tool spews out indiscriminate warning messages.


potiuk commented 3 years ago

One other comment. My friend (whom I asked for help and who actually reviewed the thread and answered some of the questions I asked) told me that my intuition about PATH vs. activation was also right. You need to "activate" the environment for the end users rather than just adding the venv's bin directory to the PATH if you (like we do in Airflow) use many external packages.

For many packages it's not enough to add the venv's bin directory to PATH, because some scripts that would normally work after `. activate` will not work with only the PATH changed. This was the case in Airflow 1.10 and we fixed it in Airflow 2, but many - even popular and actively developed - packages still will not work if the venv is not activated.

A very prominent example: there is a recommendation to dbt (https://github.com/dbt-labs/dbt-core/issues/4035#issuecomment-941674931) to switch the way they create the dbt script - precisely to be able to run it without venv activation. dbt is super popular and has very good integration with Airflow (and plenty of people use Airflow to orchestrate dbt jobs). Unfortunately, it would mean that blindly switching to venv without making sure to fully activate the virtualenv would lead to broken integration with dbt.

So it looks like setup_venv.sh in /etc/profile.d is absolutely necessary, and just adding the venv's bin directory to the PATH does NOT solve the problem of using venv in the image. I will do some more testing; I hope it still means that all the installation commands I run during image building work fine with just PATH.

But this is a clear sign that my question "will it work with just PATH" was really important to ask.
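
For readers following along, the contrast between the two approaches looks roughly like this in a Dockerfile (a minimal sketch; the /opt/venv path is illustrative, while setup_venv.sh is the profile script mentioned above):

# Option A: only put the venv first on PATH - enough for many tools, but not all:
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="/opt/venv/bin:$PATH"

# Option B: a profile script so that login shells fully "activate" the venv:
RUN echo '. /opt/venv/bin/activate' > /etc/profile.d/setup_venv.sh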

potiuk commented 3 years ago

OK. I know why our tests fail. And I have a very concrete technical question that maybe the pip experts following this will be able to help with. This is yet another "deep internal of how virtualenv and pip interact" and another example where running from a venv in the Airflow image is different than running with --user.

Context:

We have PythonVirtualenvOperator, which prepares a virtualenv for executing a Python callable that the user chooses, with the extra requirements that the user chooses (which might override or replace the original versions of dependencies). This is one of the ways users can combat the dependency hell of Python code executed in Airflow. Since we have > 500 dependencies, there is a chance that some of those dependencies will need to be uninstalled/upgraded for specific user code.

The way we do that: we allow the user to use PythonVirtualenvOperator, and this operator creates and activates a virtualenv for that particular task, installs (with pip) the user's requirements, and uses either pickle (fast but limited) or dill (slower but able to serialize more kinds of callables) to serialize the callable to execute. Then we execute this script: https://github.com/apache/airflow/blob/main/airflow/utils/python_virtualenv_script.jinja2 (preprocessed with jinja) to deserialize the callable and execute it.

We use the `/.venv/bin/python3 -m virtualenv /tmp/venv4ug22dzm --system-site-packages` command to create it (when the user chooses to bring in system site packages - this is one of the flags you can specify).
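
Roughly, the sequence described above boils down to something like this (a simplified sketch; the requirement name and file paths are illustrative):

/.venv/bin/python3 -m virtualenv /tmp/venv4ug22dzm --system-site-packages
/tmp/venv4ug22dzm/bin/pip install dill 'some-user-requirement==1.2.3'
/tmp/venv4ug22dzm/bin/python /tmp/script.py   # deserializes the callable and executes it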

Problem

The problem with virtualenv creation in this case is that it actually uses the "system" versions of packages from the bare installation to create the new virtualenv - rather than the "airflow" ones, which contain dill, airflow, and all the other 500 packages we have installed. This is a bit implicit and apparently undocumented behaviour of the PythonVirtualenvOperator on our side (and I will bring it up for discussion in Airflow on how best to proceed), because it seems the operator will behave differently depending on whether Airflow is installed in a venv or not. So this is an "Airflow" problem, not a pip or virtualenv one.

However, there is a different problem here which I need some expert help on. We use the latest released Debian buster with pip 21.2.4. During my change, I decided to bump pip for Airflow to 23.1, released a few days ago (it seems to work well so far! good job!), so I ran `pip install --upgrade pip==23.1` (in my virtualenv). However, when I create a virtualenv, the "system" pip is used, not the virtualenv one.

So the PythonVirtualenvOperator command that creates the virtualenv turns into this:

INFO  [airflow.utils.process_utils] Executing cmd: /.venv/bin/python3 -m virtualenv /tmp/venv4ug22dzm --system-site-packages --python=python3
INFO  [airflow.utils.process_utils] Output:
INFO  [airflow.utils.process_utils] created virtual environment CPython3.6.15.final.0-64 in 427ms
INFO  [airflow.utils.process_utils]   creator CPython3Posix(dest=/tmp/venv4ug22dzm, clear=False, no_vcs_ignore=False, global=True)
INFO  [airflow.utils.process_utils]   seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/root/.local/share/virtualenv)
INFO  [airflow.utils.process_utils]     added seed packages: pip==21.2.4, setuptools==58.1.0, wheel==0.37.0

As you can see, pip 21.2.4 is still used in the newly created environment; however, I would like it to be the same as the one used in Airflow. Actually, I would like to use the same pip, setuptools, and wheel as Airflow is using in the image (in our image we specify those versions and upgrade them semi-automatically - but for the sake of reproducibility and avoiding surprises we make sure that they are always pinned to fixed versions in our images).

Question: Maybe you can help me - is there an easy way to carry the pip/setuptools/wheel versions over from one virtualenv to the other?

So far the only "good" solution I found (besides always manually reinstalling pip, setuptools, and wheel after the new venv is created) is to install those packages at the system or user level as I did before. However, this will again generate the warnings which I wanted to get rid of. Can you think of any other option? Is there another way, that I missed, to carry the pip from one virtualenv into a newly created one?
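
One possibility (a sketch, not something tested against the Airflow setup; the pinned versions are illustrative) is to pin the seed packages when the venv is created - virtualenv accepts explicit versions for its seeder, either as flags or as VIRTUALENV_* environment variables:

/.venv/bin/python3 -m virtualenv /tmp/venv4ug22dzm --system-site-packages \
    --pip 21.3.1 --setuptools 58.1.0 --wheel 0.37.0

# ...or, without changing the command line that the operator builds:
VIRTUALENV_PIP=21.3.1 VIRTUALENV_SETUPTOOLS=58.1.0 VIRTUALENV_WHEEL=0.37.0 \
    /.venv/bin/python3 -m virtualenv /tmp/venv4ug22dzm --system-site-packages

If the requested wheel is not bundled with the installed virtualenv release, virtualenv may need network access to fetch it.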

potiuk commented 3 years ago

Another option for the last question is that we could clone the venv with https://pypi.org/project/virtualenv-clone/ - would love to hear your opinion on this option as well.
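
For reference, its usage is roughly this (a sketch; the paths mirror the ones from the earlier comment):

pip install virtualenv-clone
virtualenv-clone /.venv /tmp/venv4ug22dzm   # copies the source venv, rewriting paths/shebangs in the copy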

potiuk commented 3 years ago

Btw. Pity that virtualenv's --relocatable has not been added to venv. It would have solved most of those problems.

potiuk commented 3 years ago

Looks like virtualenv-clone does the job nicely - I started discussing in Airflow how (and whether) we should fix the current behaviour, but I think I have a good proposal to solve the pip version mismatch issue: when the PythonVirtualenvOperator is run with no Python version specified, and Airflow is installed in a venv itself, my proposal is to clone the Airflow virtualenv as the starting point rather than create a new virtualenv. It will keep the same behaviour whether Airflow is installed at the 'system' level or in a virtualenv.

potiuk commented 2 years ago

Just FYI, I will give up on that one. While trying to make the venv work for us at Airflow, I found that installing Airflow in the image in a venv is a bad idea, as there is really no way to make PythonVirtualenvOperator (and the dynamically created venvs which we need) behave the way we want without introducing serious incompatibilities.

I got to the point where I stopped believing that "venv is the one solution to rule them all", and recommending it as a "blanket" only way is simply a bad idea. There will be cases where it is not good.

I offered my help in clarifying PEP 668 to explain, list and maybe even lend a helping hand to the users who will struggle with it in the future. I thought that was what was suggested as where I could help, but my proposals to help (which I made in the PEP 668 discussion) went unanswered even after I followed up and asked for advice.

To me this really looks like a "we do not want any of your help" statement. So be it. What I am simply going to do is give up on the venv route, and instead of helping to clarify the vague message in PIP, I will simply add a reassuring message for our users that this warning can be ignored as it is not applicable to our case.

I think others might choose a similar route, and I think that's far easier than convincing the PIP maintainers to clarify the message and provide their users a clearer explanation of it.

potiuk commented 2 years ago

Just for your information: the PR that introduces this message is here: https://github.com/apache/airflow/pull/20238

This is how our builds will look (I am preparing to submit our image to Docker's "Official Image Program" https://github.com/docker-library/official-images), and for it, removing all the warnings (or at least explaining them during the build) is extremely important, because Official Docker Images are maintained and built by the Docker team, and giving them all the "relevant" and valid information is important.

This is how it looks - and it also has a link to the detailed explanation of why we cannot remove the warning and why the Airflow case is not properly handled by the warning:

[screenshot from 2021-12-13: build output showing the warning explanation and link]

Here is the text of the detailed explanation - if pip maintainers have any problem with it or see some incorrect statements in it, I welcome you to take part in the review and comment on those.

.. note:: Why do we use ``root`` system installation in the PROD Docker image despite PIP warning against it?

  The image is a multi-segmented image, and the first segment uses the "${HOME}/.local" location to install
  all packages (forced by ``PIP_USER="true"``). The multi-segment image is not used to run airflow, and finally
  the ``.local`` folder is copied to the ``.local`` directory of the Airflow user, which makes it actually safe
  to use, despite the warning provided by PIP. The warning is slightly misleading for an unsuspecting user, as
  it complains about the ``root`` user being used and suggests using a ``virtual environment`` instead (which is
  not too precise). More information about virtualenv being the ``consensus`` for installing Python
  dependencies is available at `PEP 668 <https://www.python.org/dev/peps/pep-0668/>`_; however, it
  lacks clarity on exceptional cases where virtualenv is not applicable - like the Airflow image.
  Unfortunately, PIP does not provide a flag to disable this warning (see the issue where it has
  been discussed at length: https://github.com/pypa/pip/issues/10556). Attempts to clarify it and possibly
  add more explanation to PEP 668 failed. We therefore have to provide this explanation to assure our
  users that it is perfectly fine to install Airflow this way.

  It's perfectly safe to use the ``airflow`` PIP installation in the ``.local`` folder (as
  happens with ``PIP_USER="true"``), and this is what the Airflow image effectively does.
  The suggested virtualenv route is not easily applicable for Airflow, because Airflow has the built-in
  capability of generating its own virtual environments (with ``PythonVirtualenvOperator``), and the attempt
  to use virtualenv failed: https://github.com/apache/airflow/pull/19189

potiuk commented 2 years ago

And the final result of this: I decided to go with creating a separate user in the build image as well (even if it is an additional complication). There is simply no point in fighting this one.

lhoenig commented 2 years ago

Hi, I'm building a system package manager where pip and cpan (and in the future maybe more) are actually first class citizens. I let them manage the packages of their language completely, and address the conflict with pip this way. There, of course, running pip as root is the default and actually the preferred way. Just this warning doesn't fit.

But I guess I really have to patch that out of pip or do something with venv? That's all not very satisfying or clean...

hholst80 commented 2 years ago

Following up on this noncritical but irritating issue: it seems the consensus is to keep the warning (reasonable) and not to provide an escape hatch (IMHO, not reasonable).

PEP 668 is the long term solution here. In the short term, it seems pointless to change something in pip just so that the other half of our user base will be yelling at us

I have no intention of arguing this, but out of interest, what could possibly trigger "the other half" of the user base of Python to start yelling at the maintainers here just because an escape hatch solution has been provided?

hholst80 commented 2 years ago

@uranusjr states

To be clear, the error only appears when you run pip as root, directly on a Python installation. If the goal is to provision the installation across users, it'd be best to use a virtual environment instead. That is enough to suppress the message. And before we go there, yes, we do think it is still best practice to use virtual environments in a container.

I must violently disagree. If anything, I would say that it is the complete antithesis of a "best practice" for containerization of software. If casual users of containers read this, they might think it is correct - it is not, according to me - and it is certainly not an agreed-upon "best practice" in general.

NVIDIA shipped many of their NGC Python-based containers with a venv (and probably still does), but in discussions with the architects it turned out that 1) they think it's not best practice and 2) the build pipeline would require too much work to justify getting rid of the Matryoshka configuration. In conclusion, it is not unreasonable for software to use a venv inside a container environment; it might be a pragmatic solution that avoids additional project complexity. That, however, is not the same thing as saying it is the preferred or even best-practice solution.

RonnyPfannschmidt commented 2 years ago

this best practice suggestion stems from the fact that people keep breaking their system python in strange ways every time

also, if you were to do it properly the other way around, then you'd need to actually package the software you want for your target system; i don't see anyone implementing debs/rpms/other package formats

while i can totally give you that pip "seems" to work most of the time, the occasional subtle breakages have been harrowing to debug every time and strict isolation using virtualenv has been most helpful

potiuk commented 2 years ago

while i can totally give you that pip "seems" to work most of the time, the occasional subtle breakages have been harrowing to debug every time and strict isolation using virtualenv has been most helpful

I think PIP is "right" in saying that root without venv is likely to cause subtle problems.

However, I agree with @hholst80 (again in this thread) that presenting venv as the only solution (and reiterating it here several times) is bad advice for containerisation.

I would love to see (and I really encourage those who have mentioned it here several times) a good example of a complex project that uses venv to build its containers. I have a counter-example, https://github.com/apache/airflow/pull/19189 - my failed attempt to do so. So if we want to speak about facts, I would really love (and I dare say challenge) the next person mentioning venv here as a good solution for building containers to show an example of a complex Python app that has:

For those kinds of requirements (which are not uncommon in the container world), at least from my attempts, containers vs. venv is 1:0.

I hoped - when I started this conversation - that we could simply modify the message to say "use venv or a different user than root". I think that would have solved the problem entirely, without inducing any of the subtle bugs @RonnyPfannschmidt mentioned. But it seems to be (for reasons that are completely mysterious to me) impossible, so users who stumble upon the problem will eventually end up in this thread and continue complaining. It is really, really strange to me why this is such a problem that it leads to angry messages when a simple solution is at hand. I usually prefer to listen to my users and provide helpful and actionable messages that can help them in different situations, rather than arguing - but maybe there are things I am not aware of and other reasons why the message in its current shape is important.

I would gladly make a PR with the change, but I am afraid that another angry "AGAIN" will be thrown at me.

I just hope other users who are misled by the message will land here as well. So, just to make life easier for those users, I will repeat my workaround (maybe it will work for you too, @hholst80).

WORKAROUND:

I gave up on venv and on adding a "counter-warning" explaining that the warning is actually wrong.

I simply created a non-root user in my "build" segment and changed my image to use that user to run PIP (in the build segment) and install all packages using the "--user" flag. That ends up being a nicely working solution for me - no warning, no messing with trying to hide it, and (from what I understand) I am not even close to the subtle bugs mentioned by @RonnyPfannschmidt. And I have a nice, pretty-much-relocatable "environment" (even if no venv was used) in ${HOME}/.local.

That also allows you to take such a .local folder and copy it to later stages in multi-stage builds (thus achieving much smaller image sizes). The only constraint is that some of the binaries there end up with hard-coded paths after they are installed, so the .local folder is not "really" relocatable - it has to stay in the same place for 100% compatibility for all users, but this is easily achievable by giving all users the same home folder. This is 100% fine with the OpenShift guidelines - where each user has to belong to group 0 and umask should be set to group-writeable. With this approach there is no problem dynamically creating further virtualenvs in the container or in follow-up images. They work in the "natural" way: (unlike with the .venv approach) you can easily create a clone environment with everything using venv --system-site-packages, without any extra tools or manual modification.
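
For readers who want the shape of this workaround without digging through the PR, a minimal sketch looks roughly like this (the base image, user name and paths are illustrative; the real Dockerfile is far more involved):

FROM python:3.9-slim AS airflow-build-image
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
    && useradd --create-home --home-dir /home/airflow airflow
USER airflow
ENV PIP_USER="true"
# runs as a non-root user, so no root warning; packages land in /home/airflow/.local
RUN pip install --no-cache-dir apache-airflow

FROM python:3.9-slim AS main
RUN useradd --create-home --home-dir /home/airflow airflow
# copy only the result of the installation, not the build tooling
COPY --chown=airflow:0 --from=airflow-build-image /home/airflow/.local /home/airflow/.local
ENV PATH="/home/airflow/.local/bin:$PATH"
USER airflow

Both stages give the user the same home directory, which matches the "same place" constraint described above.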

My PR is here: https://github.com/apache/airflow/pull/20238 (contains more than just that but it should give the picture).

pfmoore commented 2 years ago

I simply created a non-root user in my "build" segment and changed my image to use that user to run PIP (in the build segment) and install all packages using the "--user" flag. That ends up being a nicely working solution for me - no warning, no messing with trying to hide it, and (from what I understand) I am not even close to the subtle bugs mentioned by @RonnyPfannschmidt.

So the warning about not executing pip as root caused you to investigate your issue and find a safe way of running pip as a non-root user? Sounds like the warning did the job it was intended to do, then 🙂

potiuk commented 2 years ago

So the warning about not executing pip as root caused you to investigate your issue and find a safe way of running pip as a non-root user? Sounds like the warning did the job it was intended to do, then 🙂

No. The warning is fine. But the advice was bad, @pfmoore. (And it still is, even though one of my first comments in the thread was "hey, let's change the advice because it is bad".)

potiuk commented 2 years ago

If you would like to see my perspective: I simply feel misguided after all the discussion. Both from the message and several times in the thread (even when I explained my situation), I got the advice "use venv for your case". I tried, and it was a dead end which caused more problems than it solved. So I just want to prevent other users (yours, @pfmoore) from falling into the same trap. That's it.

I believe the original intention of the warning (and the effect it causes) was really not this. So maybe this is a good time to change it to make it more informative? Is there really any problem with that, @pfmoore?

pfmoore commented 2 years ago

I'm done with this debate, it's just going round in circles. The warning is intended for command line users, not people developing containers. There's only so much nuance you can get into a warning, so we have to draw the line somewhere.

potiuk commented 2 years ago

For those still interested - I just separated out the switch to a non-root user described above into https://github.com/apache/airflow/pull/20744 - so that it is not part of a bigger commit.

Cougar commented 2 years ago

For those still interested - I just separated out the switch to a non-root user described above into apache/airflow#20744 - so that it is not part of a bigger commit.

This single adduser, just to work around the pip message, adds a 332 kB layer to the image with changes in the /etc, /opt (airflow home), /run and /var/log directories, but it looks like nobody cares 🤷

Am I the only one who feels it is counter-intuitive and even sad that we create a problem and then fix it with such a pointless workaround without any real benefit? 🤦

potiuk commented 2 years ago

I don't think PIP maintainers care about it. @pfmoore was very clear about it:

The warning is intended for command line users, not people developing containers

potiuk commented 2 years ago

I opened PR #10772, hoping that it will be accepted.

potiuk commented 2 years ago

Also, just a further example of when things go wrong:

I have a follow-up change in https://github.com/apache/airflow/pull/20747 which actually gets rid of all warnings in our Docker building process. The principle is that when "all is OK", not a single warning should be generated. Thanks to https://github.com/apache/airflow/pull/20744 I was able to achieve that for PROD image building (yay!).

After the change, the only warning I will get is the "new version of PIP is released" one, the next time PIP is released - which is precisely what PIP should be super happy about, because it is a great signal that we should migrate as soon as possible to get rid of it, so it really encourages the behaviour PIP wants. If I were not able to get rid of the other warnings, the "upgrade PIP" warning would be (it actually was) mostly ignored, as it would be lost in the noise of other warnings.

However, we still have the CI image that I cannot do anything about - because in the CI image I need to use the root user and a non-virtualenv, non-"--user" approach, since I am effectively mounting the sources from the local repository over the Airflow sources that get installed in --editable mode inside the image. This is in order to provide a consistent development environment, and there are good reasons (I am happy to explain them in detail) why it has to be done this way.
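
A rough sketch of what that CI setup implies (the paths are illustrative):

# inside the CI image build, as root, against the system interpreter:
pip install --editable /opt/airflow
# at development time, the local checkout is bind-mounted over the very same path:
docker run -it -v "$(pwd):/opt/airflow" airflow-ci bash

Because the editable install records the absolute source path against the interpreter every user of the image shares, a per-user or per-venv install does not fit here, as described above.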

Effectively, what I had to do is "explain" to the users that the warning below is expected:

[screenshot of the expected pip warning in the CI image build output]

This is a "power-user" case, but one that is pretty valid.

I have a kind request to the PIP maintainers to consider adding a flag to silence this warning; otherwise I have no really good way to handle the issue.

pfmoore commented 2 years ago

@potiuk You've made your point here. I don't think you're helping anyone by repeating it endlessly. Can I politely request that you leave it at this point? You're not helping your cause by making the same arguments repeatedly.

For what it's worth, I personally think you have made some reasonable points, but I don't think the issue is anywhere near as clear-cut as you suggest - and while I'll happily admit I'm not an expert on UI design, I am a pip user as well as a pip maintainer and I'm trying to look at this from a user's perspective. That's not to say that I support this change, but neither am I so convinced that I will never change my mind. On the other hand, I'm starting to dread seeing yet another post from you on this topic, and as a result I'm less and less willing to consider this issue (not that it's ever been high on my priority list).

If there's any chance of this message getting changed, by this point it would require consensus from the pip maintainers, and honestly, I believe that by repeating the same assertions over and over, you're more likely to reduce the chances of getting consensus that a change is needed than you are of achieving it.

potiuk commented 2 years ago

It's sad when someone admits that there is a point in someone's argument but is not willing to consider it, while more and more examples are added of where it is needed in real, actual use, and of other people having the same problems.

This message is not for you but indeed for other PIP maintainers who might want to consider this.

I am happy not to post any more examples if I see a single message saying "yeah, we see the point" (I finally got that in the previous comment), followed by "yeah, we are considering it", and thirdly by a decision ("we are not implementing it" - likely followed by definitively closing this issue - or "we are implementing it").

Until the issue is closed, I think that adding more examples and showing (others, not necessarily you) more detailed examples and actual PRs where the previous approach made some users suffer, and showing others how they can avoid that suffering, is a good thing for the users.

pfmoore commented 2 years ago

It's sad when someone admits that there is a point in someone's argument but is not willing to consider it,

I said precisely the opposite. What I said was that I am willing to consider it, but your repeated posts are putting me off from doing so.

This message is not for you but indeed for other PIP maintainers who might want to consider this.

I think I may be the only pip maintainer still willing to engage with you, to be honest. Most other active maintainers appear to have dropped off this thread by now.

I am happy not to post any more examples if I see a single message saying "yeah, we see the point" (I finally got that in the previous comment), followed by "yeah, we are considering it", and thirdly by a decision ("we are not implementing it" - likely followed by definitively closing this issue - or "we are implementing it").

"Yeah, we see the point". "Yeah, we are considering it" (at least I am). There you go.

As for a decision, you'll get that when someone makes one. In the meantime, if you keep posting "examples" that add nothing new, you're just encouraging me (I can't speak for other pip maintainers) to say "no" just to stop the endless notifications.

Until the issue is closed, I think that adding more examples and showing (others, not necessarily you) more detailed examples and actual PRs where the previous approach made some users suffer, and showing others how they can avoid that suffering, is a good thing for the users.

Personally, I think this is bordering on harassment of the pip maintainers (who typically receive notifications on every issue). I accept that you believe you're helping here, but I genuinely don't think that in reality you are.

I've just reviewed the code of conduct, to reassure myself that I'm keeping within its bounds (by attempting to have a respectful conversation even while we're in disagreement). I encourage you to do the same.

At this point, though, I'm unsubscribing from this issue. I thought I'd already done so, but maybe I didn't, or maybe it didn't "stick" somehow[^1]. I'll leave it to the other pip maintainers to decide whether this thread should be locked as no longer productive, because I don't believe it's fair of me, as one of the participants in the disagreement, to make that judgement.

[^1]: Edit: Apparently, if I unsubscribe while writing my comment, then when I submit the comment, github resubscribes me "because I have commented". Stupid github. I'm really going to unsubscribe this time...

layday commented 2 years ago

Generally, if every response is a 500-word apologia, it is safe to assume that a solution won't be found.

potiuk commented 2 years ago

This single adduser, just to work around the pip message, adds a 332 kB layer to the image with changes in the /etc, /opt (airflow home), /run and /var/log directories, but it looks like nobody cares 🤷

Just to clarify this one, @Cougar: this is not as bad as you think. In our case, at least, it does not matter for the final image. The "add user" we had to add is in a "build" segment of the image - not in the final one. This is one of the reasons why the case of building containers (when done by power users) is not really vulnerable to the subtle breakages @RonnyPfannschmidt mentioned.

Look at the comment here

# airflow-build-image  - there all airflow dependencies can be installed (and
#                        built - for those dependencies that require
#                        build essentials). Airflow is installed there with
#                        --user switch so that all the dependencies are
#                        installed to ${HOME}/.local
#
# main                 - this is the actual production image that is much
#                        smaller because it does not contain all the build
#                        essentials. Instead the ${HOME}/.local folder
#                        is copied from the build-image - this way we have
#                        only result of installation and we do not need
#                        all the build essentials. This makes the image
#                        much smaller.

We (and most optimized Python images should do it this way - this is an absolute best practice) install Python dependencies (and use PIP) only in the segment image that is a "throw-away". We only use that segment to install "build essentials" and make sure that all our dependencies - including those that require compiling - get installed. So the "add user" that I added indeed adds some overhead, but only for the "build" segment. This adds a slight overhead at build time, but it has no effect whatsoever on the final image size.

In the final image we do this:

COPY --chown=airflow:0 --from=airflow-build-image \
     "${AIRFLOW_USER_HOME_DIR}/.local" "${AIRFLOW_USER_HOME_DIR}/.local"

This way we only copy the resulting "files" (not layers) from the "throw-away" image to the final image.

And as described in the "Customizing the image" chapter (which explains in detail why we need to have two segments), we can optimize the image a lot:

The above image is the equivalent of the "extended" image from the previous chapter, but its size is only 874 MB. Compared to the 1.1 GB of the "extended image", this is about 230 MB less, so you can achieve a ~20% improvement in the size of the image by using "customization" vs. extension. The saving can increase if you have more complex dependencies to build.

That's why the "subtle breakages" did not matter at all (the image was a throw-away anyhow). That's also why adding the user is not needed at all in this case - but it does almost no harm, so I finally decided to do it because it was the only way to get rid of the warning.

Cougar commented 2 years ago

This single adduser, just to work around the pip message, adds a 332 kB layer to the image with changes in the /etc, /opt (airflow home), /run and /var/log directories, but it looks like nobody cares 🤷

Just to clarify this one, @Cougar: this is not as bad as you think. In our case, at least, it does not matter for the final image. /--redacted--/

I know. In your case I actually don't see any problem at all. The Airflow build task is already very complicated anyway :D

In contrast, some of us use pip to build minimal Python app containers, like I do:

FROM python:alpine
WORKDIR /app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY app.py .
ENTRYPOINT [ "python3", "/app/app.py" ]

Or, in case build tools are needed, it is a little bit more complicated but still simple, and it builds very fast too:

FROM python:alpine
RUN apk add --update --virtual .build-deps build-base libffi-dev openssl-dev \
 && pip install cryptography==2.8 ansible==2.9.11 \
 && apk del .build-deps \
 && rm -rf /root/.cache /lib/apk /var/cache/apk/*

This build actually installs 8 new Python modules, not two. pip does a very good job of resolving all the dependencies.

Please note that these images do not depend on a specific user and work with any UID.

I agree that in many cases it is not a good idea to build images this way - it just does not work at all, or things may break in the future. At the same time, I see that a lot of Docker users need such basic images just to run simple apps.

I can imagine how a distro maintainer would follow another process: first find all dependencies (using pip?), install alpine-sdk, create a proper APKBUILD file and build a package for every Python module (apkbuild-pypi is not maintained any more), and then copy these packages to the next build stage and install them.

In theory it works great; in practice it needs a Dockerfile which looks more like some long CI job configuration.

potiuk commented 2 years ago

Yep. I see your point and sympathise with it a lot @Cougar.

virtuald commented 2 years ago

Here's my (hopefully productive) contribution to this:

I maintain RobotPy, which allows high school students in the FIRST Robotics Competition to program their robots using Python. For ease of use, we have an installer that executes pip remotely on the robot to install all dependencies (except Python itself), including non-Python packages.

If my users see this warning, they're going to think they did something bad, when in fact it's just pip being annoying.

You can probably argue that one shouldn't use pip as a system package manager, but it's really convenient and is easier to use than the system package manager. None of my users will ever use the system package manager, the company that provides the base system doesn't distribute a python 3.10 so there will never be conflicts, and I don't use it anymore except to install my Python.

Just add an environment variable please.

potiuk commented 2 years ago

Encouraged by @pradyunsg in https://github.com/pypa/pip/pull/10772#issuecomment-1012434585, and also addressing the worry expressed by @layday about responses here being too long, I created a blog post that answers (I hope) the questions of @pradyunsg in https://github.com/pypa/pip/issues/10556#issuecomment-938441286

The blog post is here: https://potiuk.com/to-virtualenv-or-not-to-virtualenv-for-docker-this-is-the-question-6f980d753b46

Here is just a brief summary - but I encourage everyone to read it and comment (either on Medium or here):

  1. Virtualenv in Docker is an antipattern, specifically when you consider that users creating containers have different optimisation goals (decreasing the size and complexity of the images) than interactive users (avoiding subtle bugs when mixing system/pip packages).

  2. Immutability and the usage patterns of images make them far less (if at all) vulnerable to the said "subtle bugs" coming from "interactive use" of pip.

  3. Pip does not provide an alternative for "non-interactive" use (apt vs. apt-get), and containers are the building blocks of modern apps - both in the cloud and on-prem - so 'abandoning' non-interactive container-build use cases in favour of interactive pip command-line use is a bad idea. The warning serves the "interactive" users while it is disruptive for "non-interactive" users.

  4. Relatively modern (2017-2019) features and best practices of container building have made some of the arguments that "promoted" virtualenv usage in Docker as recommended in 2014/2015 (https://hynek.me/articles/virtualenv-lives/) obsolete.

Looking forward to constructive discussion on those.

pradyunsg commented 2 years ago

Hi again. This is going to be my final post on this topic, for... quite a while.

Fair warning: The number of swear words I've removed from this post is 15.


I appreciate the Medium post that you've written. Thanks for writing that. It's certainly been useful to see all that you've written, and what your perspective is. Your notes on what makes a "good warning message" are also appreciated, if repetitive since you've posted those before -- though, a decent amount of that post is, so, I guess that's fine.

Looking forward to constructive discussion on those.

Well, I just spent my time reading through the absolute barrage that you've posted in your earlier responses here, and seen how you've engaged in discussion with my fellow maintainers here as well as in various other places. Those have not been particularly constructive overall IMO. I don't think we can just flip a switch and make this discussion constructive now.

Yes, I appreciate the blog post. Yes, I acknowledge that you really care about this topic. No, I don't appreciate the general rambling/brigading. No, I don't appreciate the number of times this topic has popped up in my GitHub notifications and email.

To be brutally honest, it's just much less work to just add a --no-warn-when-using-as-a-root-user-to-manage-os-packages [^1] and make you go away because you've got what you want. So... that's what I'll do -- you'll likely have your "I know better" flag in 22.1 (April 2022). I'll make the PR for this myself next month. I don't think this topic is going to be contributor-friendly anyway.

For now though, I'd really prefer to spend my energy somewhere that's an order of magnitude more productive and enjoyable, like getting the CI working for #10795, preparing the next release, and making actually-useful UX improvements like #10241.

[^1]: The long name is to discourage interactive use


Now, I don't usually spend my energy publicly posting about how I think person-who-wrote-a-post-on-the-internet is wrong, but I'll do so now -- since you've posted that as a link here, and directly referenced that post as a response to stuff I've said -- and I will be limiting it to the three most egregious points:

Virtualenv are meant to be “single-user” only — they are not relocatable, they do not provide the guarantees that mutliple different users will be able to use same virtualenv.

This is not true. Okay, the bit about them not being relocatable is. The rest is not.

I regularly use things in virtualenvs created by a different unix user, which "just works" as long as I have read and execute access. If I have write access, I can even modify that environment.

Those multi-segmented build — by definition — have no mentioned above “subtle bugs” problems.

I can only assume this is in reference to multi-stage builds? Well, that's what Google tells me.

So, the issue is that you'd modify something that the OS depends on / owns, in a way that is incompatible with the OS-provided software. The "subtle bugs" are about modifying OS packages and breaking them. That problem still exists in multi-stage builds as long as you're modifying the same site-packages directory as the OS.

This means that the “Container developer” can build the ritght set of instructions without induding the “subtle bugs”.

This reads too much like someone who thinks that somehow they're immune to making mistakes or to not noticing issues that are happening because the wrong directory was modified three-four layers deep in tooling. I don't think any human/developer is infallible.

I also find it ironic that this sentence has multiple typos.


Now, for your most recent post here:

Virtualenv in Docker is an antipattern

No, it's not. Please, stop saying this. Your insistence on refusing to stop parroting this is part of why I want to disengage from this discussion.

My fundamental point, from the start 1 2, is that it is not. I'm also not the only one who has said this, but I'll speak only for myself. As far as I can see, you have not directly acknowledged or responded to this beyond dismissing it outright. I'll restate some points on this:

Honestly, I was somewhat sympathetic to your point that Docker usage should be considered different, but based on reading the article that you've posted, I'm actually convinced that the current behaviour is correct; and that the broader Python community could actually be in a better spot if we flip pip's behavior to have --require-venv be the default.

Immutability and use patterns of images make it far less (if at all) vulnerable

No, it does not.

Take a Docker image for a Debian-based OS. Run pip list as root. Notice how many things are installed already. pip uninstall one of them, and... congratulations! Now, you've messed with what your OS shipped and fucked up that layer. Something in the OS is broken now. This was a pip uninstall. The exact same thing is possible when you pip install something with a dependency on one of the existing packages. Except, now, it's not even plain missing (resulting in a clean start-of-execution failure, in most cases). It's a likely incompatible version of the package. You still have a fucked OS and a fucked layer.

Every layer built on top of this will be fucked, because this layer is. You're now building on a broken foundation. It does not matter that your layers are immutable -- it's still broken. You might be fine with that risk, because you know that the different version of the package you've installed is definitely compatible with the OS. Because you've done the same work that the distro maintainers do. Just fucking ignore this warning then, because you clearly "know better".
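
A minimal illustration of the failure mode described above, assuming a Debian-based image where some Python packages were installed by apt (the package and version names are only examples):

# apt-managed Python packages live in /usr/lib/python3/dist-packages and are tracked by dpkg
apt-get update && apt-get install -y python3-pip python3-requests

pip3 list                       # apt-managed packages show up next to pip-managed ones
pip3 install 'urllib3==1.21.1'  # may shadow or replace a dependency apt-installed tools expect
pip3 uninstall -y requests      # removes files that dpkg still believes it owns
dpkg --verify python3-requests  # dpkg now reports missing or modified files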

The warning serves the "interactive" users while it is disruptive for "non-interactive" users.

Nope, this warning is equally applicable to non-interactive use as it is for interactive use. See bullet list above for why.

best practices of container building have made some of the arguments that "promoted" virtualenv usage in Docker as recommended in 2014/2015 ([snip link]) obsolete.

Which "best practices of container building" are these, specifically? Which of the recommendations do they make obsolete, specifically?

These aren't rhetorical questions.

Now, I haven't mentioned this for $reasons until now -- I work with containers and pip on a roughly daily basis as a part of my day job, use them for various personal projects, maintain multiple foundational tools for Python packaging, have had elaborate discussions with Linux distro maintainers on the topic of where Python packages should get installed in various scenarios (including within containers), helped design mechanisms to protect OS packages from Python packaging tooling, worked with various volunteers on other projects on their workflows (occasionally involving containers), was the person who reviewed + merged the PR that added the message that started this whole discussion and there's probably more stuff that's relevant that I'm forgetting. If I've genuinely missed some "best practice" on this topic that you know, I'd really like to know about it because I have a lot of places to apply that.

I'm especially interested in whether what you're claiming is obsolete is related to one of the following sentences from the post you've referenced:

Stop discussing virtualenv versus system isolation as if they were mutually exclusive.

Whenever you install a system tool written in Python you can expect some kind of breakage.

The operating system Python’s site-packages belongs to the operating system.

Sadly, there are many missionaries boldly proclaiming the end of virtualenv. Mostly because of containers in general and usually because of Docker in particular.

I really hope that the "best practices" you mention are not something wishy-washy. Things like "Containers provide sufficient isolation", or "Containers are for a single application", or "small image sizes are the thing I aim for" ain't it, chief.


Finally, please don't feel like you'd need to be in a hurry to provide a response. I'll be unsubscribing from this issue after posting this and hopefully won't be looking at this for, like, multiple weeks. If anyone @-mentions me before then, that'll only serve to annoy me by the fact that this person can't respect boundaries (especially since this has happened before, around this topic already) and that will likely push back when I'd come around to addressing this issue. I will read the responses here, at some point in the not-so-near future though, and the question I've asked at the end here will be the reason -- so please do answer that.

PS: As a fun fact -- Docker usage was my primary concern when adding this warning and the thing that convinced me that this is still a good idea was the fact that my fellow maintainers noted that using virtualenv within Docker is a good idea. That PR still has "changes requested" from me. :)

potiuk commented 2 years ago

Thanks for that. It is encouraging to see that it will be addressed. It is also interesting that you had similar concerns initially. That is actually a bit ironic, as opposed to my slight dyslexia, which I do not find ironic, really.

I am also certainly not going to @-mention you here for quite a while. I hope you will read this in a few months, so let me leave it as a message here. I will set a reminder to come back to the topic mid-March, and if I can help with testing, I am happy to do so. Also, I think there are at least a few points from my list (optimizing for size and dynamic virtualenvs, for example) that you have not responded to. So I hope that when things settle down, we can discuss those.

But I also think this is a good place for others to chime in and state their opinions. This is what I really meant by "constructive discussion" - to have more voices and opinions coming from different places - various types of users, not only maintainers. I do hope that in the coming months (and when the change is PR'd and discussed) others, not only me, can express their opinions here as well (some of them already did).

BTW, thanks also for pointing out the typos. I am slightly dyslexic, and while I often review and spell-check my writing, some things fall through the cracks. I've learned that not everyone understands that, especially native speakers - I know how annoying it is for someone who is a native speaker. Luckily, I do not treat it as a personal comment, but as an opportunity to improve. Sometimes a word replacement like segmented/staging gets stuck in my head, hence my - grave - mistake of using those two interchangeably; I have already corrected it in my post.

Thanks for that; it was rather unprofessional of me not to check, especially the stages vs. segments. I am truly sorry you had to go through it.

webknjaz commented 2 years ago

Since you asked for opinions — my opinion is exactly the same as Pradyun's. I didn't really have the energy to elaborate on every tiny bit earlier, and only pointed you at Hynek's article (which was published some time ago but is not outdated at all, despite the speculation that a mere date governs whether something is deprecated) because of that, so I'm glad that Pradyun had some time to explain everything in detail. Reading his comment, I don't see anything that I would have a different opinion about.

It is dangerous to provide that band-aid option for people, because they may break things without even realizing it (which is far more destructive than failing early). So I understand why someone may want an option like --i-want-to-dangerously-mutate-the-filesystem-not-waiting-for-pep-and-i-wont-ask-for-support just to stop the complaints, from the patience perspective, but from a technical standpoint it's unreasonable to ever allow this unless that PEP is implemented.

glensc commented 2 years ago

I find using an environment variable more convenient than a command-line option. That way I can just put a single ENV PIP_ALLOW_ROOT=1 in the Dockerfile and the rest just follows through. Unless it's intended by the maintainers to be as painful as possible, given the name of the option chosen.

ssbarnea commented 2 years ago

@glensc I am, sadly, afraid that there is an active desire to inflict pain on anyone using the upcoming option, so it will likely be implemented in the way least convenient for the user of the option. I say that because many convenient proposals were made but were refused. On the bright side, I am glad that something will be added, as I was expecting the thread to get locked and nothing to be accepted.

As long as we still aim to find a compromise and try to understand what the other side is saying, we still have some hope.

IMHO pip as a tool has overextended its role in policing the way it is used. It assumed for too long that everyone is an "end user", as in a developer, and to some extent ignored those using it professionally and in automation.

What if you use pip to install, as root, a single wheel with no dependencies, or you use --no-deps? What if you use a chroot environment? -- especially if you do these from inside your build script and not from an interactive CLI.

One can easily see how pip could have been too smart for its own good. A false-positive warning is toxic and has a serious effect, as it makes users ignore warnings in general, even when they should really take them seriously.

webknjaz commented 2 years ago

@ssbarnea for the case you mentioned, you should probably use installer, not pip. Also, we have already established that unless the use case is a very narrow corner case, it is bad to invoke pip as root in general, and it is best practice to use virtualenvs. Knowing that pip is supposed to target the generic use case, it is only to be expected that it has this warning, because it only annoys a very small number of users - those who are either planning not to care about having broken envs with surprising behaviors, or have unforeseen/undetectable corner cases.

Honestly, seeing how many people just come to thanklessly complain, assign blame and undermine the free labor of the maintainers, dismissing the constructive explanation without trying to understand it, I would be very much in favor of locking this thread. You forget that they don't owe you anything at all, not even that explanation; stop trying to demand it be your way or the highway. The fundamental property of FOSS is that you can fork it or switch to something else if your ideas do not match the project goals. And that would be fine, but this sort of entitled behavior we're seeing in the thread is toxic bullying, and it is far from okay.

virtuald commented 2 years ago

@webknjaz what's installer?

layday commented 2 years ago

The fundamental property of FOSS is collaboration. It's not "fork or switch". Let's attempt to communicate in a collaborative manner.

layday commented 2 years ago

@webknjaz what's installer?

installer is a wheel installer; see https://github.com/pradyunsg/installer. Currently it does not have a CLI, but there's an open PR to add one.

virtuald commented 2 years ago

@layday thanks! That isn't a good fit for my use case, since it doesn't seem to do... well, most things one wants when installing packages. If one were to compare apt/dpkg, pip is more like apt and installer seems to be a lesser version of dpkg.

As (effectively) the python distributor for the robot controller I distribute for, my current solution is to just patch the version of pip that we distribute. But it would be nice to not have to.

webknjaz commented 2 years ago

@virtuald what exactly is stopping you from installing dists into a separate location that does not collide with the system site-packages, under a non-root user? Also, would using a zipapp bundle be helpful? It'd solve some of the problems and would be pretty portable; look into pex/shiv for generating such archives.
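
For reference, the kind of separation suggested here can look roughly like this (a sketch; the package names and paths are illustrative):

# install into a dedicated directory instead of the interpreter's site-packages:
pip install --target /opt/robot/site-packages robotpy-component
PYTHONPATH=/opt/robot/site-packages python3 -m robot_program

# or build a self-contained zipapp with shiv (pex works similarly):
pip install shiv
shiv -c robot-console-script -o /opt/robot/app.pyz robotpy-component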

potiuk commented 2 years ago

Honestly, seeing how many people just come to thanklessly complain, assign blame and undermine the free labor of the maintainers, dismissing the constructive explanation without trying to understand it, I would be very much in favor of locking this thread. You forget that they don't owe you anything at all, not even that explanation; stop trying to demand it be your way or the highway. The fundamental property of FOSS is that you can fork it or switch to something else if your ideas do not match the project goals. And that would be fine, but this sort of entitled behavior we're seeing in the thread is toxic bullying, and it is far from okay.

Just a note on that, @webknjaz. I really love what the pip maintainers are doing, I sincerely appreciate it, and I have said so multiple times. I have defended pip multiple times against poetry and pipenv, which are much more opinionated but far less powerful and useless for our rather complex case in Apache Airflow. I even ran a full presentation on how we manage dependencies in Airflow https://www.youtube.com/watch?v=_SjMdQLP30s&t=2549s where I praised the resolver work and the versatility of pip vs. other package managers. When the new resolver came out last year, I did complain about the way it broke our installations just before we released a major release, but I eventually helped to test it and I think the pain was definitely worth it. And I very, very much appreciate all the work done by the pip maintainers. Heck - even recently I started (following a suggestion from @uranusjr) the discussion on switching fully to pipx as the driver for our development environment - even if it is not perfect https://github.com/apache/airflow/issues/20921 - because I really appreciate all the work done by the pip maintainers.

I am not a random troll who complains. I have valid concerns, and I am a maintainer of a really serious FOSS project that has needs likely to be shared with other power users (and I have reason to believe it's not a 'small' group).

And FOSS is not "all or nothing". By saying "comply or fork" you leave no room for collaboration, discussion and correction of course. FOSS is all about collaboration and discussion - and yes, sometimes course correction. I am not "just complaining". I made a huge effort trying to comply with the 'venv' approach when I was directed there multiple times. It eventually did not work and I had to abandon it. I am not bullying anyone. I read all the materials, I understand the problem. Really. I just think there are cases where other problems and optimisation goals are far more important than 'fucked layers'. Prioritising this one above all other needs is - in my opinion - a bad choice if applied without an opt-out possibility. And my actions here are only trying to convince the pip maintainers that other goals might be more important in a number of important cases.

Unlike some other people, I never, ever (look at my posts) said that "virtualenv is dead". Never, ever. I praise and use virtualenv on a daily basis, and I know Docker containers do not replace virtualenv. What I merely want is to make people aware, and give enough examples, of where going with virtualenv is not the best choice. And I want to be able to do that consciously, without raising false alarms, when I choose to.

Speaking of installer - if there were an equivalent of "apt-get" in pip for non-interactive use, I would gladly use it. I wrote about it in my post https://medium.com/@jarekpotiuk/to-virtualenv-or-not-to-virtualenv-for-docker-this-is-the-question-6f980d753b46?postPublishedType=repub - this is one of the multiple reasons I explained there for why going virtualenv is not the only choice, and I gave several examples where the warning is a false positive. Please read that post. Try to be empathetic and understand my point of view - the same way I tried, and failed, when trying to apply the venv solution.

I really want to solve the problem I have (and many people have) at hand, rather than waiting for PEP 668 to be approved and adopted by the distros (which will take years) and for PRs to add a CLI to installer. I want to use the best tool I know for the job, which is under the PSF umbrella; I do not want to have to search for another third party.

I am really surprised how this became one of the major discussions in pip when we already know it can be solved by a simple flag (and, as I understand it, it will be).

webknjaz commented 2 years ago

@potiuk my rant was caused by the sum of behaviors of people interacting here. I didn't intend for it to be targeted at you personally; I'm sorry that it came out like that.

I do understand that you believe your case is special enough for pip to allow it, and I've read your article. I disagree with this only after trying to understand your frustrations: I still think that this band-aid is not something pip should provide. Instead, I'd prefer to have some guidance/documentation/helpers contributed to help people implement this type of setup without fear of breaking things. FWIW, in your case I'd go for creating a venv, setting env vars at the beginning of the image creation, and invoking things from that venv directly. Maybe this would need some wrappers to make things work nicely together. I see now that this may be annoying. But ultimately, I believe that pip, as a project used at scale across the board, should enforce generally safe practices and let power users handle their more dangerous adventures with external efforts.
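A minimal sketch of the kind of setup being described here (base image and package are placeholders; whether plain PATH manipulation is sufficient for a given project is exactly what is debated elsewhere in this thread):

# Hypothetical sketch: create the venv once, early in the build, and point
# VIRTUAL_ENV/PATH at it so every later RUN step uses it without activation.
FROM python:3.9-slim

ENV VIRTUAL_ENV=/opt/venv
RUN python -m venv "$VIRTUAL_ENV"
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

# From here on, "pip" and "python" resolve to the venv, so nothing is installed
# into the system site-packages and the root-user warning does not apply.
RUN pip install --no-cache-dir flask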

Yes, this flag will probably be implemented, but I'd say that being annoyed by users is not the best motivation for making such changes. I'm sure, as a fellow maintainer, you can appreciate that maintainers normally need to consider things like the well-being of average users, not just narrow use-cases.

As for saying that venv is an antipattern in containers - it's not. A lot of evidence has been provided in this conversation; some people claim that it may be, others disagree. It's only an antipattern for the subset of people who think so for various reasons, and this doesn't make it true in general. FWIW, the provided opinions didn't convince me that venv may be unnecessary in containers; they only showed that people tend not to care about the correctness that I believe pip should care about (even when some subset of users doesn't).

P.S. If somebody wants to challenge the status of the venv-in-containers recommendations, it's probably best to start a discussion at https://discuss.python.org/c/packaging and have the conversation there. This is not a question that concerns pip specifically, but rather the wider Python packaging community. When some consensus is reached, that could go into https://packaging.python.org.

potiuk commented 2 years ago

FWIW, in your case I'd go for creating a venv, setting env vars at the beginning of the image creation, and invoking things from that venv directly. Maybe this would need some wrappers to make things work nicely together. I see now that this may be annoying. But ultimately, I believe that pip, as a project used at scale across the board, should enforce generally safe practices and let power users handle their more dangerous adventures with external efforts.

This is precisely what I tried, and failed at, in https://github.com/apache/airflow/pull/19189 (and I also mentioned it several times in this thread). So my frustration is not only because I think using virtualenv is bad for the image, but also because I know it did not work for Apache Airflow. The details are in the 53 (!) comments in that conversation, where, discussing with my fellow committers, I tried different tools and solutions to make it work for us. I gave up after some 2 weeks of trying and another 2 weeks of deciding whether it made sense to continue.

As for saying that venv is an antipattern in containers - it's not. A lot of evidence has been provided in this conversation; some people claim that it may be, others disagree. It's only an antipattern for the subset of people who think so for various reasons, and this doesn't make it true in general. FWIW, the provided opinions didn't convince me that venv may be unnecessary in containers; they only showed that people tend not to care about the correctness that I believe pip should care about (even when some subset of users doesn't).

I thought a bit about this one and I do agree that saying it is a "general" antipattern was wrong. And I am super happy to change my statement to make that clear:

Looking at all the above points, and having experience with both approaches (and even attempting to convert Airflow to use virtualenv), I truly believe virtualenv is an anti-pattern for container building in a number of cases. Not always, but there are valid and important cases where it is. Virtualenv is an antipattern especially when you care about the size of the images produced and use multi-stage builds to achieve this optimisation, and also when you want to create dynamic virtualenvs in the image. There are, of course, cases where the size of the image or dynamic virtualenv execution is not important; then, by all means, a virtualenv in the image might be a good choice.

This is how I just modified my blog post https://potiuk.com/to-virtualenv-or-not-to-virtualenv-for-docker-this-is-the-question-6f980d753b46 (and I added an "UPDATE" correction at the top explaining it).
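For context, a rough sketch of the multi-stage, --user-based pattern being contrasted with the in-image venv above; this is illustrative only (image names and the package are placeholders, not Airflow's actual Dockerfile):

# Illustrative only - not Airflow's actual Dockerfile. Build tools live in the
# first stage; only the installed packages (the ~/.local tree produced by
# "pip install --user") are copied into the final, much smaller image.
FROM python:3.9-slim AS builder
RUN apt-get update && apt-get install -y --no-install-recommends build-essential \
    && rm -rf /var/lib/apt/lists/*
RUN pip install --user --no-cache-dir flask

FROM python:3.9-slim
COPY --from=builder /root/.local /root/.local
ENV PATH=/root/.local/bin:$PATH
# Every pip invocation above runs as root outside a venv, which is exactly
# where the warning discussed in this issue shows up on every build.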

I am really not in the camp of people who say "containers replace virtualenv". I never, ever said that, simply because I don't believe it is true.

If you find my earlier (over-simplified) statement that "virtualenv is an antipattern for containers" wrong, then I also hope you will find recommending virtualenv as the only solution, when there are valid cases where it's not, to be an equally bad statement.

And what I would really love is for that statement to be corrected (similarly to what I just did in my post) - ideally both in its wording and with the "flag" that allows disabling the warning which, in my opinion, might get people misdirected. And I am saying this from my own experience, where this misdirection led us to a dead-end, high-effort PR.

PatrickDRusk commented 2 years ago

At the suggestion of pradyunsg, I heeded this:

Take a Docker image for a Debian-based OS. Run pip list as root. Notice how many things are installed already.

In my case, it was an image for an AWS Lambda. After everything OS-level was installed and all that was left was installing my Python code, I did "pip list":

Package    Version
---------- -------
pip        22.0.3
setuptools 57.5.0
wheel      0.37.1

Not too worrying. I would be +1 for an environment variable to shut off that warning.
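For what it's worth, pip did later ship such an opt-out: the --root-user-action option added in pip 22.1, which (like other pip options) can also be set through an environment variable. A minimal sketch of using it in a Dockerfile (the package is just a placeholder):

# Sketch of the opt-out pip eventually added (pip 22.1+); it silences only the
# root-user warning, nothing else.
ENV PIP_ROOT_USER_ACTION=ignore
RUN pip install --no-cache-dir flask

# Equivalent per-invocation form:
# RUN pip install --root-user-action=ignore flask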

pradyunsg commented 2 years ago

I guess you didn't read the whole thing.

If anyone @-mentions me before then, that'll only serve to annoy me by the fact that this person can't respect boundaries (especially since this has happened before, around this topic already), and will likely push back the point at which I'd come around to addressing this issue.

PatrickDRusk commented 2 years ago

Sorry, I didn't realize the impact of the "@". I apologize.

jaraco commented 2 years ago

This issue affects pip-run also. Consider:

$ docker run -it jaraco/multipy-tox pip-run -q keyring -- -c pass
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
WARNING: You are using pip version 21.3.1; however, version 22.0.3 is available.
You should consider upgrading via the '/usr/bin/python3.10 -m pip install --upgrade pip' command.

I want to use that command (with a real exercise instead of pass) to illustrate to a user how to replicate an issue with keyring in a docker image. Nothing is getting installed to the system site-packages (only to a temporary directory), and venv doesn't work in this scenario (a one-liner). The warning only adds noise to the otherwise clean output. It adds distraction to the lesson I'm trying to give.

dmartin-isp commented 2 years ago

I know I'm not going to convince anyone, so this is more a vote than anything else.

It's not pip's job to secure my system. It's pip's job to install Python packages. I can't think of another piece of software that behaves this way. Having an escape hatch is totally reasonable. Having this warning in every container log from now to the end of time is not reasonable.