f9n opened this issue 6 years ago
https://pypi.org/project/pip/#history shows 18.1 was released a couple of hours ago. Rolling back to 18.0 via `pip install pip==18.0` resolved this error for me.
Interestingly enough, this does not impact CentOS 6.x but does impact CentOS 7.x.
Doing

```ruby
python_runtime '3' do
  pip_version '18.0'
end
```

does not work because of https://github.com/pypa/pip/issues/5857, which is probably the cause of the original issue, given that they occurred at the same time.
Is there any workaround we can implement in the cookbook itself?
You can try setting the `get_pip_url` property to an older version of the script, either one you host yourself or one of the legacy scripts like https://bootstrap.pypa.io/3.3/get-pip.py.
This works for me:

```ruby
python_runtime '3' do
  get_pip_url 'https://github.com/pypa/get-pip/raw/f88ab195ecdf2f0001ed21443e247fb32265cabb/get-pip.py'
  pip_version '18.0'
end
```
Hopefully this will be fixed upstream shortly. I'll monitor the upstream ticket and see how it goes.
I am getting the same thing for `python_runtime '2'`, any tips for this? plz and ty!
@aungger Same fix, the current default `get-pip.py` script is broken.
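For reference, a minimal sketch of applying the same fix to the Python 2 runtime; it simply mirrors the Python 3 example above, reusing the same pinned get-pip.py revision and pip release:

```ruby
# Sketch only: mirrors the Python 3 workaround above for python_runtime '2'.
# get_pip_url pins a known-good revision of get-pip.py and pip_version keeps
# pip at 18.0 until the upstream get-pip / pip 18.1 incompatibility is fixed.
python_runtime '2' do
  get_pip_url 'https://github.com/pypa/get-pip/raw/f88ab195ecdf2f0001ed21443e247fb32265cabb/get-pip.py'
  pip_version '18.0'
end
```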
If your dependency on poise-python is through a wrapper cookbook and you can't change the `pip_version` or `get_pip_url` parameters of the resource directly, we had success implementing the following workaround in our wrapper cookbook:

```ruby
edit_resource(:python_runtime, '2') do
  pip_version '18.0'
  get_pip_url 'https://github.com/pypa/get-pip/raw/f88ab195ecdf2f0001ed21443e247fb32265cabb/get-pip.py'
end
```
If you're using a virtualenv resource, you also need to set `pip_version` and `get_pip_url` on the `python_virtualenv` resource:

```ruby
python_virtualenv '/opt/venv' do
  pip_version '18.0'
  get_pip_url 'https://github.com/pypa/get-pip/raw/f88ab195ecdf2f0001ed21443e247fb32265cabb/get-pip.py'
  action :create
end
```
Update: they merged a fix to get-pip, but it seems to only resolve the part where `get_pip_url` had to be specified. So

```ruby
python_virtualenv '/opt/venv' do
  pip_version '18.0'
end
```

works now, but blows up without the `pip_version` when it tries to install setuptools.
Confirmed, overriding the attribute will force poise-python to use pip 18.0 and our cookbooks now work as expected:

```json
"poise-python": {
  "options": {
    "pip_version": "18.0"
  }
}
```
This issue still appears in a wrapper cookbook. Can someone please help?
`python_runtime '3'` failing on Ubuntu 18:

```
python_runtime[3] action install
Running handlers:
[2018-10-09T08:45:32+00:00] ERROR: Running exception handlers
Running handlers complete
[2018-10-09T08:45:32+00:00] ERROR: Exception handlers complete
Chef Client failed. 2 resources updated in 10 seconds
[2018-10-09T08:45:32+00:00] FATAL: Stacktrace dumped to /tmp/kitchen/cache/chef-stacktrace.out
[2018-10-09T08:45:32+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2018-10-09T08:45:32+00:00] FATAL: PoiseLanguages::Error: Unable to find a candidate package for version "3". Please set package_name provider option for python_runtime[3].
```
@lopezm1 Good find! This was the quickest and easiest way to get around this error for me.
> This issue still appears in a wrapper cookbook. Can someone please help?
I was having this issue if a previous chef run was cached. Have you tried clearing the cache or running the cookbook on a fresh server?
> I was having this issue if a previous chef run was cached. Have you tried clearing the cache or running the cookbook on a fresh server?
Right, overriding the `pip_version` in the attributes file of the wrapper cookbook resolves the issue now, like:

```ruby
override['poise-python']['options']['pip_version'] = '9.0.3'
```

Thanks for the help.
Just a note for some folks for whom the workarounds above might show no effect: if pip==18.1 is already installed (and your resource is using the `provider: system` option), the broken script is executed with that version and fails before pip can be downgraded. So in that case you may need to run `pip install pip==18.0` manually once as an additional workaround.
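If you'd rather not do that by hand, a rough sketch of automating the one-time downgrade from a wrapper cookbook could look like the following; the pip path and the grep guard are assumptions for a typical Linux system, so adjust for your platform:

```ruby
# Sketch only: downgrade an already-installed pip 18.1 back to 18.0 before
# poise-python runs the (currently broken) get-pip.py script.
# '/usr/bin/pip' and the version check are assumptions; adjust as needed.
execute 'downgrade-system-pip' do
  command '/usr/bin/pip install pip==18.0'
  only_if '/usr/bin/pip --version 2>/dev/null | grep -q "pip 18.1"'
end
```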
Let's hope the patch in #134 gets merged soon. A long-term solution that does not depend on any pip internals would be even nicer.
As a workaround, since we need this to work to deploy our software using poise-python, I forked this repo into our own copy, used Rake to build a version from the tip of master that has the fix for pip 18.1, and saved it to a branch (just a convenient place to put the release). I put this line in my Berksfile:

```ruby
cookbook "poise-python", "1.7.1", git: 'https://github.com/twistle/poise-python.git', branch: 'fix/pip-18-1-problems'
```
Then when 1.7.1 is really released, I will switch back to using the version from the Chef Supermarket.
Posting this here in case others might find it helpful.
You're welcome to use this or fork it and use it from your own repo.
when is the 1.7.1 version getting released?
Why is 1.7.1 not out yet? :(
I explained this on another issue, but my CI system is pretty much busted so I have no safe way to vet a release. No one seems to be willing to pay for the time it would take to fix and I’m no longer doing pro-bono Chef work when I can avoid it.
@coderanger How could we collaborate to help...? At Twistle we're using this code as part of our chef build for a production project, so maybe there's a way to help. It seems like others are in similar positions. Would you be willing to point me at the other issue where you explained the problem?
@coderanger P.S. I get not wanting to do volunteer work using chef. I feel the same way! 🙂
I’m on my phone so easier to just summarize here again: the first issue is that since my last release, Travis CI has cut the maximum time for builds from two hours to one, which means my tests get killed. But I’m not inclined to fix that because the second issue is my (very generous) infrastructure sponsor will sadly be terminating their open source sponsorship program so the infra that my tests actually run on will go away in a few more weeks. So to get my CI back into a healthy state would require a pretty major rebuild, probably switching providers as well as reworking things to not use the dedicated infra (which will make the tests even slower).
@coderanger I might be able to help with this via the OSUOSL. PM me on slack if you're interested.
@ramereth Cool!
@coderanger I might be able to help with moving things over to a new provider if you need more help.
Maybe there are others that could pitch in too?
I don't understand; it worked in the past.