Closed mivade closed 2 years ago
+1
I ran into the same issue.
conda env create -f environment.yml -v
gives me:
Traceback (most recent call last):
  File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/conda/exceptions.py", line 640, in conda_exception_handler
    return_value = func(*args, **kwargs)
  File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/conda_env/cli/main_create.py", line 108, in execute
    installer.install(prefix, pkg_specs, args, env)
  File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/conda_env/installers/conda.py", line 32, in install
    channel_priority_map=_channel_priority_map)
  File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/conda/plan.py", line 475, in install_actions_list
    dists_for_envs = determine_all_envs(r, specs, channel_priority_map=channel_priority_map)
  File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/conda/plan.py", line 540, in determine_all_envs
    spec_for_envs = tuple(SpecForEnv(env=p.preferred_env, spec=p.name) for p in best_pkgs)
  File "/home/ubuntu/anaconda2/lib/python2.7/site-packages/conda/plan.py", line 540, in
I suspect that this has something to do with "channels" and how conda sorts dependencies. I therefore wanted to see whether the situation would be solved by using the old "sorting algorithm":
conda config --set channel_priority false
and then again:
conda env create -f environment.yml -v
to no avail.
Is anyone looking into this? I am experiencing the same problem after exporting a conda environment on macOS and then trying to reproduce the environment on Ubuntu 16.04. Actually this is literally the only thing I did.
I exported using conda env export > environment.yml, and tried to re-create using conda env create --name my-project --file environment.yml. I receive the same error as reported by OP:
ResolvePackageNotFound:
- ca-certificates 2017.08.26 ha1e5d58_0
I also tried to first create a conda environment without the file, then install from the file, with a different error:
$ conda create --name my-project `grep -o 'python=[0-9.]\+' environment.yml`
$ conda install --file environment.yml
CondaValueError: could not parse 'name: my-project' in: environment.yml
This is the same error I receive if I try to create the environment from the file using conda create rather than conda env create.
Oftentimes build numbers don't match across platforms. Use the --no-builds flag with conda env export:
conda env export --no-builds > environment.yml
What I have generally been doing to work around this is to hand-craft environment files rather than using conda env export, and only include packages that I'm explicitly depending on. This is not always ideal (especially for things that might have a lot of dependencies), but it has been working well enough so far.
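To illustrate the hand-crafted approach, a minimal environment file might list only direct dependencies and leave everything transitive to the solver; the name, channel, and package pins below are placeholders, not anything from this thread:

```yaml
# Hand-crafted environment.yml: only direct dependencies, loosely pinned.
# The solver fills in platform-appropriate transitive dependencies at
# create time, so the same file can work on macOS and Linux.
name: my-project
channels:
  - defaults
dependencies:
  - python=3.6
  - numpy
  - pandas
```

Because no build strings or platform-only packages (libcxx, appnope, etc.) appear, such a file avoids the ResolvePackageNotFound errors above, at the cost of weaker reproducibility.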
Thanks for the --no-builds tip, @kalefranz. That seemed promising. I believe this changes the problem, but I still have similar-looking errors, albeit a much smaller list of failed packages:
ResolvePackageNotFound:
- appnope=0.1.0
- libcxx=4.0.1
- libgfortran=3.0.1
- libcxxabi=4.0.1
Note: I also ran conda update conda, which updated me to 4.4.7 (miniconda and anaconda don't ship with the latest conda?).
Is there a simple explanation here? The Linux channels are not in sync with the macOS channels? Or... I'm fairly new to Python... Can package dependencies vary depending on the destination platform?
@mivade, good idea... Glad to hear it's a working path forward. It kind of defeats the purpose of using an environment manager, though — even if I am open to it, it would be ridiculous to try to get my team on-board, and expect them to learn how to do it and get it right, with complexities such as channels, etc. But I'm not sure what other option I have right now!
For reference, I believe npm install --save and yarn add behave similarly to what @mivade describes. When you explicitly install/add a package, it only writes that package to your manifest, but will also install any dependencies. The dependencies don't need to be written, because they are just that (dependencies) and can be determined dynamically at install-time. I believe this is a better pattern than what is currently followed by conda env export, which appears to be more equivalent to a lock-file, which works great in node-land but apparently doesn't work so well in python-land.
I have already wasted too much time on these issues to dig any further on my own. If you have other ideas, I will definitely give them a shot.
The reason this issue is tagged as a feature request and with the “environment-spec” tag is because we know there’s quite a bit of work we need to do here to improve the user experience.
Indeed conda env export should only include, or have a mode that only includes, "user-provided" specs, and not the equivalent of a lock file that lists every package in the environment.
I also am blocked by the libcxx issue:
Using Anaconda API: https://api.anaconda.org
Fetching package metadata .............
ResolvePackageNotFound:
- libcxx 4.0.1*
Like the original poster said, very aggravating that it takes 10s to see the failure, then fixing it just brings on more.
Can't there be a switch to just install everything possible from the environment, then alert which ones failed afterward? This defeats the #1 promise of anaconda being a solution to the "runs on my computer" problem. The biggest/only thing I want from anaconda is the export reqs and import reqs to work flawlessly.
Is there another workflow I'm missing that is the intended path of deploying code to another computer? I thought it was supposed to be conda env export > env.yml on one machine, conda env create -f env.yml on the other.
Have the same problem, had to manually remove the dependencies causing the error from the env file.
ResolvePackageNotFound:
Exported the env from macOS and tried to import on Linux
+1
Exactly the same problem as @gabegm, even though I used
conda env export --no-builds > environment.yaml
Had to manually remove the dependencies
libcxxabi=4.0.1
libcxx=4.0.1
which should be handled more gracefully by conda.
Same problem
conda env create --force -f=/Users/ryanrunchey/Downloads/gcloud_ryanrunchey_starling_conda_env_base_2018-07-25/environment.yml -n gcloud_starling_base
Using Anaconda API: https://api.anaconda.org Fetching package metadata .............
ResolvePackageNotFound:
Same problem on ubuntu
conda.exceptions.ResolvePackageNotFound:
@howie, you cannot share env files between OSes; your env file has come from macOS here.
@leemengtaiwan, you said:
which should be handled more gracefully by conda.
There's nothing conda can do about this, environment files are there to explicitly lock the exact package set. Now that's going to differ across OSes for obvious reasons.
My suggestion is that you generate an env file on each OS and then update your tooling to use the appropriate one given the current OS.
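That tooling could be a small wrapper that picks the right file. A minimal sketch, assuming one lock file per OS; the filenames and the lock_file_for helper are hypothetical, not part of conda:

```python
# Hypothetical helper: map the current OS to its per-OS lock file so a
# wrapper script can pass the right file to `conda env create -f ...`.
import platform

LOCK_FILES = {
    'Linux': 'environment.linux.lock.yml',
    'Darwin': 'environment.osx.lock.yml',
}

def lock_file_for(system=None):
    """Return the lock file for the given (or current) OS, or None."""
    return LOCK_FILES.get(system or platform.system())
```

A wrapper script could then call conda env create -f with the returned filename, or fail loudly on an OS it has no lock file for.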
@mingwandroid thanks for your information 😂
@mingwandroid Is there a way to get conda to ignore the unresolved packages and proceed with the installation?
Same problem here.
ResolvePackageNotFound:
Things I've tried:
manually removed the above-mentioned packages from the .yml file
These steps seemed to do it, but I think it was only no. 3 that fixed the issue, as I tried the first two individually to no avail.
It appears this is what most people (including me) end up doing most of the time. conda should probably add a --ignore-missing flag to do this automatically.
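Until such a flag exists, the manual removal step can be scripted. A rough sketch: the list of macOS-only package names is an assumption based on the errors reported in this thread, and strip_platform_packages is a hypothetical helper, not a conda feature:

```python
# Drop known platform-specific specs from an exported environment.yml
# before creating the environment on another OS.
import re

# Packages reported in this thread as macOS-only (assumed list).
MAC_ONLY = {'appnope', 'libcxx', 'libcxxabi', 'libgfortran'}

def strip_platform_packages(yml_text, drop=MAC_ONLY):
    kept = []
    for line in yml_text.splitlines():
        # Match dependency lines like "  - libcxx=4.0.1" and pull the name.
        m = re.match(r'\s*-\s*([A-Za-z0-9_.-]+?)(?:[=<>].*)?\s*$', line)
        if m and m.group(1) in drop:
            continue
        kept.append(line)
    return '\n'.join(kept)
```

Run the exported file through this before conda env create on the target OS; anything else in the file passes through untouched.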
I came across this issue when trying to export my environment.yml and run on other systems. I was mainly interested in getting version numbers for my dependencies. So I wrote a quick and dirty script that exports the environment, picks the version numbers, and adds them to the environment.yml. This doesn't add any unexpected dependencies that cause inconsistencies on other systems.
#!/usr/bin/env python
import re
import subprocess
import sys

env_filepath = sys.argv[1]
with open(env_filepath, 'r') as f:
    env_yml = f.read()

# Name of the environment, taken from the "name:" line of the file.
env_name = re.search('^name: (.+)', env_yml, flags=re.MULTILINE).group(1)

# Export the resolved environment (without build strings) to look up versions.
env_export = subprocess.check_output(
    'conda env export --name ' + env_name + ' --no-builds', shell=True
).strip().decode('utf-8')

# Re-emit the original file, appending the pinned version to each spec
# that appears in the export (re.escape guards against regex metacharacters
# in the spec line).
for line in env_yml.split('\n'):
    if line:
        m = re.search(re.escape(line.strip()) + '(==?.+)', env_export,
                      flags=re.MULTILINE)
        if m:
            print(line + m.group(1))
        else:
            print(line)
    else:
        print()
Just pass your environment.yml path to the script. Hope this helps someone.
Getting same error while creating conda environment
ResolvePackageNotFound:
+1 on the --ignore-missing suggestion. Please implement this. I can manually drop the incorrect libraries out of the generated file, but this doesn't protect me from changes that can happen in future packaging in conda and conda-forge.
Still relevant. Would be fantastic if we could export conda environments with an option to minimally specify user-installed packages, as an extra flag.
Same problem. Export on macOS, import on Ubuntu: libcxx and libcxxabi.
Same problem. libssh2
I would also be interested in a solution that allows to properly move envs around by just creating a yml.
For me it is:
ResolvePackageNotFound:
- libcxxabi=4.0.1
- libcxx=4.0.1
- libgfortran=3.0.1
- appnope=0.1.0
Same problem; had to manually remove the packages causing errors.
I had the same problem with libcxx and libcxxabi. I am also running macOS and want to use the environment file on a Linux machine.
I circumvented the problem by starting a Docker image that runs Linux/conda:
docker run -it continuumio/miniconda3:latest
creating my conda env there from scratch, and exporting it:
conda env export --no-builds > environment_linux.yml
Then I copied the file to my macOS machine for further use.
Same problem.
Getting same error while creating conda environment
ResolvePackageNotFound:
libcxx
libcxxabi
I'm using macOS for development and Linux for production. I exported the conda env on Mac, tried to create the same env on Linux, and it shows these errors.
The reason is that libcxx and libcxxabi are macOS-only packages; they are not in the Linux channel. I think we should think of a solution to deal with platform-specific packages.
It's been years that folks have wanted this issue to be solved, and I'm one of them. While being a fan of miniconda, I see other package managers (like poetry?) deal with the cross-platform problem well.
A Conda running on Linux can be forced to use macOS packages when resolving environments. I use this to create proper environment.linux.lock.yml and environment.osx.lock.yml files from a single script, without access to a Mac.
The trick is to temporarily add this to ~/.condarc:
subdir: osx-64
subdirs:
- osx-64
- noarch
With this, Conda will happily create macOS environments even on Linux. These will not be functional, of course, but can then be exported with conda env export to create an environment file that works on macOS. Using --no-builds isn't necessary in this step.
I believe this works also in the other direction, that is, creating Linux environments on macOS.
Here’s a cleaned-up version of the script I’m using: https://gist.github.com/marcelm/32dc00cb5e4018670294d2508883402d
This is slightly hackish of course, but may be helpful until we get a proper solution.
Hello. Thank you for this script. How do I use it, as in your last statement "Just pass your environment.yml path to the script"?
Hi!
python your_code.py your_file.yml
where your_code.py is the name you gave to this script and your_file.yml is your environment file. sys.argv[1] points to the second entry of your command. You could also just replace sys.argv[1] with your .yml file path as a string and run the code.
I thought Anaconda was specifically doing this... I thought that the whole point of a yml file was to download EXACTLY what a program needs for its dependencies.
@PowerToThePeople111 @howie @Arslan-Zaidi @leemengtaiwan it seems that some precise versions of packages only exist for osx:
Hi there, thank you for your contribution to Conda!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed automatically if no further activity occurs.
If you would like this issue to remain open please:
Verify that you can still reproduce the issue in the latest version of Conda
Comment that the issue is still reproducible and include:
It would also be helpful to have the output of the following commands available:
conda info
conda config --show-sources
conda list --show-channel-urls
NOTE: If this issue was closed prematurely, please leave a comment and we will gladly reopen the issue.
In case this issue was originally about a project that is covered by the Anaconda issue tracker (e.g. Anaconda, Miniconda, packages built by Anaconda, Inc. like Anaconda Navigator etc), please reopen the issue there again.
Thanks!
Hi again!
This issue has been closed since it has not had recent activity. Please don't hesitate to leave a comment if that was done prematurely.
Thank you for your contribution.
is there really no solution to this? We just have to manually remove these packages?
I have the same issue; beginning to think the environment.yml is nothing more than a myth to give you the illusion of having more portability.
Yeah, I've never been a fan of bots that close issues just because there haven't been any comments in a while. AFAIK this is still an issue. I only ever craft environment.yml files by hand these days, which only list explicit dependencies. This is an OK approach but doesn't really allow for true reproducibility.
is there really no solution to this? We just have to manually remove these packages?
Seems we have to remove these packages manually to get the yml file to work. I tried moving these packages from dependencies to the pip section, and the problem was not solved!
When trying to create a new environment from an environment YAML file that was created on one operating system, I get errors such as the following when running on another OS:
The problem is that as soon as I comment the offending package out (since it is likely not a hard requirement, but rather just something that happened to be installed by whomever created the environment file), I just get another similar error for another package.
Considering it can take 10s of seconds before getting this error, it would be especially helpful to be presented with a list of all packages that can't be found instead of having to hunt them one at a time.