rxjx opened this issue 3 years ago
We try to stick to whatever Colab provides, which at this point means Python 3.7 (that way you can use their bundled libraries, most of the time).
You can override this to an extent, but it introduces a lot of complications (how to launch the kernel, etc). Is there a specific reason you need 3.8?
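For context, the standard quickstart that sticks to Colab's own Python is simply:
!pip install -q condacolab
import condacolab
condacolab.install()  # installs a conda distribution built for the same Python that Colab runs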
I need 3.8 for consistency with other work outside of Colab. I definitely understand the complications this introduces, but I was hoping this was a common case for other folks.
I see. This is the line that pins to "whatever Colab is running". Everything else after that will refer to 3.7 right now.
I guess you can fork the repo, change that line to 3.8 and make sure you provide an Anaconda installer based on Python 3.8.
Something like this:
# This is your fork, with a `py38` branch changing that line mentioned above
!pip install https://github.com/rxjx/condacolab/archive/py38.tar.gz
import condacolab
condacolab.install_from_url("anaconda/url/for/python/3.8")
Cross your fingers and let's see what happens! If it ends up being easier than anticipated, we can add an option to change the default behaviour for advanced users.
I tried it with miniconda:
!pip install https://github.com/rxjx/condacolab/archive/refs/heads/main.zip
import condacolab
condacolab.install_from_url("https://repo.anaconda.com/miniconda/Miniconda3-py38_4.10.3-Linux-x86_64.sh")
which seemed to work but after the restart condacolab.check() gave me this:
AssertionError Traceback (most recent call last)
Ah, looks like line 296 also needed to be changed.
OK, this seems to sort of work but I have to make every cell a bash cell else Colab seems to revert to its default python 3.7. I'm thinking some PATH magic is missing. Here's what I have to do to see python 3.8:
%%bash
python
import sys
print(sys.version)
print(sys.executable)
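For comparison, running the same check in a regular (non-bash) cell shows which interpreter the kernel itself is using, which is what stays on 3.7 unless PATH and PYTHONPATH get patched and the kernel is restarted:
import sys
print(sys.executable)  # the interpreter the Colab kernel itself was launched with
print(sys.version)
print([p for p in sys.path if "site-packages" in p])  # which site-packages the kernel sees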
I guess this is becoming somewhat important now that many projects are dropping support for Python 3.7.
Now it's a race between Google and condacolab. Who'll update first? 🤣
I would personally like Google to get on with it ;) I was actually quite impressed that they are on 3.7. They used to run Colab on 2.7.
They only updated from 3.6 to 3.7 in March '21. According to NEP 29, NumPy officially dropped 3.6 in June '20. So... 9 months after? Maybe we can expect Google to drop 3.7 in September '22, but of course this is extrapolation and potentially useless. Right now they are still trying to fix some stuff with Ubuntu 20, so maybe that brings Python 3.8 too.
Is there a post where we can track the progress Google is making on this?
FWIW, NEP 29 came pretty close to not being followed by NumPy and SciPy themselves. Now they seem to be following it quite well, so we can hope the community at large will follow suit!
Isn't it possible to make the notebook use an arbitrary kernel? I have seen Colab notebooks for languages such as Julia.
It would be best if we could just create a kernel from a conda env and then force the notebook to use that. Using /usr/local is buggy anyway.
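As a rough sketch of that idea (the env name, the /opt/conda prefix, and whether Colab's frontend would actually let you select the new kernelspec are all assumptions, not something condacolab does today):
# Hypothetical: build an env with the Python you want plus ipykernel,
# then register it as a Jupyter kernelspec the notebook server can see.
!conda create -yq -n py38 python=3.8 ipykernel
!/opt/conda/envs/py38/bin/python -m ipykernel install --prefix=/usr/local --name py38 --display-name "Python 3.8 (conda)"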
It would be really nice to run Python 3.8 in Colab. Snowflake's Snowpark connector has required Python 3.8 since June 2022. I am interested in any solution. More generally, the best thing would be to be able to choose your Python version: 3.8, 3.9, 3.10...
This is being worked on in #31.
I can't wait for this upgrade.
Very soon! :)
OK, let's see how it goes with #31 now on main.
You should be able to do something like:
!pip install -q https://github.com/conda-incubator/condacolab/archive/main.tar.gz
import condacolab
condacolab.install_from_url("https://github.com/conda-forge/miniforge/releases/download/4.14.0-0/Mambaforge-4.14.0-0-Linux-x86_64.sh")
This will bring Python 3.10 (as it's the default on the latest Mambaforge), but you can technically use any other Miniconda-like installer in that URL, possibly one that brings Python 3.8 by default.
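For example, to land on Python 3.8 instead, you could point it at a py38 Miniconda installer (the URL below is the one used earlier in this thread; newer py38 builds may exist on repo.anaconda.com):
!pip install -q https://github.com/conda-incubator/condacolab/archive/main.tar.gz
import condacolab
condacolab.install_from_url("https://repo.anaconda.com/miniconda/Miniconda3-py38_4.10.3-Linux-x86_64.sh")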
BIG BIG thanks to @ssurbhi560 for doing the heavy lifting here!
Thank you very much. It works like a charm 🚀 I can now update MetPy to the latest version (which requires Python >= 3.8). https://colab.research.google.com/drive/1Q1Bf7c7METigE7Bg5FhxdB33cXi470mM?usp=sharing
But I still have a problem when mounting Google Drive:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/opt/conda/lib/python3.10/site-packages/google/colab/drive.py in mount(mountpoint, force_remount, timeout_ms)
    178         ': timeout during initial read of root folder; for more info: '
    179         'https://research.google.com/colaboratory/faq.html#drive-timeout')
--> 180     raise ValueError('mount failed' + extra_reason)
    181   elif case == 2:
    182     # Not already authorized, so do the authorization dance.

ValueError: mount failed
And condacolab.check() shows this error message:
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
/opt/conda/lib/python3.10/site-packages/condacolab.py in check(prefix, verbose)
    296     pymaj, pymin = sys.version_info[:2]
    297     sitepackages = f"{prefix}/lib/python{pymaj}.{pymin}/site-packages"
--> 298     assert sitepackages in sys.path, f"💥💔💥 PYTHONPATH was not patched! Value: {sys.path}"
    299     assert (
    300         f"{prefix}/bin" in os.environ["PATH"]

AssertionError: 💥💔💥 PYTHONPATH was not patched! Value: ['/opt/conda/lib/python3.7/site-packages', '/content', '/opt/conda/lib/python310.zip', '/opt/conda/lib/python3.10', '/opt/conda/lib/python3.10/lib-dynload', '', '/opt/conda/lib/python3.10/site-packages', '/opt/conda/lib/python3.10/site-packages/IPython/extensions', '/root/.ipython']
For now, I can work around it by mounting Drive before updating the Python version.
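In notebook form, the workaround ordering looks like this (installer URL copied from above; the only point is that drive.mount runs before the kernel is swapped):
# Cell 1: mount Drive while the stock Colab kernel is still running
from google.colab import drive
drive.mount("/content/drive")

# Cell 2: then install condacolab from main and switch the Python version
!pip install -q https://github.com/conda-incubator/condacolab/archive/main.tar.gz
import condacolab
condacolab.install_from_url("https://github.com/conda-forge/miniforge/releases/download/4.14.0-0/Mambaforge-4.14.0-0-Linux-x86_64.sh")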
Good point. I think we might want to write the logs elsewhere @ssurbhi560, possibly the default location for system logs.
You should be able to do something like:
!pip install -q https://github.com/conda-incubator/condacolab/archive/main.tar.gz
import condacolab
condacolab.install_from_url("https://github.com/conda-forge/miniforge/releases/download/4.14.0-0/Mambaforge-4.14.0-0-Linux-x86_64.sh")
When I use this exact method, !python --version now gives Python 3.10.6. But when I do !conda env update -n base -f env.yml, where the env.yml file specifies python >= 3.8, I still get an error because of the pinned specs.
More concretely, this is the env.yml file used: https://github.com/NVlabs/stylegan3/blob/main/environment.yml
And this is the complete output:
Solving environment: \
WARNING conda.core.solve:_add_specs(648): pinned spec python=3.7 conflicts with explicit specs. Overriding pinned spec
WARNING conda.core.solve:_add_specs(648): pinned spec cudatoolkit=11.2 conflicts with explicit specs. Overriding pinned spec
failed
SpecsConfigurationConflictError: Requested specs conflict with configured specs.
requested specs:
- click[version='>=8.0']
- cudatoolkit=11.1
- imageio=2.9.0
- matplotlib=3.4.2
- ninja=1.10.2
- numpy[version='>=1.20']
- pillow=8.3.1
- pip
- python[version='>=3.8']
- pytorch=1.9.1
- requests=2.26.0
- scipy=1.7.1
- tqdm=4.62.2
pinned specs:
- python_abi=3.7[build=*cp37*]
Use 'conda config --show-sources' to look for 'pinned_specs' and 'track_features'
configuration parameters. Pinned specs may also be defined in the file
/usr/local/conda-meta/pinned.
So, can this method be used with conda env update -n base? Sorry if this question is in the wrong thread.
Thanks!
Hey @deklesen, hello, thanks for the report!
I think you are still using the latest condacolab release on PyPI, given that the prefix of the installation seems to be /usr/local/ (I am looking at the last line in the error log).
The version available on main right now has a different mechanism (thanks to @ssurbhi560!) that doesn't pin to the ABIs anymore. I think that should help alleviate your issue! An even better mechanism will be provided by #38.
We might still pin cudatoolkit to whatever colab is using to maximize compatibility (11.2 right now), but your env file wants 11.1. We are open to hearing your thoughts about this restriction! (Compatibility happens at the driver level, which should be recent enough so maybe the pinning should just be based on that?).
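If you need a stopgap while still on the PyPI release, one possible workaround (just a sketch, not something condacolab supports officially) is to inspect the pin file the error points at and, if you accept the compatibility risk, loosen it:
# Inspect where the pinned specs come from (path taken from the error above)
!cat /usr/local/conda-meta/pinned
!conda config --show-sources

# Optionally drop the python pins from that file (verify the exact line format first;
# this pattern is only an assumption about what the file contains)
!sed -i '/^python/d' /usr/local/conda-meta/pinned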
I am having some issues with installing a yml file using this approach (hope it's fine if I just append this here).
I am trying to set up the environment for https://github.com/Virtsionis/torch-nilm, which is provided at that link.
Before I found condacolab and this issue, I used this approach:
! wget -O miniconda.sh https://repo.anaconda.com/miniconda/Miniconda3-py37_4.10.3-Linux-x86_64.sh
! chmod +x miniconda.sh
! bash ./miniconda.sh -b -f -p /usr/local
! rm miniconda.sh
! conda config --add channels conda-forge
! conda install -y mamba
! mamba update -qy --all
! mamba clean -qafy
import sys
sys.path.append('/usr/local/lib/python3.7/site-packages/')
and then creating the environment like this:
!mamba env create -f drive/MyDrive/torch-nilm.yml
I could not create the environment using the current pip release of condacolab, as the environment requires Python 3.8. I then attempted creating it as deklesen suggests, and condacolab installs fine. But when I get to creating the environment, it gets stuck (just loading forever) after:
Looking for: ['_libgcc_mutex==0.1=conda_forge', ..., 'zipp=3.4.1', 'zlib=1.2.11']
Pinned packages:
- cudatoolkit 11.2.*
Any ideas on what could be the issue here? cudatoolkit is not part of the yml file, so I don't see it being the culprit here...
Is there a way to use condacolab to switch to Python 3.8 (or any other version)? I tried !conda install -c anaconda python=3.8 but that resulted in:
✨🍰✨ Everything looks OK!
Collecting package metadata (current_repodata.json): done
Solving environment: | WARNING conda.core.solve:_add_specs(611): pinned spec python=3.7 conflicts with explicit specs. Overriding pinned spec
failed with initial frozen solve. Retrying with flexible solve.
Solving environment: / WARNING conda.core.solve:_add_specs(611): pinned spec python=3.7 conflicts with explicit specs. Overriding pinned spec
failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: / WARNING conda.core.solve:_add_specs(611): pinned spec python=3.7 conflicts with explicit specs. Overriding pinned spec
failed with initial frozen solve. Retrying with flexible solve.
Solving environment: | WARNING conda.core.solve:_add_specs(611): pinned spec python=3.7 conflicts with explicit specs. Overriding pinned spec
failed
SpecsConfigurationConflictError: Requested specs conflict with configured specs.
requested specs: