dfeich closed this issue 8 months ago
I posted the same issue earlier today - can I ask what version of jupyterlab you are running?
FYI, I have hacked together a way of making this work in the meantime.
Create /kernelwrapper.sh:
#!/bin/bash
source activate "$1" && shift && exec "$@"
chmod +x /kernelwrapper.sh
then edit "/usr/local/share/jupyter/kernels/py37/kernel.json"
(or wherever else your kernel is installed):
from:
{
"argv": [
"/home/ubuntu/miniconda/envs/py37/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "Python 3.7",
"language": "python"
}
to:
{
"argv": ["/kernelwrapper.sh", "py37",
"/home/ubuntu/miniconda/envs/py37/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "Python 3.7",
"language": "python"
}
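The wrapper's argument handling can be exercised without conda (all paths here are made up, and the `source activate` line is replaced with a harmless assignment so the demo runs anywhere): the first argument names the environment, and everything after it is exec'd as the kernel command.

```shell
# Hypothetical demo of the wrapper's argument forwarding; "source activate"
# is swapped for a plain assignment so this runs without conda installed.
cat > /tmp/kernelwrapper_demo.sh <<'EOF'
#!/bin/bash
env_name="$1" && shift && exec "$@"
EOF
chmod +x /tmp/kernelwrapper_demo.sh
# Jupyter would substitute the real connection file; echo stands in for python.
/tmp/kernelwrapper_demo.sh py37 echo -f "{connection_file}"
# prints: -f {connection_file}
```

This shows why the env name can simply be prepended to the existing argv in kernel.json: `shift` drops it before the real kernel command is exec'd.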
I'm running on a current development version of jupyterhub.
Thanks for the idea with the workaround. I will test it. I was a bit surprised that nb_conda_kernels
does not use some similar wrapper that relies on the standard conda activate myenv
to set up the environments. I have not looked deeply into the code, but they probably had reasons for that. However, this then becomes tougher to maintain, since it requires duplicating all of these mechanisms in their code.
Yes, it's not ideal. I think nb_conda_kernels
is supposed to work in the same way (see here), but I guess whatever lets jupyterlab pick that up has become broken.
Hey all
I have tracked down the source of the error, I think. It happens when jupyterlab is not installed in the base environment. In that case, nb_conda_kernels
does not use the special runner to execute the python kernel. This can be seen by opening the development tools in the web browser and inspecting the kernelspecs
request. In the response below, you can see that for the environment lab_conda
, from which jupyterlab is executed, the command is the traditional one and not the modified one used for the jlab
environment in the example:
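For illustration, the two command shapes in such a kernelspecs response might be told apart like this (the spec contents and paths below are made up; only the nb_conda_kernels.runner module name is taken from the project):

```python
import json

# Hypothetical kernelspecs as they might appear in the /api/kernelspecs
# response: one with the traditional argv, one routed through the runner.
plain = {"argv": ["/opt/conda/envs/lab_conda/bin/python",
                  "-m", "ipykernel_launcher", "-f", "{connection_file}"]}
wrapped = {"argv": ["/opt/conda/bin/python", "-m", "nb_conda_kernels.runner",
                    "/opt/conda", "/opt/conda/envs/jlab",
                    "/opt/conda/envs/jlab/bin/python",
                    "-m", "ipykernel_launcher", "-f", "{connection_file}"]}

def uses_runner(spec):
    """Heuristic check for the modified (runner-based) command shape."""
    return "nb_conda_kernels.runner" in spec["argv"]

assert not uses_runner(plain)   # traditional argv: activate.d hooks skipped
assert uses_runner(wrapped)     # runner argv: env is activated first
```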
@mcg1969 in light of the previous comment, do you have any suggestion for a fix?
The problem comes from: https://github.com/Anaconda-Platform/nb_conda_kernels/blob/f014e601115e652938dfac926a7e1698733e5ea1/nb_conda_kernels/manager.py#L193
I think I see what you're saying. The intent of course is to skip activation if the kernel environment is the same as the nb_conda_kernels
environment. But it's not clear to me why the line you've marked is wrong? After all, if I were to remove that test, it would use the runner even for kernels that are in the jupyterlab environment, wouldn't it?
@mcg1969 thx for the reply
But it's not clear to me why the line you've marked is wrong?
True, I was too assertive in my statement, sorry about that. I think the trouble comes from that line.
After all, if I were to remove that test, it would use the runner even for kernels that are in the jupyterlab environment, wouldn't it?
Yes.
If I try to sum up the current implementation:
From the reported error and my own tests, the second case is not working as expected. So do you think using the runner all the time is OK, or does that have some drawback? And what about the base environment?
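As a rough sketch (not the actual nb_conda_kernels code), the check being discussed amounts to comparing the kernel's environment prefix with the prefix the server is running from:

```python
import os
import sys

def needs_activation(kernel_prefix, server_prefix=sys.prefix):
    """Sketch of the test discussed above: use the activation runner only
    when the kernel lives in a different conda environment than the one
    the Jupyter server is running from."""
    return os.path.realpath(kernel_prefix) != os.path.realpath(server_prefix)

# The reported breakage corresponds to this returning False for an
# environment whose activate.d hooks still need to run.
print(needs_activation(sys.prefix))              # False: same env, runner skipped
print(needs_activation("/opt/conda/envs/jlab"))  # hypothetical other env
```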
This assumes that the environment has been activated prior to the execution of nb/lab and that the context is passed to the kernel subprocess.
Yes, that is the assumption. Why would that assumption be false? If this is the environment from which the jupyter
command is executed, it should already be activated, so there is no need to re-activate. The behavior should be identical to the case where nb_conda_kernels
is not present.
second case is not working as expected
But the reported problem at the top of this thread is not, by itself, a problem. The activate scripts should not need to be run if the environment does not need changing.
Technically, running the script every time should not pose a problem, but I am reluctant to check in a change that is either unnecessary or masks a different problem we should be fixing instead.
Thanks for the reply
Technically, running the script every time should not pose a problem, but I am reluctant to check in a change that is either unnecessary or masks a different problem we should be fixing instead.
Makes sense. I'll try to find time to look into this a bit more.
@fcollonval I'm inclined to close this as a wontfix, since it is old, and we're not clear if it's something that should be fixed. I know you've LONG since moved on to better things but if you'd like to comment/disagree I'm open to it
@mcg1969 thanks for the ping. I'm fine with closing this as won't fix.
Thanks for providing this plugin!
The activation/deactivation scripts that can be put into
${CONDA_PREFIX}/etc/conda/{de}activate.d/
do not get triggered when I switch to a kernel in jupyterhub. We are using nb_conda_kernels 2.2.2. The activation scripts are correctly invoked when I do a
conda activate myenv
on the command line. The only workaround I can currently think of is that users have to preload the missing parts of the environment before their jupyter single-user servers are started by the hub. But this makes the whole kernel switching redundant, because it essentially prevents using another conda env that needs a different environment. We want to use the activation hook system to combine the use of environment modules with conda envs.
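For context, a minimal activation hook looks like the sketch below (the directory layout follows conda's hook convention; the variable name and the scratch prefix are made up for the demo, and in a real environment CONDA_PREFIX points at the env root):

```shell
# Demo in a scratch prefix instead of a real conda env.
prefix=$(mktemp -d)
mkdir -p "$prefix/etc/conda/activate.d"
cat > "$prefix/etc/conda/activate.d/load_modules.sh" <<'EOF'
# conda sources every *.sh file in activate.d on "conda activate";
# a real hook might run "module load ..." here instead.
export MY_EXTRA_VAR=from_activate_hook
EOF
# Simulate what "conda activate" would do:
. "$prefix/etc/conda/activate.d/load_modules.sh"
echo "$MY_EXTRA_VAR"
# prints: from_activate_hook
```

The issue above is that this sourcing step happens on the command line but not when nb_conda_kernels launches a kernel from jupyterhub.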