Open ghost opened 5 years ago
And when I run it as Administrator, nothing goes wrong.
I suspect you're running into this issue: #4907. Could you please update jupyter_client to 5.3.4 and jupyter_core to 4.6.0 and try again?
FYI, I can reproduce the error in JupyterLab on Windows (my students can as well), and the Docker image already has jupyter_client 5.3.4 and jupyter_core 4.6.0. The (large) image we use is available at vnijs/rsm-jupyterhub. I'm now trying your suggestion in the #4907 thread to downgrade jupyter_client to 5.3.1.
I'm sure 5.3.1 will work, but it doesn't have the appropriate security in place. I am currently pulling vnijs/rsm-jupyterhub:latest from Docker Hub and see that it is 34 minutes old at the time I write this (13:36 PST). Please let me know if there's a different image that contains jupyter_client 5.3.4 and jupyter_core 4.6.0. Thanks.
Sorry about that @kevin-bates. I needed to upload a working version for students ASAP. I will build a new one for you and post back when ready (and tested to confirm it fails on Windows).
Ok - no rush - that fact is consistent with what I'm seeing in the couple images I have pulled.
I wonder if this has something to do with you using a linux-based image on a Windows file-system. The changes @MSeal made that are coming into play now have specific changes for both Windows and Linux and I wonder if this combination of usage is presenting problems.
Once you go back to using jupyter_client 5.3.4, it might be an interesting experiment to try running with the following added to the command line: `-e JUPYTER_RUNTIME_DIR=/tmp`, so that the kernel connection file is written to /tmp (and no "os boundaries" are crossed).
I ran with the following and confirmed the kernel's connection file (`kernel-<kernel-id>.json`) to be in /tmp after running `!ls /tmp/kernel-*` in a cell.

```
docker run -it --rm -p 8888:8888 -e JUPYTER_RUNTIME_DIR=/tmp vnijs/rsm-jupyterhub:1.6.2 jupyter-lab --debug
```
Great. If you have a (smaller) image that replicates this, that would probably be easier. Thanks for the temporary fix of using 5.3.1!
Sorry, there may be a misunderstanding. I used your 1.6.2 image. I think it would be good to build a similar image with the newer jupyter_client installed and just point the runtime directory to /tmp so that the connection files are created on a "like" filesystem. Otherwise, the runtime directory is located in the mounted /home/jovyan volume and you're crossing "os boundaries". If you can confirm that `-e JUPYTER_RUNTIME_DIR=/tmp` enables working behavior (or bake that env into the image), then it will be 1) a workaround using the proper versions and 2) a data point that might help someone identify a means of fixing this (not to mention the problem is much better defined).
OK. I'm not sure when I'll get around to this, to be honest. I don't have a very good Windows machine to build containers on, and I don't want to upload to Docker Hub as that may mess things up for my students. Sorry. The relevant files are linked below in case someone wants to take a stab at this. The only thing that would need to be changed is the requirements.txt file, where jupyter-client should be set to 5.3.4.
https://github.com/radiant-rstats/docker/tree/master/rsm-msba
I've built an image with `jupyter_client==5.3.4` and `JUPYTER_RUNTIME_DIR=/tmp/runtime`. There are a few other files in the runtime dir, so I don't know if that will affect you, but I think it would be helpful if you could run this image on your problematic Windows server(s) to see if kernels can be started. You can find the image here.
I do indeed have JUPYTER_RUNTIME_DIR set to a directory in the user's home directory. That home directory is mapped to the user's home directory on the host OS. Is there a particular (dis)advantage to using a directory like /tmp or some other directory inside the container?
Thanks Vincent. What you're doing is typically fine. Where my curiosity lies is that the host OS is Windows, which differs from the container OS. I'm wondering if your permission issue is due to the hybrid nature of things, particularly since that portion of the code needs to perform more operating-system-specific operations on the filesystem.
Once you're able to try things again, it would be great if you could provide the permission denied traceback information as well (in the reproduction case), since we've enhanced the information it produces - as well as to confirm you're indeed seeing the same issue as the OP.
Circling back, @blankws - did moving to the newer versions help you? I don't suspect you're also running a Linux-based container on Windows, but need to ask. Thanks.
Also note that jupyter_core 4.6.1 is available; it would be best to use that, although I doubt the last fix - while in this very area of code - comes into play in this issue.
@kevin-bates I created a new image that has jupyter_core 4.6.1 and jupyter_client 5.3.4. You can get the image from docker hub @ vnijs/rsm-msba-update
I tried adding the below when launching, as suggested, but it doesn't seem to get picked up (i.e., the previous setting is still used). Traceback shown below:

`-e JUPYTER_RUNTIME_DIR=/tmp/runtime`
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tornado/web.py", line 1699, in _execute
    result = await result
  File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 742, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "/usr/local/lib/python3.6/dist-packages/notebook/services/sessions/handlers.py", line 72, in post
    type=mtype))
  File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 735, in run
    value = future.result()
  File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 742, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "/usr/local/lib/python3.6/dist-packages/notebook/services/sessions/sessionmanager.py", line 88, in create_session
    kernel_id = yield self.start_kernel_for_session(session_id, path, name, type, kernel_name)
  File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 735, in run
    value = future.result()
  File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 742, in run
    yielded = self.gen.throw(*exc_info)  # type: ignore
  File "/usr/local/lib/python3.6/dist-packages/notebook/services/sessions/sessionmanager.py", line 101, in start_kernel_for_session
    self.kernel_manager.start_kernel(path=kernel_path, kernel_name=kernel_name)
  File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 735, in run
    value = future.result()
  File "/usr/local/lib/python3.6/dist-packages/tornado/gen.py", line 209, in wrapper
    yielded = next(result)
  File "/usr/local/lib/python3.6/dist-packages/notebook/services/kernels/kernelmanager.py", line 168, in start_kernel
    super(MappingKernelManager, self).start_kernel(**kwargs)
  File "/usr/local/lib/python3.6/dist-packages/jupyter_client/multikernelmanager.py", line 110, in start_kernel
    km.start_kernel(**kwargs)
  File "/usr/local/lib/python3.6/dist-packages/jupyter_client/manager.py", line 240, in start_kernel
    self.write_connection_file()
  File "/usr/local/lib/python3.6/dist-packages/jupyter_client/connect.py", line 476, in write_connection_file
    kernel_name=self.kernel_name
  File "/usr/local/lib/python3.6/dist-packages/jupyter_client/connect.py", line 141, in write_connection_file
    with secure_write(fname) as f:
  File "/usr/lib/python3.6/contextlib.py", line 81, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.6/dist-packages/jupyter_core/paths.py", line 433, in secure_write
    .format(file=fname, permissions=oct(file_mode)))
RuntimeError: Permissions assignment failed for secure file: '/home/jovyan/.rsm-msba/share/jupyter/runtime/kernel-27214673-94be-4f68-9b83-85fe324f5c73.json'. Got '0o655' instead of '0o0600'
```
Thanks for the update - I know this is frustrating (it is for me as well).
Just to confirm: you're running the Docker image on a Windows host, and the target of the docker run command is jupyter-lab (and not jupyter-hub, from which you then launch lab instances) - is that correct? The reason I ask about hub is that if hub were being launched, it might not convey the env to the notebook/lab image.
If you use the image I provided, the `-e` is not required (although it's using jupyter_core 4.6.0 - but that shouldn't matter). I suppose another way to check this is to not provide the volume information - but I'm assuming you have configuration and data set information in each user's (Windows) home directory. Still, omitting that information and trying to start a kernel (i.e., create an empty notebook and successfully perform a cell operation) would provide the same kind of information. If successful, then I would venture this is related to a filesystem anomaly.
What your latest information is saying is that the kernel-xxx.json file is being created with read-execute bits for group and others, despite the fact that the creation mode of the file asks that they be off (0) and only read-write (6) be set for the file's owner. None of these permissions were enforced back in jupyter_client 5.3.1, which is why that downgrade "works".
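For reference, the enforcement that fails here can be sketched roughly as follows. This is a simplified, hypothetical rendition of what jupyter_core's `secure_write` checks, not its actual code:

```python
import os
from contextlib import contextmanager


@contextmanager
def secure_write_sketch(fname):
    # Create the file requesting owner-only read/write (0o600), then verify
    # that the filesystem actually honored that mode. On Docker mounts of
    # Windows directories the mode can silently come back as e.g. 0o655,
    # which is what the RuntimeError above is reporting.
    open_flag = os.O_CREAT | os.O_WRONLY | os.O_TRUNC
    with os.fdopen(os.open(fname, open_flag, 0o600), "w") as f:
        mode = os.stat(fname).st_mode & 0o777
        if mode != 0o600:
            raise RuntimeError(
                "Permissions assignment failed for secure file: "
                f"{fname!r}. Got {oct(mode)} instead of 0o600"
            )
        yield f
```

On a native Linux filesystem the mode check passes; on a filesystem that ignores POSIX permission bits, the `RuntimeError` fires before the connection file is ever used.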
If we can't come to a conclusion (or at least develop new clues), I will need to defer this to someone else. Sorry.
Students run the Docker image on Windows laptops. They use jupyter-lab; jupyter-hub is not involved. I clicked on the image link you provided and got the message shown in the screenshot below.
I tried removing the "-v" option that mounts the container home directory to the user's home directory on the host OS. At that point, starting a (Python) kernel worked fine! I wonder if this is related to https://github.com/docker/for-win/issues/445
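A quick way to test whether a given directory's filesystem honors chmod at all is a small probe like the one below. This is a hypothetical diagnostic, not part of Jupyter:

```python
import os
import stat
import tempfile


def chmod_honored(dirpath):
    # Create a temp file in dirpath, chmod it to owner-only read/write,
    # and report whether the filesystem actually kept that mode.
    # Docker mounts of Windows directories typically do not.
    fd, path = tempfile.mkstemp(dir=dirpath)
    try:
        os.close(fd)
        os.chmod(path, 0o600)
        return stat.S_IMODE(os.stat(path).st_mode) == 0o600
    finally:
        os.remove(path)
```

Run inside the container, `chmod_honored("/tmp")` should return True, while pointing it at the mounted /home/jovyan volume would return False if the mount is the culprit.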
Thanks for the update. I'm sorry about the link to my image - I just updated it to the non-cloud URL, so perhaps you'll have better luck (although it probably isn't necessary).
Nice find on the docker/windows issue - I suspect this may be another manifestation of that.
I'm not entirely sure what else winds up in `JUPYTER_RUNTIME_DIR`, but I would recommend the following: keep `JUPYTER_RUNTIME_DIR` baked into the image (as your repo does), but ensure it's set to an image-contained directory (like `/tmp/runtime`), not one under `/home/jovyan`. I believe the `secure_write` function is currently used only when writing the kernel connection file, and I don't expect there to be other cases of this. The connection files will be written properly since we'd now be pointing those writes to a Linux-based filesystem, and your current use of the data within the mounted volume should still be accessible.
Of course, I can't guarantee there aren't other "instances" of this filesystem issue elsewhere in the Jupyter stack, but I don't believe other areas are this stringent about permissions.
Thank you for your help and patience!
Sorry, I was busy finishing my university homework these last few days. My OS is Windows 10 1903. I used Scoop to install it, and that's when the error occurred. So I uninstalled it via Scoop and ran the Anaconda3 installer as Administrator. Everything is OK now.
The image mount issue does look like it would cause secure_write to correctly fail. From reading the many attached GitHub issues, I think it may be safe to say that the Windows Docker issue(s) are where this should be permanently solved, as they just don't allow chmod operations to succeed.
The only thing I could see us being able to do would be to add an insecure launch option that ignores permissions on files. Though I'd maybe be more inclined to instead just document that the pattern isn't supported due to unresolved OS/Docker-level issues.
Thanks @MSeal. Yeah, I don't think we should sacrifice security for this kind of edge case, especially since redirecting `JUPYTER_RUNTIME_DIR` to a "like" filesystem appears to be sufficient.
Besides the connection file, are there other callers of `secure_write()`, now or anticipated? (I didn't find any in a brief scan.)
There are no other callers at this time, nor any anticipated with current features, AFAIK.
@kevin-bates Are you sure this is an "edge case"? I expect Jupyter's Docker images are used a fair amount by Windows users. As soon as someone mounts the container home directory somewhere on the host OS, they would be in trouble, right?
I just tried it out, and the default runtime path for Jupyter is /home/jovyan/.local/share/jupyter/runtime, which, if mapped, would be on the user's host OS.
@vnijs - thanks for the question. "Edge case" was a bit optimistic now that I realize the runtime dir is a function of the Jupyter data dir, which defaults to somewhere in the user's home directory. So yes, users that mount their Windows directory to /home/jovyan can expect issues unless either the Jupyter data dir or the Jupyter runtime dir has been adjusted.
I still believe this particular "use-case" shouldn't warrant exposing sensitive information. After all, the entire reason @MSeal introduced these changes was to address a security issue. I'm sorry this has affected you and your students, but I think, given the duration this filesystem issue has existed, we're relegated to "workaround mode".
If the files in your users' Windows directories are used exclusively in their Notebook/Lab environment, then I suspect this could be worked around in a slightly different manner: mount the Windows user's directory to some other location in the container and add `--notebook-dir=<that location>` to the docker run command. This assumes the Windows user's home directory is where the notebook and data files reside, etc. Personally, I think it's best (and more intuitive) to point only the runtime dir to a local directory.
I suppose this could be something to bake directly into the jupyter-stack images, but I'd like to get @parente's opinion on this.
@kevin-bates I think I can make things work by pointing Jupyter's "runtime" path somewhere inside the container. I'm not sure what Jupyter's "data" path does. Are there likely to be security issues related to the "data" path that would require pointing that inside the container as well? It does seem possible to point the "runtime" path and the "data" path to different places (i.e., inside the container and on the user's host OS):
```
jovyan@mini ~ jupyter --paths
config:
    /home/jovyan/.rsm-msba/jupyter
    /usr/etc/jupyter
    /usr/local/etc/jupyter
    /etc/jupyter
data:
    /home/jovyan/.rsm-msba/share/jupyter
    /home/jovyan/.local/share/jupyter
    /usr/local/share/jupyter
    /usr/share/jupyter
runtime:
    /tmp/jupyter/runtime
```
Sounds good Vincent.
Per Matt's response and my general understanding of the server, I don't believe there are other locations where this kind of filesystem issue would come into play.
I found that every time I change the name of my PC, there is a kernel error. How can I deal with it? The error is:
```
Traceback (most recent call last):
  File "D:\Applications\Scoop\apps\anaconda3\2019.10\lib\site-packages\spyder\plugins\ipythonconsole.py", line 1572, in create_kernel_manager_and_kernel_client
    kernel_manager.start_kernel(stderr=stderr_handle)
  File "D:\Applications\Scoop\apps\anaconda3\2019.10\lib\site-packages\jupyter_client\manager.py", line 240, in start_kernel
    self.write_connection_file()
  File "D:\Applications\Scoop\apps\anaconda3\2019.10\lib\site-packages\jupyter_client\connect.py", line 547, in write_connection_file
    kernel_name=self.kernel_name
  File "D:\Applications\Scoop\apps\anaconda3\2019.10\lib\site-packages\jupyter_client\connect.py", line 212, in write_connection_file
    with secure_write(fname) as f:
  File "D:\Applications\Scoop\apps\anaconda3\2019.10\lib\contextlib.py", line 112, in __enter__
    return next(self.gen)
  File "D:\Applications\Scoop\apps\anaconda3\2019.10\lib\site-packages\jupyter_client\connect.py", line 102, in secure_write
    with os.fdopen(os.open(fname, open_flag, 0o600), mode) as f:
PermissionError: [Errno 13] Permission denied: 'C:\Users\blank\AppData\Roaming\jupyter\runtime\kernel�dd8fe4.json'
```

I reset my PC, and there is no problem as long as I don't change the name of the PC.