danielfrg / s3contents

Jupyter Notebooks in S3 - Jupyter Contents Manager implementation
Apache License 2.0

Can't create files or folder : "Cannot POST to files, use PUT instead." #62

Closed fabiencelier closed 4 years ago

fabiencelier commented 5 years ago

Hello,

Whenever I try to create a new notebook or folder, I get the following error message:

Error: Invalid response: 400 Bad Request

And the following logs:

[W 16:48:50.420 LabApp] 400 POST /api/contents/?1551455330316 (127.0.0.1): Cannot POST to files, use PUT instead.
[W 16:48:50.421 LabApp] Cannot POST to files, use PUT instead.
[W 16:48:50.422 LabApp] 400 POST /api/contents/?1551455330316 (127.0.0.1) 102.69ms referer=http://localhost:8888/lab

However, some operations work.

Any idea what might be the problem here?
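For context, here is a minimal sketch (with hypothetical stand-in names, not the real jupyter_server code) of the check behind this error. A POST to /api/contents/<path> creates a new untitled file *inside* a directory, while PUT saves to an exact path; the handler therefore asks the contents manager whether <path> is a directory before allowing POST, and answers with this exact 400 when it isn't.

```python
class FakeS3Manager:
    """Hypothetical stand-in for S3ContentsManager with a fixed set of
    known directories (the real manager queries the bucket)."""
    def __init__(self, dirs):
        self.dirs = set(dirs)

    def dir_exists(self, path):
        return path.strip("/") in self.dirs


def handle_post(manager, path):
    """Simplified version of the POST handler's directory check."""
    if not manager.dir_exists(path):
        # This is the 400 seen in the logs above.
        return (400, "Cannot POST to files, use PUT instead.")
    return (201, "created untitled file in " + (path or "/"))


# A healthy manager reports "" (the bucket root) as a directory:
ok = FakeS3Manager(dirs=[""])
print(handle_post(ok, ""))

# If dir_exists("") returns False -- e.g. the bucket or prefix is
# misconfigured -- every create in the root fails with this error:
broken = FakeS3Manager(dirs=[])
print(handle_post(broken, ""))
```

The practical implication: if even the root path is not recognized as a directory by the contents manager, every "new notebook/folder" action fails this way, which is why checking the bucket and credentials configuration is a good first step.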

Deepanshu2017 commented 5 years ago

@danielfrg I'm having the exact same issue with GCP. I tried to debug the code but I couldn't find any problem in the library. Could it be related to JupyterHub? This issue is driving me crazy.

danielfrg commented 5 years ago

Not sure, creating a folder works fine for me. Could you send me more info about the environment you have?

Deepanshu2017 commented 5 years ago

@danielfrg This is a new issue I'm facing. I'm not sure whether I should open a separate issue, but here it is.

I have a jupyterhub_config.py file and I'm trying to set up S3 storage for each user spawned by DockerSpawner. My DockerSpawner setup is working fine, i.e. it spawns each user in a separate container, but when I added the S3 lines nothing changed: it is still using the local volume.

Below is my jupyterhub_config.py file:

import os
from s3contents import S3ContentsManager

c = get_config()

c.NotebookApp.contents_manager_class = S3ContentsManager
c.S3ContentsManager.access_key_id = "KEYID"
c.S3ContentsManager.secret_access_key = "ACCESS_KEY"
c.S3ContentsManager.endpoint_url = "S3ENDPOINT"
c.S3ContentsManager.bucket = "BUCKET"

# We rely on environment variables to configure JupyterHub so that we
# avoid having to rebuild the JupyterHub container every time we change a
# configuration parameter.

# Spawn single-user servers as Docker containers
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
# Spawn containers from this image
c.DockerSpawner.container_image = os.environ['DOCKER_NOTEBOOK_IMAGE']
# JupyterHub requires a single-user instance of the Notebook server, so we
# default to using the `start-singleuser.sh` script included in the
# jupyter/docker-stacks *-notebook images as the Docker run command when
# spawning containers.  Optionally, you can override the Docker run command
# using the DOCKER_SPAWN_CMD environment variable.
spawn_cmd = os.environ.get('DOCKER_SPAWN_CMD', "start-singleuser.sh")
c.DockerSpawner.extra_create_kwargs.update({ 'command': spawn_cmd })
# Connect containers to this Docker network
network_name = os.environ['DOCKER_NETWORK_NAME']
c.DockerSpawner.use_internal_ip = True
c.DockerSpawner.network_name = network_name
# Pass the network name as argument to spawned containers
c.DockerSpawner.extra_host_config = { 'network_mode': network_name }
# Explicitly set notebook directory because we'll be mounting a host volume to
# it.  Most jupyter/docker-stacks *-notebook images run the Notebook server as
# user `jovyan`, and set the notebook directory to `/home/jovyan/work`.
# We follow the same convention.
notebook_dir = os.environ.get('DOCKER_NOTEBOOK_DIR') or '/home/jovyan/work'
c.DockerSpawner.notebook_dir = notebook_dir
# Mount the real user's Docker volume on the host to the notebook user's
# notebook directory in the container
c.DockerSpawner.volumes = { 'jupyterhub-user-{username}': notebook_dir }
# volume_driver is no longer a keyword argument to create_container()
# c.DockerSpawner.extra_create_kwargs.update({ 'volume_driver': 'local' })
# Remove containers once they are stopped
c.DockerSpawner.remove_containers = True
# For debugging arguments passed to spawned containers
c.DockerSpawner.debug = True

# User containers will access hub by container name on the Docker network
c.JupyterHub.hub_ip = 'jupyterhub'
c.JupyterHub.hub_port = 8080

# TLS config
c.JupyterHub.port = 443
c.JupyterHub.ssl_key = os.environ['SSL_KEY']
c.JupyterHub.ssl_cert = os.environ['SSL_CERT']

# Authenticate users with GitHub OAuth
c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'
c.GitHubOAuthenticator.oauth_callback_url = os.environ['OAUTH_CALLBACK_URL']

# Persist hub data on volume mounted inside container
data_dir = os.environ.get('DATA_VOLUME_CONTAINER', '/data')

c.JupyterHub.cookie_secret_file = os.path.join(data_dir,
    'jupyterhub_cookie_secret')

c.JupyterHub.db_url = 'postgresql://postgres:{password}@{host}/{db}'.format(
    host=os.environ['POSTGRES_HOST'],
    password=os.environ['POSTGRES_PASSWORD'],
    db=os.environ['POSTGRES_DB'],
)

# Whitelist users and admins
c.Authenticator.whitelist = whitelist = set()
c.Authenticator.admin_users = admin = set()
c.JupyterHub.admin_access = True
pwd = os.path.dirname(__file__)
with open(os.path.join(pwd, 'userlist')) as f:
    for line in f:
        # lines from a file iterator keep their trailing newline,
        # so strip before checking for blank lines
        if not line.strip():
            continue
        parts = line.split()
        # in case of newline at the end of userlist file
        if len(parts) >= 1:
            name = parts[0]
            whitelist.add(name)
            if len(parts) > 1 and parts[1] == 'admin':
                admin.add(name)
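One likely reason nothing changed: jupyterhub_config.py configures the hub process itself, while c.NotebookApp.* settings apply to the single-user notebook server, which here runs inside the spawned Docker container and never reads this file. A minimal sketch of where those lines would need to live instead, assuming the same placeholder credentials, is a jupyter_notebook_config.py baked into (or mounted in) the DOCKER_NOTEBOOK_IMAGE:

```python
# jupyter_notebook_config.py -- read by the single-user server
# inside the spawned container (placeholder values, adjust for your image)
from s3contents import S3ContentsManager

c = get_config()

c.NotebookApp.contents_manager_class = S3ContentsManager
c.S3ContentsManager.access_key_id = "KEYID"
c.S3ContentsManager.secret_access_key = "ACCESS_KEY"
c.S3ContentsManager.endpoint_url = "S3ENDPOINT"
c.S3ContentsManager.bucket = "BUCKET"
```

With the contents manager configured in the single-user server, the c.DockerSpawner.volumes mount becomes unnecessary for notebook persistence, since files live in the bucket rather than on a local volume.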

I'm using DockerSpawner with GitHub authentication and trying to persist the storage with s3contents. Could you please help me with this? @danielfrg Thanks