Closed ericdill closed 4 years ago
Ok ignore ALL of this so far. User error :facepalm: :disappointed_relieved:
In my config, I had `"bucket": "s3://BUCKET_NAME"`. So what I'll do instead is strip off the `s3://` on S3ContentsManager initialization.

lololol, this looks ok to me anyway. A little better UX.
Ok. Now the logs show this when the novice user (i.e., me) provides `bucket="s3://BUCKET"` in the jupyter config:
```
[D 18:55:29.399 NotebookApp] s3manager._validate_bucket: User provided bucket: s3://BUCKET
[W 18:55:29.400 NotebookApp] s3manager._validate_bucket: Assuming you meant BUCKET for your bucket. Using that. Please set bucket=BUCKET in your jupyter_notebook_config.py file
```
Lol. `make check` and `make fmt` don't agree:
```
$ ~/dev/s3contents make check
./s3contents/s3manager.py:84:65: W291 trailing whitespace
./s3contents/s3manager.py:91:10: W291 trailing whitespace
./s3contents/s3manager.py:93:1: W293 blank line contains whitespace
./s3contents/s3manager.py:98:1: W293 blank line contains whitespace
make: *** [Makefile:61: check] Error 1
$ ~/dev/s3contents make fmt
Skipped 1 files
All done! ✨ 🍰 ✨
16 files left unchanged.
```
Only the one line really "matters"; everything else was to aid in debugging what was going on.
So what am I fixing here? Note two things: (1) the timestamps are all the default timestamp of 50 years ago, and (2) it thinks my s3 directory is a file, so if I click on it, jupyter notebook vomits up the "404 : Not Found. You are requesting a page that does not exist!" page.
Clearly that's not what we want. Doing a bit of debugging in the logs, I see the following being spit out:
Pulling out some of the interesting lines, we see the following:
So it seems that sometimes we're getting the directory correct and sometimes we're also doubling up on the bucket/key. The additional logging in `unprefix` is what helps to explain what's going on. Here we see that `unprefix` is not working correctly, as it's returning the following: `s3://BUCKET/KEY/BUCKET/KEY`
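The doubling can be reproduced with a tiny sketch. This is my guess at the mismatch, not the real s3contents implementation: the S3 listing yields paths without the `s3://` scheme, while the prefix that `unprefix` compares against includes it, so the prefix never matches and a later re-join duplicates `BUCKET/KEY`. The `unprefix` body here is a hypothetical stand-in.

```python
# Hypothetical reconstruction of the mismatch: the prefix carries the
# scheme, but the listed paths do not.
prefix = "s3://BUCKET/KEY"            # what get_prefix() hands back
listed = "BUCKET/KEY/notebook.ipynb"  # what the S3 listing actually yields


def unprefix(path: str, prefix: str) -> str:
    # Strip the prefix when the path starts with it; otherwise no-op.
    return path[len(prefix):].lstrip("/") if path.startswith(prefix) else path


relative = unprefix(listed, prefix)   # prefix never matches -> path unchanged
rejoined = f"{prefix}/{relative}"     # -> "s3://BUCKET/KEY/BUCKET/KEY/notebook.ipynb"
```

That rejoined value has exactly the `s3://BUCKET/KEY/BUCKET/KEY` shape the extra logging showed.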
So on to my hacky fix. If I strip the `s3://` from the return of `get_prefix`, then everything seems to return to normal and we get the correct timestamps and the s3 directories are properly clickable and act as folders. Is this good enough? Are there side effects that I'm not thinking of by stripping off `s3://` from `get_prefix` so that `unprefix` works properly?
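The fix I'm describing amounts to something like this sketch; again, the function bodies are assumptions about what the real `get_prefix`/`unprefix` do, kept only detailed enough to show why the scheme-free prefix makes the round trip work.

```python
def get_prefix(bucket: str, base_path: str = "") -> str:
    # Hacky fix sketch: return the prefix WITHOUT the "s3://" scheme so
    # it matches the scheme-less paths the S3 listing returns.
    prefix = bucket
    if base_path:
        prefix = prefix + "/" + base_path.strip("/")
    return prefix


def unprefix(path: str, prefix: str) -> str:
    # Strip the prefix when the path starts with it; otherwise no-op.
    return path[len(prefix):].lstrip("/") if path.startswith(prefix) else path


# With the scheme stripped, the prefix now matches the listed paths and
# unprefix() yields the expected relative path.
relative = unprefix("BUCKET/KEY/notebook.ipynb", get_prefix("BUCKET", "KEY"))
```

One side effect worth checking: anything that relied on the old scheme-ful return value (e.g. code that passed the prefix straight to an S3 client expecting a full URI) would now need to add `s3://` back itself.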