danielfrg / s3contents

Jupyter Notebooks in S3 - Jupyter Contents Manager implementation
Apache License 2.0

fixed: notebook upload that uses chunks #179

Closed: machero closed 1 year ago

machero commented 1 year ago

Fixed the large .ipynb file upload.

danielfrg commented 1 year ago

Thanks for your contribution!

Do you know why this is required for large notebook files?

machero commented 1 year ago

It seems like Jupyter splits large notebook files (1.5 MB+) into multiple chunks, and the function `_notebook_model_from_path` is used to load the notebook file model (save --> _save_large_file --> get --> _notebook_model_from_path).
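
For anyone else hitting this: when a notebook goes over the upload threshold, Jupyter's frontend sends it in pieces, adding a `chunk` field to the model (1, 2, ... and, as far as I know, -1 for the final piece), and the contents manager's `save()` is expected to assemble them. Below is a minimal, self-contained sketch of that flow, just to illustrate the idea; the class, the in-memory buffering, and the `_write_object` helper are made up and are not the actual s3contents or Jupyter code.

```python
class ChunkedSaveSketch:
    """Buffers the chunks Jupyter sends for a large upload and writes the
    assembled file once the final chunk (chunk == -1) arrives."""

    def __init__(self):
        self._chunk_buffers = {}  # path -> list of chunk strings received so far
        self._fake_store = {}     # stands in for the real S3 bucket (hypothetical)

    def save(self, model, path=""):
        chunk = model.get("chunk")  # 1, 2, ... for partial chunks, -1 for the last one
        if chunk is None:
            # Small file: the whole content arrives in a single request
            return self._write_object(path, model["content"])

        if chunk == 1:
            # First chunk: start a fresh buffer for this path
            self._chunk_buffers[path] = [model["content"]]
        else:
            self._chunk_buffers.setdefault(path, []).append(model["content"])

        if chunk == -1:
            # Final chunk: assemble everything, persist it, and drop the buffer.
            # A real contents manager would then reload the model via get(),
            # which for .ipynb paths goes through the notebook loader
            # mentioned above.
            content = "".join(self._chunk_buffers.pop(path))
            return self._write_object(path, content)

        # Intermediate chunk: nothing has been persisted yet
        return {"path": path, "chunk": chunk, "saved": False}

    def _write_object(self, path, content):
        # Hypothetical stand-in for an S3 put_object call
        self._fake_store[path] = content
        return {"path": path, "saved": True}


# Example: a notebook split into three chunks, the last one marked chunk == -1
mgr = ChunkedSaveSketch()
mgr.save({"chunk": 1, "content": '{"cells": [], "metadata": {}, '}, "big.ipynb")
mgr.save({"chunk": 2, "content": '"nbformat": 4, '}, "big.ipynb")
print(mgr.save({"chunk": -1, "content": '"nbformat_minor": 5}'}, "big.ipynb"))
```

Buffering in memory (instead of appending to a local file the way the default `LargeFileManager` does) is just one possible design for an object store like S3, where objects cannot be appended to in place.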

danielfrg commented 1 year ago

Is this not needed anymore?

machero commented 1 year ago

To be honest, this change is needed for us, but I am not entirely sure whether it will affect the architecture or have some other bad effect, so I closed it temporarily. If there are any problems, I hope you can point them out and I will try to fix them. Thanks ~

danielfrg commented 1 year ago

If it's needed I want to merge it; I just haven't had time to test it and see if there are any issues. I haven't changed that code in a while, so I am not sure of any consequences.

Since the tests pass, I might just merge it.

machero commented 1 year ago

Thanks, can you release a new version?

danielfrg commented 1 year ago

Yes, I will release one tomorrow

danielfrg commented 1 year ago

I just uploaded a new version