Closed by DenisaCG 9 months ago
Comment for the future: for now the code only allows configuring a single source of drives. In the future, this extension should evolve to allow a list of source configurations that may be based on different technologies (e.g. two providers using S3 and another one using SSH, ...).
That could probably wait until the point where adding a new provider is handled from the frontend.
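As a rough illustration of what a multi-source configuration could look like, each entry could name its provider so mixed S3/SSH setups become possible. The `DriveSource` dataclass and its fields below are hypothetical, not part of the current code:

```python
from dataclasses import dataclass

@dataclass
class DriveSource:
    # Hypothetical shape for one configured drive source.
    name: str      # label shown in the frontend
    provider: str  # e.g. "s3" or "ssh"
    endpoint: str  # provider-specific connection string

# A configuration would then hold a list of sources rather than a single one.
sources = [
    DriveSource(name="team-bucket", provider="s3", endpoint="https://s3.amazonaws.com"),
    DriveSource(name="lab-server", provider="ssh", endpoint="ssh://files.example.org"),
]

# Dispatching on provider keeps per-technology logic isolated.
s3_sources = [s for s in sources if s.provider == "s3"]
```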
Thank you for the thorough review and suggestions @fcollonval! It was very helpful.
And as you mentioned, once we reach the point on the frontend where we can add drives from multiple providers, we will also work on the functionality on the backend, but we will keep this in mind from now on.
How do the added handlers relate to the proposed OpenAPI spec in https://github.com/QuantStack/jupyter-drives/pull/1?
@trungleduc The `ListJupyterDrives` handler takes care of the GET request to list available drives, as well as the POST request to mount a drive, and has the simple endpoint `drives`, while the `ContentsJupyterDrives` handler deals with the GET request for retrieving the contents, the POST request for adding a new file, and the PATCH request for renaming a file, and has the endpoint `drives/drive_name/path`.
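A minimal sketch of how those two endpoints could be routed. The regex patterns and the `route` helper are illustrative assumptions; the real extension registers its handlers with the Jupyter Server web application instead:

```python
import re

# Hypothetical URL patterns for the two handlers described above:
# `drives` lists/mounts drives, `drives/<drive_name>/<path>` manages contents.
LIST_DRIVES_PATTERN = re.compile(r"^/api/drives/?$")
CONTENTS_PATTERN = re.compile(r"^/api/drives/(?P<drive_name>[^/]+)/(?P<path>.+)$")

def route(url: str) -> str:
    """Return which handler a URL would map to (illustrative only)."""
    if LIST_DRIVES_PATTERN.match(url):
        return "ListJupyterDrives"
    if CONTENTS_PATTERN.match(url):
        return "ContentsJupyterDrives"
    return "unmatched"
```

For example, `route("/api/drives")` would dispatch to `ListJupyterDrives`, while `route("/api/drives/mybucket/notebooks/a.ipynb")` would dispatch to `ContentsJupyterDrives` with `drive_name` and `path` captured from the URL.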
The idea would be for the handlers to follow the proposed API spec, but I'm not sure if my implementation is the correct one, specifically for the contents handler.
Ideally, the API specs need to be fixed before implementing the handlers. Since we already had the handlers, you can add an `openapi.yml` file to describe your handlers. This would make implementing the front-end client easier.
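A hedged sketch of what such an `openapi.yml` could contain for the two endpoints described above. The operation summaries, parameter names, and response descriptions here are illustrative assumptions, not the agreed spec from the linked PR:

```yaml
openapi: "3.0.3"
info:
  title: jupyter-drives API (illustrative sketch)
  version: "0.0.1"
paths:
  /api/drives:
    get:
      summary: List available drives
      responses:
        "200": { description: Array of drive descriptions }
    post:
      summary: Mount a drive
      responses:
        "201": { description: Drive mounted }
  /api/drives/{drive_name}/{path}:
    parameters:
      - { name: drive_name, in: path, required: true, schema: { type: string } }
      - { name: path, in: path, required: true, schema: { type: string } }
    get:
      summary: Retrieve contents
      responses:
        "200": { description: File or directory contents }
    post:
      summary: Add a new file
      responses:
        "201": { description: File created }
    patch:
      summary: Rename a file
      responses:
        "200": { description: File renamed }
```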
Thanks @DenisaCG
Thank you for the review and all the helpful suggestions @trungleduc! I will merge it with the current state and keep a list of possible improvements for the future.
- Set up an `S3ContentsManager` for every S3 drive the user wants to open.
- Implement the functionalities of:
  - Renaming a file: how untitled files (`untitled_1`, `untitled_2`) should be handled is not clear yet; `s3contents` does this by somehow moving the file, so it might not be what we want in the long run.
- The tests are currently not working well, as the mock S3 environment doesn't seem to work as expected.
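As a stdlib-only sketch of the behaviour the mocked S3 environment would need to provide for the tests, including the copy-then-delete rename that `s3contents` is described as doing. The class and method names below are hypothetical, not from `s3contents` or the actual test suite:

```python
class FakeS3Bucket:
    """Minimal in-memory stand-in for an S3 bucket, for unit tests only."""

    def __init__(self):
        self._objects = {}  # object key -> bytes

    def put_object(self, key: str, body: bytes) -> None:
        self._objects[key] = body

    def get_object(self, key: str) -> bytes:
        return self._objects[key]

    def rename(self, old_key: str, new_key: str) -> None:
        # Mirrors the rename-by-moving behaviour: copy to the new key,
        # then delete the old one (S3 itself has no atomic rename).
        self._objects[new_key] = self._objects.pop(old_key)

    def list_keys(self, prefix: str = "") -> list:
        return sorted(k for k in self._objects if k.startswith(prefix))
```

Such a fake lets rename and listing behaviour be asserted deterministically, without depending on a flaky mock of the real S3 API.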