Closed slinnarsson closed 8 years ago
Implemented the server part of this issue:
See updated API docs at /docs/loom_server_API.md.
TODO: refactor the pipeline, and implement upload.py to replace the missing upload functionality.
Question: should uploading be removed from the client side (and replaced, e.g., by a Python script)? Or should we keep it but enable it only if the pipeline server component is detected? Since we're basically separating uploading and browsing, wouldn't it make sense if the front-end also reflected that?
A third option would be to split the client into two: an upload and a browser client. It's functionally not that different: the upload client would work as it does now, then redirect to the browser client's page, which would also work as it does now. It would require running two servers, but the load would be the same. Similarly it would require refactoring the client-side into two apps, but that shouldn't be that big a hassle. If we want to separate the repositories we can make the upload-client part of the pipeline repository to keep things neatly organised.
What do you think of this option?
I think, yes, we should remove upload from the current client. The upload functionality is very Linnarsson Group-specific, and only bioinformatics-savvy people will use it anyway. Therefore, I think for now it could be replaced by a simple Python script (or even simpler, by manually uploading files to the right place).
This greatly simplifies the client and gets rid of complicated dependencies. I will try to create a packaging script so that you could simply do `pip install loom` and then type `loom` in the Terminal. This would start the server locally with sensible defaults, launch a browser and show you the client. This is similar to how Jupyter (IPython) works.
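A console entry point along those lines could be sketched as follows. This is an illustrative mock-up, not the actual loom code: the function name `launch` is assumed, and the stdlib `SimpleHTTPRequestHandler` stands in for the real loom request handler.

```python
# Sketch of a Jupyter-style launcher: start a local server in a
# background thread, then open the default browser on it.
# All names here are illustrative, not the actual loom API.
import threading
import webbrowser
from http.server import HTTPServer, SimpleHTTPRequestHandler

def launch(port=8003, open_browser=True):
    # Bind to localhost; the default high port avoids needing sudo.
    server = HTTPServer(("127.0.0.1", port), SimpleHTTPRequestHandler)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    # Read the port back from the socket, in case port 0 was requested.
    url = "http://localhost:%d/" % server.server_address[1]
    if open_browser:
        webbrowser.open(url)  # show the client in the default browser
    return server, url
```

Calling `launch()` with no arguments would then reproduce the "it just works" behaviour described below, while `launch(open_browser=False)` would suit headless deployment.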
OK, fixed. The server now automatically launches a browser window pointing to itself. The defaults are suitable for running locally:

- `--port 8003` (which works without sudo)
- `--dataset-path ~/loom-datasets` (will be created if it doesn't exist)

Thus, you can start the server without arguments and it will just work.
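The defaults could be wired up roughly like this. Only the flag names come from this thread; the parser and helper function are an illustrative sketch, not the actual implementation:

```python
# Sketch of the CLI defaults discussed above (flag names from the
# thread; everything else is assumed).
import argparse
import os

def make_parser():
    parser = argparse.ArgumentParser(prog="loom")
    parser.add_argument("--port", type=int, default=8003,
                        help="port to listen on (8003 needs no sudo)")
    parser.add_argument("--dataset-path",
                        default=os.path.expanduser("~/loom-datasets"),
                        help="dataset root folder (created if missing)")
    parser.add_argument("--no-browser", action="store_true",
                        help="do not open a browser window on startup")
    return parser

def parse_and_prepare(argv=None):
    args = make_parser().parse_args(argv)
    # Create the dataset folder if it doesn't exist yet.
    os.makedirs(args.dataset_path, exist_ok=True)
    return args
```

With these defaults, `loom` with no arguments listens on 8003 and uses `~/loom-datasets`, matching the behaviour described above.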
To run a "real" server, use:

- `--no-browser`
- `--port 80`
- `sudo` (to permit binding to port 80)

Great! Combined with the offline loom browsing this will make it much easier to develop too, since I won't even need an internet connection.
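Putting those flags together, the two modes might be invoked like this (assuming a `loom` entry point on the PATH; illustrative only):

```shell
# Local development: the defaults just work
loom

# Public deployment: bind to port 80 and skip the browser launch
sudo loom --no-browser --port 80
```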
Now that we decided to use VMs instead of containers, there is an opportunity to simplify the server architecture. For example, we can now store datasets simply in a local folder with a sensible structure. As a consequence, it will be relatively easy to make the server run locally, e.g. on somebody's laptop. This would have a lot of benefits, mainly that it can be a nice browser without complicated server deployment. It would be great, for example, if the browser could be installed using `pip install loom-browser` and then simply run from the Terminal. Deployment to a "real" server would be a simple matter of running the same thing on a public server somewhere (e.g. a VM).

To make this happen, the server must be separated from its dependencies on Google Cloud. All such dependencies should be moved to the Pipeline component. Furthermore, we need to relax the naming of `.loom` files, since users might want to name the files freely. In particular:
- `/loom/transcriptomes` should be removed and its functionality replaced by the loom pipeline.
- Instead of the `transcriptome__project__dataset.loom` naming convention, loom files should be allowed to be named whatever is legal in the filesystem (but still with a `.loom` file extension). They should be organized into projects, which again should be named whatever is legal as a folder name. This means that some endpoints need to be modified.
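With projects as plain folders and freely named `.loom` files, the server-side listing could reduce to a simple directory scan. A minimal sketch of that idea (the function name and return shape are assumptions, not the actual endpoints):

```python
# Sketch: projects are plain folders under the dataset root, and any
# file ending in ".loom" inside a project folder is a dataset.
import os

def list_datasets(dataset_path):
    """Map each project folder to the .loom files it contains."""
    datasets = {}
    for project in sorted(os.listdir(dataset_path)):
        project_dir = os.path.join(dataset_path, project)
        if not os.path.isdir(project_dir):
            continue  # ignore stray files at the root
        datasets[project] = sorted(
            f for f in os.listdir(project_dir) if f.endswith(".loom")
        )
    return datasets
```

For example, given `~/loom-datasets/MyProject/any name.loom`, this would report `MyProject` as a project containing that one file, with no naming convention imposed beyond the `.loom` extension.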