Closed soichih closed 6 years ago
I tried uploading by specifying the file IDs, like:
bl dataset upload -p 5a74ccd66ed91402ce400cc6 -d neuro/dwi -s test --dwi 20171117_114153DSIMB3253dAPSTS3S1s029a001.nii.gz --bvecs 20171117_114153DSIMB3253dAPSTS3S1s029a001.bvec --bvals 20171117_114153DSIMB3253dAPSTS3S1s029a001.bval
This somehow ended up uploading text files containing the filename as the file content (instead of the actual content of the files). We need to fix this.
Also, we should update the README to make this the default / preferred method of uploading datasets (by specifying files)
Currently, we require users to prepare a directory containing the files expected for each dataset and to pass the directory name as an argument. For simple datasets (like t1w, t2w) this seems like unnecessary work just to upload a dataset.
We should keep the existing directory-based upload approach, but I believe we should also allow users to upload a dataset by simply specifying a file path for each input file/dir ID, as in the command above.
The CLI can then create a tarball on the fly (using the npm tar or archiver module) and stream it to the task upload API.