Open ceesem opened 8 months ago
Not a bad idea. Are there any other + types? Should I just selectively delete middleauth+ or be more generic?
I wouldn't delete the middleauth+ per se, but rather treat it as an explicit indication to use the CAVE token header. We actually have one case where we have imagery that's behind authentication, and the Neuroglancer cloudpath is precomputed://middleauth+https://. It would actually be perfect if that prefix indicated how to add the correct header information.
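In other words, something roughly like this, purely as a sketch of the idea (the token file location and the Bearer header follow the usual CAVE conventions and should be checked against caveclient; the endpoint URL is a placeholder):

```python
import json
import os

import requests

# A "middleauth+" prefix would mean: fetch over plain https, but attach the
# cached CAVE token to the request headers. The token location below is the
# usual caveclient/cloud-volume secrets path (assumption: deployments may differ).
token_path = os.path.expanduser("~/.cloudvolume/secrets/cave-secret.json")
with open(token_path) as f:
    token = json.load(f)["token"]

url = "https://example.com/segmentation/info"  # placeholder endpoint
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
print(resp.json())
```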
Can you share that image case with me? I would like to see if I can get it to work.
This remains an issue, since Neuroglancer requires this auth formatting for its data sources and cloud-volume does not know how to interpret it.
We also now have a Neuroglancer skeleton source deployed on the datastack minnie65_phase3_v1 that should be compatible with cloudvolume; its endpoint is precomputed://middleauth+https://minnie.microns-daf.com/skeletoncache/api/v1/minnie65_phase3_v1/precomputed/skeleton/, but again cloudvolume does not know how to read the format required by Neuroglancer.
First step may be complete. I think I have middleauth+https working in CloudFiles.
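If that's right, usage from the caller's side would look roughly like this (a sketch, assuming CloudFiles accepts the prefix directly on https paths and that the skeleton source above exposes an info file):

```python
from cloudfiles import CloudFiles

# Sketch: the "middleauth+" prefix tells CloudFiles to attach the cached CAVE
# token when fetching over https. The path is the skeleton source mentioned above.
cf = CloudFiles(
    "middleauth+https://minnie.microns-daf.com/skeletoncache/api/v1/"
    "minnie65_phase3_v1/precomputed/skeleton/"
)
info = cf.get_json("info")  # fetched with the auth header applied
print(info)
```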
This is released in 10.2.0!
In cloudvolume 11.0.2, the middleauth+ paths are still not working; they fail with the same error described above.
Two example cloudpaths that should work to create cloudvolume objects using CAVE tokens are:
precomputed://middleauth+https://minnie.microns-daf.com/skeletoncache/api/v1/minnie65_phase3_v1/precomputed
graphene://middleauth+https://minnie.microns-daf.com/segmentation/table/minnie3_v1
Note that the first is a skeleton source serving precomputed skeletons, but it requires a CAVE token in the header.
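For illustration, the construction attempts look roughly like this:

```python
from cloudvolume import CloudVolume

# Sketch of constructing cloudvolume objects from the two CAVE-token-protected
# paths above (a precomputed skeleton source and a graphene segmentation).
skel_cv = CloudVolume(
    "precomputed://middleauth+https://minnie.microns-daf.com/"
    "skeletoncache/api/v1/minnie65_phase3_v1/precomputed"
)
seg_cv = CloudVolume(
    "graphene://middleauth+https://minnie.microns-daf.com/"
    "segmentation/table/minnie3_v1"
)
```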
However, both throw an UnsupportedProtocolError:
Cloud Path must conform to FORMAT://PROTOCOL://BUCKET/PATH
Examples:
precomputed://gs://test_bucket/em
gs://test_bucket/em
graphene://https://example.com/image/em
Supported Formats: None (precomputed), graphene, precomputed, boss, n5, zarr, zarr2, zarr3
Supported Protocols: gs, file, s3, http, https, mem, matrix, tigerdata
I think that the ideal solution would involve treating the path as FORMAT://AUTH+PROTOCOL://..., where AUTH determines how to send authentication information regardless of format (e.g. middleauth would say to add your CAVE token appropriately to request headers but not change anything else). The AUTH values would then be optional, but would override format defaults (e.g. graphene:// would default to middleauth and precomputed:// would default to some null passthrough). I guess AUTH would also only make sense with https, since the other protocols have their own specific needs.
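A rough sketch of the parsing I have in mind (hypothetical helper and names, not a proposal for cloud-volume's actual internals):

```python
import re

# Hypothetical parser for FORMAT://AUTH+PROTOCOL://BUCKET/PATH paths.
# AUTH is optional; when absent, fall back to the format's default
# (e.g. graphene -> middleauth, precomputed -> no auth).
FORMAT_DEFAULT_AUTH = {"graphene": "middleauth", "precomputed": None}

def parse_cloudpath(path):
    m = re.match(
        r"^(?:(?P<format>\w+)://)?"   # optional format, e.g. precomputed, graphene
        r"(?:(?P<auth>\w+)\+)?"       # optional auth scheme, e.g. middleauth
        r"(?P<protocol>\w+)://"       # protocol, e.g. https, gs, s3
        r"(?P<rest>.*)$",
        path,
    )
    if m is None:
        raise ValueError(f"Unparseable cloudpath: {path}")
    fmt = m.group("format") or "precomputed"
    auth = m.group("auth") or FORMAT_DEFAULT_AUTH.get(fmt)
    return fmt, auth, m.group("protocol"), m.group("rest")

print(parse_cloudpath(
    "graphene://middleauth+https://minnie.microns-daf.com/segmentation/table/minnie3_v1"
))
# -> ('graphene', 'middleauth', 'https', 'minnie.microns-daf.com/segmentation/table/minnie3_v1')
```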
Hi Casey, I just ran those two paths on my machine and didn't see an error. Is your cloud-files version up-to-date? That could be the issue.
Ah, you're right, thank you! My cloudvolume was up to date, but the cloud-files install was at 4.26.0. Upgrading cloud-files to 4.28.0 fixed that.
Unless there's a good reason not to, I feel like you should bump the minimum version of cloud-files from 4.24.0 up to whenever this feature was added, since it is a cloud-volume feature as much as a cloud-files one and the fix was not obvious.
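For example, something like this in cloud-volume's dependency pins (the exact minimum should be whichever cloud-files release first shipped middleauth+ support; 4.28.0 is just the version that worked here):

```python
# Hypothetical setup.py fragment for cloud-volume.
install_requires = [
    "cloud-files>=4.28.0",
]
```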
As part of the development of graphene integration in Neuroglancer, the auth protocol is provided to Neuroglancer as an additional scheme: graphene://middleauth+https:// instead of just graphene://https://. Currently, however, that path gives an error. For convenience in coming to cloudvolume from Neuroglancer, it would be nice if this multi-scheme URL were acceptable and treated as equivalent to graphene://https://.
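A minimal sketch of the requested behavior (hypothetical helper, not existing cloud-volume code): accept the extra auth scheme and treat the path as its plain https equivalent, ideally while noting that a CAVE token header should be attached.

```python
# Hypothetical normalization: a "middleauth+https://" cloudpath is handled as
# the corresponding "https://" path, with a flag indicating that a CAVE token
# header should be added to requests.
def normalize_middleauth(cloudpath):
    needs_cave_token = "middleauth+" in cloudpath
    return cloudpath.replace("middleauth+", "", 1), needs_cave_token

path, needs_token = normalize_middleauth(
    "graphene://middleauth+https://minnie.microns-daf.com/segmentation/table/minnie3_v1"
)
print(path)         # graphene://https://minnie.microns-daf.com/segmentation/table/minnie3_v1
print(needs_token)  # True
```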