seung-lab / cloud-volume

Read and write Neuroglancer datasets programmatically.

"graphene://middleauth+https://" protocol in a cloudpath does not work #610

Open ceesem opened 8 months ago

ceesem commented 8 months ago

As part of the development of graphene integration in Neuroglancer, the auth protocol is provided to Neuroglancer as an additional scheme: graphene://middleauth+https:// instead of just graphene://https://. Currently, however, that gives the error:

Cloud Path must conform to FORMAT://PROTOCOL://BUCKET/PATH

For convenience when coming to cloud-volume from Neuroglancer, it would be nice if this multi-scheme URL were accepted and treated as equivalent to graphene://https://.
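To illustrate, the two calls below (placeholder host) would ideally construct equivalent volumes:

    from cloudvolume import CloudVolume

    # Desired behavior (a sketch; example.com is a placeholder host):
    # the middleauth+ prefix would be accepted and treated the same as plain https.
    vol_a = CloudVolume("graphene://middleauth+https://example.com/segmentation/table/seg")
    vol_b = CloudVolume("graphene://https://example.com/segmentation/table/seg")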

william-silversmith commented 8 months ago

Not a bad idea. Are there any other + types? Should I just selectively delete middleauth+ or be more generic?

chrisj commented 8 months ago

ngauth possibly

https://github.com/google/neuroglancer/blob/145ce39e54bfaaa6fbf106d46c5f5bf613c04155/src/util/special_protocol_request.ts#L76

ceesem commented 8 months ago

I wouldn't delete the middleauth+ per se, but rather treat it as an explicit indication to use the CAVE token header. We actually have one case where we have imagery that's behind authentication, and the Neuroglancer cloudpath is precomputed://middleauth+https://. It would be perfect if that indicated how to add the correct header information.
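As a sketch of that idea (the helper and env var names here are invented, not cloud-volume API), middleauth+ would strip down to a plain https:// request and attach the CAVE token as a bearer header:

    import os
    import requests

    # Hypothetical helper illustrating the proposed semantics of "middleauth+":
    # fetch over plain https://, but add a CAVE bearer token to the headers.
    def middleauth_get(url, token=None):
        token = token or os.environ.get("CAVE_TOKEN")  # assumed env var name
        headers = {"Authorization": f"Bearer {token}"} if token else {}
        return requests.get(url, headers=headers)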

william-silversmith commented 6 months ago

Can you share that image case with me? I would like to see if I can get it to work.

ceesem commented 4 months ago

This remains an issue, since Neuroglancer requires this auth formatting for its data sources and cloud-volume does not know how to interpret it.

We also now have a neuroglancer skeleton source deployed on datastack minnie65_phase3_v1 that should be compatible with cloudvolume whose endpoint is precomputed://middleauth+https://minnie.microns-daf.com/skeletoncache/api/v1/minnie65_phase3_v1/precomputed/skeleton/, but again cloudvolume does not know how to read the format required by Neuroglancer.
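Once this works, I'd expect reading from it to look something like this (the segment ID is a made-up placeholder):

    from cloudvolume import CloudVolume

    # Hypothetical usage once middleauth+ parsing is supported;
    # the path is the skeleton source above, the segment ID is a placeholder.
    vol = CloudVolume(
        "precomputed://middleauth+https://minnie.microns-daf.com/"
        "skeletoncache/api/v1/minnie65_phase3_v1/precomputed/skeleton/"
    )
    skel = vol.skeleton.get(12345)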

william-silversmith commented 3 months ago

First step may be complete. I think I have middleauth+https working in CloudFiles.

https://github.com/seung-lab/cloud-files/pull/106
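For example, something along these lines should now work (host and layer are placeholders):

    from cloudfiles import CloudFiles

    # Sketch against the new CloudFiles support; the path is a placeholder.
    cf = CloudFiles("middleauth+https://example.com/layer/")
    info = cf.get("info")  # request goes out with the CAVE token attached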

william-silversmith commented 3 months ago

This is released in 10.2.0!

ceesem commented 2 days ago

In cloudvolume 11.0.2, middleauth+ paths still aren't working; they fail with the same error described above.

Two example cloudpaths that should work to create cloudvolume objects using CAVE tokens are:

Note that the first of these is a skeleton source serving precomputed skeletons, but it requires a CAVE token in the header.

However, both throw an UnsupportedProtocolError:

Cloud Path must conform to FORMAT://PROTOCOL://BUCKET/PATH
Examples:
  precomputed://gs://test_bucket/em
  gs://test_bucket/em
  graphene://https://example.com/image/em

Supported Formats: None (precomputed), graphene, precomputed, boss, n5, zarr, zarr2, zarr3
Supported Protocols: gs, file, s3, http, https, mem, matrix, tigerdata

I think the ideal solution would treat the path as FORMAT://AUTH+PROTOCOL://..., where AUTH determines how to send authentication information regardless of format (e.g. middleauth would mean adding your CAVE token to the request headers but changing nothing else). AUTH would be optional, but would override format defaults (e.g. graphene:// would default to middleauth and precomputed:// would default to a null passthrough). AUTH also probably only makes sense with https, since the other protocols have their own specific needs.
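A rough sketch of that parsing rule (names, regex, and defaults are all invented here, not cloud-volume's actual code):

    import re

    # Hypothetical parser for FORMAT://AUTH+PROTOCOL://BUCKET/PATH,
    # where AUTH is optional and overrides the format's default.
    CLOUDPATH_RE = re.compile(
        r"^(?:(?P<format>\w+)://)?"   # e.g. graphene, precomputed (optional)
        r"(?:(?P<auth>\w+)\+)?"       # e.g. middleauth (optional)
        r"(?P<protocol>\w+)://"       # e.g. https, gs, s3
        r"(?P<path>.*)$"
    )
    AUTH_DEFAULTS = {"graphene": "middleauth", "precomputed": None}  # assumed defaults

    def parse_cloudpath(cloudpath):
        m = CLOUDPATH_RE.match(cloudpath)
        if m is None:
            raise ValueError("Cloud Path must conform to FORMAT://AUTH+PROTOCOL://BUCKET/PATH")
        fmt = m.group("format") or "precomputed"
        auth = m.group("auth") or AUTH_DEFAULTS.get(fmt)
        return fmt, auth, m.group("protocol"), m.group("path")

    # e.g. parse_cloudpath("graphene://middleauth+https://example.com/table/seg")
    #   -> ("graphene", "middleauth", "https", "example.com/table/seg")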

william-silversmith commented 1 day ago

Hi Casey, I just ran those two paths on my machine and didn't see an error. Is your cloud-files version up-to-date? That could be the issue.

ceesem commented 1 day ago

Ah, you're right, thank you! My cloudvolume was up to date, but the cloud-files install was at 4.26.0. Upgrading cloud-files to 4.28.0 fixed that.

Unless there's a good reason not to, I feel like you should bump the minimum version of cloud-files from 4.24.0 up to whenever this feature was added, since it is a cloud-volume feature as much as a cloud-files one and the fix was not obvious.
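For example (assuming 4.28.0 is the first cloud-files release with this feature):

    # Hypothetical excerpt from cloud-volume's requirements:
    install_requires = [
        "cloud-files>=4.28.0",  # was >=4.24.0; middleauth+ needs the newer release
    ]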