Closed ak0ska closed 4 years ago
> If I understand correctly, in order to use different resolution levels with neuroglancer, it is the user's responsibility to downscale the source data outside of Dvid
Yes, particularly for large immutable data such as grayscale volumes. We typically pre-compute the scale pyramid using a compute cluster and load it into a dvid `uint8blk` instance using the POST `.../raw` endpoint. Our frameworks for doing this are sadly undocumented at the moment, but FWIW the cluster code is here and the relevant function to wrap the `/raw` endpoint is here.
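For anyone who wants to script that ingestion, here is a minimal Python sketch, assuming the POST `.../raw` URL pattern quoted later in this thread; the server name, UUID, and helper names are placeholders, not our actual tooling:

```python
def raw_post_url(server, uuid, instance, offset, size):
    """Build the POST URL following dvid's
    <api URL>/node/<UUID>/<data name>/raw/<dims>/<size>/<offset> pattern."""
    size_s = "_".join(str(s) for s in size)
    off_s = "_".join(str(o) for o in offset)
    return f"{server}/api/node/{uuid}/{instance}/raw/0_1_2/{size_s}/{off_s}"

def post_scale_block(server, uuid, instance, offset, size, voxels):
    """POST one pre-downscaled uint8 block as raw octets.

    `voxels` is a bytes payload in x-fastest order; needs the
    `requests` package and a reachable dvid server.
    """
    import requests
    url = raw_post_url(server, uuid, instance, offset, size)
    r = requests.post(url, data=voxels,
                      headers={"Content-Type": "application/octet-stream"})
    r.raise_for_status()
```

You would call `post_scale_block` once per block per scale instance after computing the pyramid on the cluster.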
> then ingest the pre-processed data into data type names differing only in the suffix
If you store your grayscale data as a `uint8blk` in dvid, neuroglancer can read it. But yes, you need to store the different scales as separate `uint8blk` instances, named as follows (note that there is no `mygrayscale_0`): `mygrayscale`, `mygrayscale_1`, `mygrayscale_2`, etc.
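To make the naming convention and the kind of external downscaling concrete, a hedged Python sketch (the helper names are made up, and the box filter is just one possible downsampling choice):

```python
import numpy as np

def scale_instance_name(base, level):
    """Instance name for a given scale level: the base name itself for
    level 0 (there is no mygrayscale_0), base_N for higher levels."""
    return base if level == 0 else f"{base}_{level}"

def downsample_2x(vol):
    """Halve each axis by averaging 2x2x2 neighborhoods (a simple box
    filter that trims odd edges) - one possible external downscaling step."""
    z, y, x = (d - d % 2 for d in vol.shape)
    v = vol[:z, :y, :x].astype(np.float32)
    v = v.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))
    return v.astype(np.uint8)
```

Repeatedly applying `downsample_2x` and posting each result into `scale_instance_name(base, level)` would give the layout described above.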
> Which viewer was intended to be used for the imagetile format
We no longer use imagetiles in our own work. We use `uint8blk`, and we use neuroglancer and NeuTu/neu3 to view them. I believe there is support in CATMAID for reading imagetiles directly from dvid, but I have no experience with it, so YMMV.
> Is it planned to add support for downscaling for the imageblk types as well?
We don't plan to implement it in dvid itself, but we do plan to better document our solution for doing it externally and loading it into dvid. For now, I may be able to provide some assistance with our tools if you're interested. If so, can you please answer the following:
Thank you for your explanation, and for offering assistance with your own workflow! For the latter I think I can set it up myself, now that I know that you also do it externally.
I did some experimenting by adding a Dvid server as a data source for Neuroglancer, but using the `precomputed` protocol. There was a proxy in front of Dvid which rewrote the Neuroglancer-generated URLs to match Dvid's `<api URL>/node/<UUID>/<data name>/raw/<dims>/<size>/<offset>` format, and that worked too. Why does the `dvid` protocol append the jpeg format to that endpoint instead of working with the raw octets? Isn't there a performance penalty for generating these JPEGs compared to serving the raw octets as stored by the server?
That endpoint can return raw octets for 3D requests. You can always do `http://myserver/api/help/uint8blk` to see the full API doc, where `uint8blk` can be replaced by any data type. Click here for the documentation for that `raw` endpoint. For 2D requests, it's png by default and jpg as an option. For 3D requests it's an octet-stream. If you really need octet-stream for 2D, that could easily be added.
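For example, a sketch of fetching a 3D subvolume as raw octets and reshaping it, assuming numpy and `requests` are available; the server name and helper names are hypothetical:

```python
import numpy as np

def raw_get_url(server, uuid, instance, offset, size):
    """GET URL for a 3D subvolume from the raw endpoint; with no format
    suffix, 3D responses come back as an octet-stream."""
    size_s = "_".join(str(s) for s in size)
    off_s = "_".join(str(o) for o in offset)
    return f"{server}/api/node/{uuid}/{instance}/raw/0_1_2/{size_s}/{off_s}"

def fetch_subvolume(server, uuid, instance, offset, size):
    """Fetch a uint8 subvolume and reshape it. The voxels stream in
    x-fastest order, so the numpy shape is the reversed size (z, y, x).
    Needs `requests` and a reachable dvid server."""
    import requests
    r = requests.get(raw_get_url(server, uuid, instance, offset, size))
    r.raise_for_status()
    return np.frombuffer(r.content, dtype=np.uint8).reshape(size[::-1])
```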
Thank you for your response. Perhaps I was not phrasing my question clearly; I was interested in why the dvid backend for neuroglancer behaves as it does. I understand the default response from the raw endpoint is an octet stream, but in my case, neuroglancer configured with the dvid backend requests JPEGs for the 3D data through the raw endpoint. The URLs generated by neuroglancer are `http://myserver/api/node/{REPOID}/{DATATYPE}/raw/0_1_2/64_64_64/{OFFSET-X}_{OFFSET-Y}_{OFFSET-Z}/jpeg` and the response is `Content-Type: image/jpeg`, not octet-stream.
Looking at the code, this comes down to checking the `compressionName` in the datatype info and setting the `VolumeChunkEncoding` accordingly. My compression is LZ4, so the encoding is set to `RAW`, which in turn generates the above URLs with the jpeg format. My question is why the jpeg format is used in these URLs instead of fetching the raw octets, and what effect this behaviour may have on performance.
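For clarity, here is a rough Python paraphrase of that behaviour as I understand it; it loosely mirrors the logic, not neuroglancer's actual TypeScript code, and all names are made up:

```python
def chunk_encoding(compression_name):
    # Mirrors the check described above: jpeg-compressed instances get
    # the JPEG encoding, everything else (e.g. my LZ4 case) falls back
    # to RAW.
    return "JPEG" if "jpeg" in compression_name.lower() else "RAW"

def chunk_url(server, uuid, instance, offset, size, encoding):
    """URL requested for one chunk. Note that even the RAW encoding
    path appends the jpeg format suffix, which is exactly the
    behaviour in question."""
    size_s = "_".join(str(s) for s in size)
    off_s = "_".join(str(o) for o in offset)
    url = f"{server}/api/node/{uuid}/{instance}/raw/0_1_2/{size_s}/{off_s}"
    return url + "/jpeg" if encoding == "RAW" else url
```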
Ah, yes, I see what you mean now. I believe this is a path added by @stephenplaza to both neuroglancer and dvid that uses a form of 2D JPEG compression for 3D grayscale blocks. You can see the code on the dvid side here. The API documentation for the `raw` endpoint should be modified to reflect this possibility (@stephenplaza).
For some of our repositories, we have grayscale stored using JPEG-compressed blocks (via a 2d layout). I believe this is documented here on neuroglancer's github pages.
Hello,
I noticed that the Dvid backend for neuroglancer cannot use the Imagetiles to display data. However, Dvid only seems to be able to downscale data into the Imagetile format.
If I understand correctly, in order to use different resolution levels with neuroglancer, it is the user's responsibility to downscale the source data outside of Dvid, then ingest the pre-processed data into data type names differing only in the suffix, e.g. `mygrayscale_0`, `mygrayscale_1`, etc. If I misunderstood something, please correct me!

Which viewer was intended to be used for the imagetile format, to take advantage of Dvid's own downscaling? Is it planned to add support for downscaling for the imageblk types as well?
Thank you for your help!
Edit: Just found the dvid-tileviewer project, so please ignore the first question.