manoaman opened this issue 3 years ago
Please pardon my ignorance.
In the shader, you had

vec3(scale(toNormalized(getDataValue())) + brightness)

I thought that, normally, initialising a vec3 requires three components: x, y, z. (?)
getDataValue() only returns the value of the first channel. Instead you can use getDataValue(0), getDataValue(1), and getDataValue(2).
Alternatively you can use an invlerp UI control rather than separate min/max sliders:
#uicontrol invlerp red(channel=0)
#uicontrol invlerp green(channel=1)
#uicontrol invlerp blue(channel=2)
void main() {
  emitRGB(vec3(red(), green(), blue()));
}
Thank you @xgui3783 and @jbms for the insights. I tried both approaches in the shader, but they don't show the image as expected. I also made sure to include the RGB channels in the numpy array on the CloudVolume side; in fact, I hadn't included these values in the first place, so I thought that could also be an issue. Could it be something I'm forgetting in the info file? The status view seems to show tiles being sent over the network, so I'm not sure where I'm failing.
Config from Screenshot1
#uicontrol invlerp red(channel=0)
#uicontrol invlerp green(channel=1)
#uicontrol invlerp blue(channel=2)
void main() {
  emitRGB(vec3(red(), green(), blue()));
}
Config from Screenshot2
#uicontrol vec3 color color(default="white")
#uicontrol float min slider(default=0, min=0, max=1, step=0.01)
#uicontrol float max slider(default=1, min=0, max=1, step=0.01)
#uicontrol float brightness slider(default=0, min=-1, max=1, step=0.1)
#uicontrol float contrast slider(default=0, min=-3, max=3, step=0.1)
float scale(float x) {
  return (x - min) / (max - min);
}
void main() {
  emitRGB(
    color * vec3(
      scale(toNormalized(getDataValue(0))) + brightness,
      scale(toNormalized(getDataValue(1))) + brightness,
      scale(toNormalized(getDataValue(2))) + brightness
    ) * exp(contrast)
  );
}
Screenshot1
Screenshot2
{
  "data_type": "uint8",
  "num_channels": 3,
  "scales": [
    {
      "chunk_sizes": [[613, 2753, 1]],
      "encoding": "raw",
      "key": "1613_1613_10000",
      "resolution": [1613, 1613, 10000],
      "size": [14712, 11012, 35],
      "voxel_offset": [0, 0, 0]
    }
  ],
  "type": "image"
}
The CDF plots shown in the invlerp controls seem to indicate that all data values are equal to 0. You can also confirm that by opening the selection details panel (control+right-click) and hovering over positions with the mouse.
I probably need to get familiar with invlerp controls, because changing the range or switching the arrows doesn't seem to show the image, only filled RGB colors. In the meantime, I see both zero and non-zero values in the panel when hovering over the x-y view. Could it be that the values I'm providing to invlerp need to be adjusted?
Hmm, that's odd. If you can share a sample volume that reproduces the issue I can take a look.
One doubt I have is whether I processed the numpy array properly in CloudVolume. I believe the reshape should have assigned the RGB channel, as opposed to my initial attempt, which excluded it.
Sure, I can share a sample volume. @jbms, where could I email you a link to my shared Google Drive?
(width, height) = image.size
#array = np.array(list(image.getdata()), dtype=np.uint16, order='F')
array = np.array(list(image.getdata()), dtype=np.uint8, order='F')
array = array.reshape((3, 1, height, width)).T
# vol[:, :, z] = array
vol[:, :, z, :] = array
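As an aside, a minimal shape trace of the snippet above (a sketch; the dimensions and pixel values are made up, standing in for PIL's getdata(), which yields height*width (r, g, b) tuples in row-major order):

```python
import numpy as np

# Stand-in for list(image.getdata()) on a 3x2 RGB image: height*width
# (r, g, b) tuples flattened row-major, here just the values 0..17 for tracing.
height, width = 2, 3
flat = np.arange(height * width * 3, dtype=np.uint8).reshape(height * width, 3)

reshaped = flat.reshape(3, 1, height, width).T  # shape (width, height, 1, 3)

# This splits the interleaved r,g,b,r,g,b,... stream into thirds rather than
# separating per-pixel channels: pixel (0,0)'s green value in the source is
# flat[0, 1] == 1, but reshaped[0, 0, 0, 1] picks up a value from elsewhere.
print(reshaped[0, 0, 0, 1])  # 6, not 1
```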
My email is jbms@google.com
It turns out the problem is that the chunk size you are using, [613, 2753, 1], is too large: it results in Neuroglancer attempting to create a 3-d texture of size [613, 2753, 3].
The developer console shows:
texture_access.ts:317 WebGL: INVALID_VALUE: texImage3D: width, height or depth out of range
On my machine, webglreport.com (under the WebGL2 tab) shows a maximum 3-d texture size of 2048, meaning all dimensions must be <= 2048.
Currently Neuroglancer always stores each chunk as a single texture. In principle Neuroglancer could rechunk client-side to avoid exceeding the maximum texture size, but that is not implemented.
If you rewrite your volume to use a smaller chunk size it should work. I would recommend e.g. [512, 512, 1].
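A quick way to sanity-check a chunk size against this limit (a sketch; the exact texture packing is internal to Neuroglancer, so this conservatively requires each chunk dimension plus the channel count to fit within the reported MAX_3D_TEXTURE_SIZE):

```python
def chunk_fits(chunk_size, num_channels, max_3d_texture_size=2048):
    """Conservatively check that a raw-encoded chunk can fit in one 3-d texture."""
    dims = list(chunk_size) + [num_channels]
    return all(d <= max_3d_texture_size for d in dims)

# The failing config: a [613, 2753, 1] chunk with 3 channels becomes a
# [613, 2753, 3] texture, and 2753 > 2048.
print(chunk_fits([613, 2753, 1], 3))  # False
print(chunk_fits([512, 512, 1], 3))   # True
```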
I see, the chunk size needs to be smaller than what I had. That is great to know, and I should have looked at the dev console. Thank you for reminding me.
I rewrote my volume with a [512, 512, 1] chunk size and I think I've come one step closer. However, the images seem to be flipped about the center both vertically and horizontally. Also, the red and green values seem to stand out, even though this image is mostly blue. Could this be configurable on the shader's end? I also uploaded the raw images (tiff) so you can view the originals, @jbms. Please let me know if you have any thoughts. Thank you!
I'm not seeing your updated export, but it looks like you may have assumed the wrong data layout when writing your images: the channel dimension (rgb) appears to be incorrectly interleaved with your spatial dimensions.
@jbms Does Neuroglancer assume the channels to be ordered RGB or BGR?
The emitRGB call expects the values to be in red, green, blue order. But Neuroglancer doesn't make any assumptions about how the data itself maps to red, green, and blue. You control that with the shader.
For example, with the sample shader I provided:
#uicontrol invlerp red(channel=0)
#uicontrol invlerp green(channel=1)
#uicontrol invlerp blue(channel=2)
void main() {
  emitRGB(vec3(red(), green(), blue()));
}
You can swap the channel= values to change the ordering. Note that you can also change the channel from the UI control itself, by clicking on the "0", "1", "2" shown next to the CDFs.
It does appear the channel dimensions are not shaped properly. With grayscale images, conversion worked fine with X,Y,Z order. With RGB tuples, I probably need to reshape properly. @jbms Does Neuroglancer expect precomputed data to be X,Y,Z,RGB or X,Y,Z,R,G,B or X,Y,RGB,Z or X,Y,R,G,B,Z?
When using the "raw" encoding for the neuroglancer precomputed format, each individual chunk should be stored in lexicographical order c z y x.
So for example, if you have num_channels=3 and xyz chunk_size=[2, 2, 2], the successive values would correspond to:
Offset 0: c=0 z=0 y=0 x=0
Offset 1: c=0 z=0 y=0 x=1
Offset 2: c=0 z=0 y=1 x=0
Offset 3: c=0 z=0 y=1 x=1
Offset 4: c=0 z=1 y=0 x=0
Offset 5: c=0 z=1 y=0 x=1
Offset 6: c=0 z=1 y=1 x=0
Offset 7: c=0 z=1 y=1 x=1
Offset 8: c=1 z=0 y=0 x=0
Offset 9: c=1 z=0 y=0 x=1
Offset 10: c=1 z=0 y=1 x=0
Offset 11: c=1 z=0 y=1 x=1
Offset 12: c=1 z=1 y=0 x=0
Offset 13: c=1 z=1 y=0 x=1
Offset 14: c=1 z=1 y=1 x=0
Offset 15: c=1 z=1 y=1 x=1
Offset 16: c=2 z=0 y=0 x=0
Offset 17: c=2 z=0 y=0 x=1
Offset 18: c=2 z=0 y=1 x=0
Offset 19: c=2 z=0 y=1 x=1
Offset 20: c=2 z=1 y=0 x=0
Offset 21: c=2 z=1 y=0 x=1
Offset 22: c=2 z=1 y=1 x=0
Offset 23: c=2 z=1 y=1 x=1
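The listing above can be checked mechanically: an array indexed [c, z, y, x] and flattened in C (row-major) order reproduces exactly this lexicographical sequence. A small numpy sketch:

```python
import numpy as np

# chunk[c, z, y, x] == its offset in the raw encoding, for
# num_channels=3 and xyz chunk_size=[2, 2, 2].
C, Z, Y, X = 3, 2, 2, 2
chunk = np.arange(C * Z * Y * X).reshape(C, Z, Y, X)

# Spot-check against the table: Offset 11 is c=1 z=0 y=1 x=1,
# and Offset 20 is c=2 z=1 y=0 x=0.
print(chunk[1, 0, 1, 1])  # 11
print(chunk[2, 1, 0, 0])  # 20
```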
However, if you are using cloudvolume to write, it should be taking care of that for you.
Yes, I am using CloudVolume. Simply put, I am transposing (c, z, y, x) to (x, y, z, c) in the numpy array, similar to what has been working for grayscale images (for grayscale, it is (z, y, x) to (x, y, z)). But for whatever reason, I'm only seeing a portion of the entire image, as you can see in the attached images. In this example, changing the blue channel only gives me 1/3 (?) of the entire image. I'm not quite sure where I'm failing at this point.
In the end, I was able to get a working example for structuring the numpy array in CloudVolume for RGB channel images. Thanks for the help here as well.
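For readers hitting the same issue, a hedged sketch of one layout that matches CloudVolume's (x, y, z, channel) indexing (the names and data are illustrative; image_pixels stands in for PIL's row-major getdata() tuples):

```python
import numpy as np

# Toy stand-in for list(image.getdata()): row-major (r, g, b) tuples where
# the red value encodes the pixel's flat index, for easy verification.
height, width = 2, 3
image_pixels = [(y * width + x, 0, 255) for y in range(height) for x in range(width)]

arr = np.array(image_pixels, dtype=np.uint8)       # (height*width, 3)
arr = arr.reshape(height, width, 3)                # (y, x, c), PIL's row-major order
arr = arr.transpose(1, 0, 2)[:, :, np.newaxis, :]  # (x, y, z=1, c)

# Now arr[x, y, 0, 0] recovers pixel (x, y)'s red value, with channels intact.
print(arr[2, 1, 0, 0])  # 5  (pixel x=2, y=1 -> flat index 1*3 + 2)
```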
Hi jbms,
Do you have a working example of a 3-channel image in Neuroglancer? I am trying to troubleshoot whether the chunking in CloudVolume is failing or the shader control needs to be configured differently. Perhaps a three-channel image needs to be converted to a grayscale image and then chunked?
Thank you so much for your help. -m