Closed: thoth291 closed this issue 6 years ago.
Can you explain a bit more exactly what you've done and what you observe?
Have you tried running the example file python/examples/example.py ?
If you set neuroglancer.server.debug = True, you will see debug logs from the server, which may provide useful information.
Thank you for your reply.
I carefully followed the instructions but couldn't make that example work.
When I run it like this (from the `python` directory of this repo): `python examples/example.py`, I get something like this:
```
Traceback (most recent call last):
  File "examples/example.py", line 31, in <module>
    viewer = neuroglancer.Viewer()
  File "/CONDA/neuro/lib/python3.6/site-packages/neuroglancer-1.0.7-py3.6-linux-x86_64.egg/neuroglancer/viewer.py", line 43, in __init__
    server.register_viewer(self)
  File "/CONDA/neuro/lib/python3.6/site-packages/neuroglancer-1.0.7-py3.6-linux-x86_64.egg/neuroglancer/server.py", line 269, in register_viewer
    start()
  File "/CONDA/neuro/lib/python3.6/site-packages/neuroglancer-1.0.7-py3.6-linux-x86_64.egg/neuroglancer/server.py", line 262, in start
    global_server = Server(ioloop=ioloop, **global_server_args)
  File "/CONDA/neuro/lib/python3.6/site-packages/neuroglancer-1.0.7-py3.6-linux-x86_64.egg/neuroglancer/server.py", line 62, in __init__
    SockJSHandler, SOCKET_PATH_REGEX_WITHOUT_GROUP, io_loop=ioloop)
  File "/CONDA/neuro/lib/python3.6/site-packages/sockjs/tornado/router.py", line 111, in __init__
    self.io_loop)
TypeError: __init__() takes 3 positional arguments but 4 were given
```
The way I'm testing this right now is through nyroglancer.
It works perfectly fine for me (except for the fact that I don't see the mesh being generated when I click on the object).
My `raw.bin` data is just a grayscale 3D image, and I'm using it to overlay segments from `segs.bin`. `segs.bin` has one integer value per voxel (3D pixel), and those values visually look like the cells from one of your demos.
What I get is the ability to browse through the data, but when I click on an object I don't get a mesh. So I was wondering if I need to do preprocessing of some sort, or convert my data into another format, in order for neuroglancer to pick it up.
P.S. I realize that I'm not using neuroglancer directly here - but that's only because I couldn't make even the examples work.
Thanks for your help!
Regarding nyroglancer, I think it unfortunately does not support automatic meshing.
The error you list at the top (`__init__() takes 3 positional arguments but 4 were given`) is a recent breakage in sockjs-tornado due to the release of tornado 5.0 just a few days ago: https://github.com/mrjoes/sockjs-tornado/issues/113
To fix that, you could either downgrade tornado to 4.5.3 via `pip install tornado==4.5.3`, or install this version of sockjs-tornado from GitHub: https://github.com/mathben/sockjs-tornado/tree/fix_tornado_5.0_%23113
You can do that with this command: `pip install 'git+git://github.com/mathben/sockjs-tornado@212ba27' --upgrade`
Thank you, @jbms. I installed the patch from git using the command you provided, and this time the command ran with no errors. But now it dies (or at least looks like it dies) immediately after I start it.
```
(neuro) [thoth@host neuroglancer]$ python python/examples/example.py -a 0.0.0.0
http://host:38474/v/fbd8fb5da6fa030ee4f82d24164425f487ce6320/
(neuro) [thoth@host neuroglancer]$ echo $?
0
```
I enabled debugging as you suggested earlier, but that didn't change the behavior. Thanks in advance!
Run it with `python -i` so that python stays running.
Thank you. Now I'm getting this in the browser: `404: Not Found`
Are you sure you are using the right URL? It changes every time.
Yes, I double-checked it. I also downgraded tornado to make sure it's not a failure of the patch. That didn't help either.
Unfortunately I'm not sure what the issue is. What I'd recommend is that you install neuroglancer in development mode, by (per instructions in python/README.md) cloning the git repository, and then running:

```
python setup.py develop
python setup.py bundle_client
```
Then in python/neuroglancer/server.py, I'd recommend adding print statements to the get method of the StaticPathHandler class so as to be able to get a better idea of what is going on.
Thank you, @jbms. I did this in server.py:
```python
class StaticPathHandler(BaseRequestHandler):
    def get(self, viewer_token, path):
        print(f"{viewer_token} at {path}")
        print(f"{self.server.token} is server token")
        print(f"{self.server.viewers} are server viewers")
        if viewer_token != self.server.token and viewer_token not in self.server.viewers:
            self.send_error(404)
            return
        try:
            print(f"global={global_static_content_source}")
            data, content_type = global_static_content_source.get(path)
        except ValueError as e:
            self.send_error(404, message=e.args[0])
            return
        self.set_header('Content-type', content_type)
        self.finish(data)
```
And it produced this during the test:
```
python -i examples/example.py
http://127.0.0.1:40671/v/2956cea95659d30405f79bae0217b64ca20265be/
>>> 2956cea95659d30405f79bae0217b64ca20265be at
2c479cad54909b2caf87caed44d4c4117b5d7b3d is server token
<WeakValueDictionary at 0x2ae698f796a0> are server viewers
global=<neuroglancer.static.PkgResourcesContentSource object at 0x2ae698f8b400>
2956cea95659d30405f79bae0217b64ca20265be at styles.css
2c479cad54909b2caf87caed44d4c4117b5d7b3d is server token
<WeakValueDictionary at 0x2ae698f796a0> are server viewers
global=<neuroglancer.static.PkgResourcesContentSource object at 0x2ae698f8b400>
2956cea95659d30405f79bae0217b64ca20265be at main.bundle.js
2c479cad54909b2caf87caed44d4c4117b5d7b3d is server token
<WeakValueDictionary at 0x2ae698f796a0> are server viewers
global=<neuroglancer.static.PkgResourcesContentSource object at 0x2ae698f8b400>
```
As you can see, the viewer token does not match the server token. Moreover, if I use the server token in place of the viewer token (I added one more print to see the server token), the same thing happens.
Please keep helping... I really want to make it work - many, many people will be amazed by this Google technology!
Thanks in advance.
It is confusing that requests are coming in for styles.css and main.bundle.js, if you are just getting a 404 error --- since the browser would not know to fetch those paths if it didn't receive the index.html file.
That suggests that StaticPathHandler may not in fact be returning a 404 error --- and you can verify that by adding additional print statements after the data, content_type = ... line, and also in the except handler.
You might try using the Chrome or Firefox developer tools to investigate what network requests are coming through.
OK, I've tried Chrome now and it kind of works (with lots of flickering). I deleted everything and started over; still flickering. I don't know if it's supposed to be like that. (Ignore the green color - it's an artifact of the GIF maker - but the flickering is real.) Thank you for your help!
I have observed flickering like that when Chrome falls back to software rendering using swiftshader. I don't know the cause, but rendering tends to be too slow to be usable with swiftshader anyway.
Go to webglreport.com to see what driver is being used for webgl. Neuroglancer should work with Intel and Nvidia graphics at least.
Chrome will fall back to Swiftshader if it is unable to load the hardware rendering driver. Sometimes that can happen, at least on Linux, if your graphics driver has been updated since you last rebooted, and is fixed by rebooting your computer.
When I use nyroglancer it works fine.
But I checked the settings of my chrome and got this:
```
Graphics Feature Status
Canvas: Software only, hardware acceleration unavailable
CheckerImaging: Disabled
Flash: Software only, hardware acceleration unavailable
Flash Stage3D: Software only, hardware acceleration unavailable
Flash Stage3D Baseline profile: Software only, hardware acceleration unavailable
Compositing: Software only, hardware acceleration unavailable
Multiple Raster Threads: Enabled
Native GpuMemoryBuffers: Software only. Hardware acceleration disabled
Rasterization: Software only, hardware acceleration unavailable
Video Decode: Software only, hardware acceleration unavailable
WebGL: Hardware accelerated but at reduced performance
WebGL2: Software only, hardware acceleration unavailable
```
I'll see how I can fix these hardware rendering issues and will do some of my tests again.
Thanks again!
Thank you for your help.
I installed a new version of the graphics drivers and went to chrome://flags/ in Chrome to override the software rendering settings.
It works!
Finally, after all the technical issues (on my side), we can come back to the actual topic of this question. In example.py I see 2 arrays:
- `a`: 4D - (3,z,y,x) - uint8 dataset
- `b`: 3D - (z,y,x) - uint32 dataset

I assume that `b` is used for automatic mesh generation. In my case that will be `segs`. Thankfully the format and shape match.
But the format and shape of the overlay image `a` and my `raw` do not match.
What should I do to my `raw` (if I need to do anything at all) for it to work as `a` does in example.py (if I understood it correctly)?
Neuroglancer can work with uint16 single channel/3-d images. Just remove the custom shader property and the offsets.
Thank you, @jbms.
At first it didn't look quite right - I expected grayscale.
Once I asked for the mesh, I got some strange behavior. It looks like there is a lot of noise in my data - or am I interpreting it the wrong way? I just don't understand where it comes from: my `raw` data or my `segs` data?
Another thing I noticed is low CPU utilization during mesh generation (top shows only 200% CPU out of a possible 7200% - it's a Skylake node). There was also quite a lot of memory utilization, but I assume that's due to the noise...
Thank you very much for going through this with me!
It looks like you are displaying the raw data as a SegmentationLayer as well, rather than an ImageLayer.
My questions:
- Why don't I see the nice grayscale background behind the segments, as in the demos on the page? I assume it's because my format is uint16 - if so, should I normalize it to uint8 first?
- Am I generating the mesh correctly? (I double-click on the highlighted segment.)
- Do I need to do any pre-processing to speed it up?
Regarding the mesh generation, the fact that you were displaying both the raw data and your segmentation as segmentations may have affected things. However, in general the mesh generation is unfortunately slow. There are two steps --- an initial marching cubes step that runs over the full volume using multiple threads, which happens the first time you request any mesh, and then for each individual segment a simplification step that runs on a single thread, which happens the first time you request that segment.
The python integration doesn't support a way to precompute the meshes, and is only practical for small volumes. For larger volumes you can convert the data to the precomputed format.
https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/precomputed
There are some third party scripts to help you generate that format --- see e.g. https://github.com/FZJ-INM1-BDA/neuroglancer-scripts
You were completely right. Once I attributed my data with Image and Segmentation layers, it all worked! Not only is it much more beautiful now, it is also much faster. I can clearly see that the first touch of the global mesh takes a while, but after that the locally selected meshes are generated very fast (compared to before). So I would say that this issue is no longer an issue!
I might come back very soon with some more questions - but for now, it looks and behaves exactly as I would expect!
One minor thing, though, is that the Image layer is not as bright as I would expect - is there any way to control its colormap so that more of the active range of the image is used?
First, you generally want the image layer before the segmentation layer -- otherwise it displays based on its opacity setting which defaults to 0.5.
Aside from that, there is no explicit setting, but you can adjust the contrast and brightness by modifying the shader.
Order of layers matters, indeed! Well, that was somewhat unexpected, but reasonable.
Is there any documentation on that shader technique you've mentioned above?
I also noticed that there are a few attributes on the segmentation layer class: `selected_alpha`, `not_selected_alpha`, `object_alpha` - how do they affect the overlay of the image and segments? There is also `opacity` on the image layer class... Should I use that one instead?
I also noticed AnnotationLayer - what is that beast for? Does it add hints to the view, so that on mouse-over I can show some information?
(I should stop calling them 'last' :-) )
Is there any way to trigger the mesh generator in advance? I really want the meshes to be available by the time I give the user the viewer's URL.
Thank you very much for your help!
See this documentation regarding custom image layer shaders: https://github.com/google/neuroglancer/blob/master/src/neuroglancer/sliceview/image_layer_rendering.md
The layers are rendered in the order they are listed, and by default blended as: `dest_value = old_dest_value * (1 - src_alpha) + src_value * src_alpha`
not_selected_alpha and selected_alpha determine the alpha values for the segmentation layer. object_alpha affects the 3-d rendering only.
Yes, the AnnotationLayer lets you do exactly that --- see the discussion in #78 and this example link: https://goo.gl/vhcUXE
There isn't currently a great way to trigger mesh generation in advance, although you could manually call the get_object_mesh method of LocalVolume.
Thank you for the examples and documentation - I will experiment to see how far I can get.
I couldn't figure out how to use `get_object_mesh` - I cannot find a way to get an `object_id` within the `txn()` context. It would be nice to see an example.
Thank you for the very detailed answers! I hope somebody else will find this discussion as useful as it was for me!
```python
vol = neuroglancer.LocalVolume(
    data=b,
    voxel_size=s.voxel_size,
)
vol.get_object_mesh(0)
s.layers.append(
    name='b', layer=neuroglancer.SegmentationLayer(source=vol))
```
Thank you for this, @jbms. I got this error:

```
Traceback (most recent call last):
  File "example.py", line 85, in <module>
    vol.get_object_mesh(0)
  File "neuroglancer/python/neuroglancer/local_volume.py", line 243, in get_object_mesh
    raise InvalidObjectIdForMesh()
neuroglancer.local_volume.InvalidObjectIdForMesh
```
Is there any alternative to this? Maybe an additional flag to the SegmentationLayer which would force mesh generation... But I don't know how to implement that...
Even though you receive the error, the first step, the marching cubes, is still done. To do the simplification as well, you would need to call get_object_mesh for every object id.
I put a try/except around it and it indeed worked! Thanks!

> To do the simplification as well, you would need to call get_object_mesh for every object id.

I don't follow this point... And even if I understood what you meant, I still don't know where to get the list of objects from...
I think this is related enough to include in this thread -- but if not I can create a new issue.
Is there a way to export the mesh that the neuroglancer python integration creates and then load it from a static URL? I've been unsuccessful in determining what the output of get_object_mesh is (a raw byte string?).
The format is identical to the precomputed mesh format, documented here:
If you write the output to a file, then create the appropriate manifest JSON file for each object, you can view it as a precomputed mesh source.
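A sketch of what that could look like, assuming `mesh_bytes` is the raw byte string returned by `get_object_mesh` (the placeholder bytes and file names here are made up; the `<id>:0` manifest with a `fragments` list follows the precomputed mesh layout):

```python
import json
import os
import tempfile

object_id = 1
mesh_bytes = b'\x00\x00\x00\x00'  # placeholder for vol.get_object_mesh(object_id)

mesh_dir = os.path.join(tempfile.mkdtemp(), 'mesh')
os.makedirs(mesh_dir)

# Write the fragment data; the name is arbitrary but must match the manifest.
fragment_name = '%d.frag' % object_id
with open(os.path.join(mesh_dir, fragment_name), 'wb') as f:
    f.write(mesh_bytes)

# Manifest file named "<object_id>:0" listing the fragment(s) for that object.
manifest_path = os.path.join(mesh_dir, '%d:0' % object_id)
with open(manifest_path, 'w') as f:
    json.dump({'fragments': [fragment_name]}, f)
```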
Thank you, @jbms, for your help! How do I remove the visualization of the orthogonal 3D planes? The reason I need this is that I can browse through my data in the 3 other windows, but I would like to see the generated 3D object clearly in my 3D view without the panels getting in the way. Maybe I can set transparency for the panels, or somehow disable them in the 3D view?
Use the Slices checkbox, or press `s`.
I have tried a few examples so far and couldn't make them work in my case.
But I don't know where to start in order to make neuroglancer work with my data.
Could anyone please help me?
Thanks in advance, Anar.