chiggum / mindthegap

Vectorize bitmaps without introducing gaps or overlaps between adjacent regions

Is the algorithm extendable to density maps, or does it already work on maps? #1

Closed rimmartin closed 6 years ago

rimmartin commented 9 years ago

Hi,

I saw your comment on Boost. Is the PNG just a vehicle, and does your mindthegap algorithm work on the 3D density data?

Could it work within the framework of point clouds (http://www.pointclouds.org/)?

And one more: can it find bodies and spheroidal shapes within a volume? http://bigwww.epfl.ch/preprints/delgadogonzalo1302p.pdf

chiggum commented 9 years ago

Hi, I'm so sorry, I never noticed this issue.

As far as 3D density data is concerned, I worked on the vectorization of brain atlases in GSoC'14, which took 3D brain atlases in NIfTI format as input. The task involved extracting the brain slices from the NIfTI file, converting them into SVG using mindthegap, and finally recombining the SVGs to get the final 3D SVG brain. (http://scalablebrainatlas.incf.org/main/coronal3d.php?template=PHT00&)
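
The recombination idea can be sketched roughly like this (a hypothetical Python illustration, not mindthegap's actual code; the helper names are made up): boundary points traced per slice get tagged with their slice index z:

```python
# Hypothetical sketch of the slice -> trace -> 3D recombination idea.
# Not mindthegap's real pipeline; function names are illustrative.

def boundary_pixels(slice_2d):
    """Pixels set to 1 that touch at least one 0 (or the image border)."""
    h, w = len(slice_2d), len(slice_2d[0])
    out = []
    for y in range(h):
        for x in range(w):
            if not slice_2d[y][x]:
                continue
            nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            if any(ny < 0 or ny >= h or nx < 0 or nx >= w or not slice_2d[ny][nx]
                   for ny, nx in nbrs):
                out.append((x, y))
    return out

def stack_boundaries(volume):
    """Recombine per-slice boundaries into 3D points (x, y, z)."""
    return [(x, y, z)
            for z, sl in enumerate(volume)
            for x, y in boundary_pixels(sl)]

# A tiny 2-slice volume: in a solid 3x3 square, only the center pixel
# is interior, so each slice contributes 8 boundary points.
vol = [[[1, 1, 1], [1, 1, 1], [1, 1, 1]]] * 2
print(len(stack_boundaries(vol)))  # 16
```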

But I'm not sure whether this is what you need. You probably meant some utility that can generate SVG paths in three dimensions. I don't think mindthegap supports this, but I'm sure it could easily be built on top of mindthegap.

I hope this answered your other two questions too.

It's a nice idea to have a 3d tracer. I'll see if I can work it out. Thanks.

rimmartin commented 9 years ago

Hi,

I see SVG only as a slice vehicle (I/O). I'm more interested in 3D intensity data and in locating pockets, blobs, and voids, preferably where feedback can move a variable 3D mesh to those locations and to the volume's surface limits, to increase the quality and accuracy of determining the boundaries.
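
For concreteness, locating such blobs or voids in a 3D occupancy grid could start from plain 6-connected component labeling (a hypothetical Python sketch, not something mindthegap does today):

```python
# Hedged sketch: find "blobs" in a 3D occupancy grid by flood-filling
# 6-connected nonzero voxels. Illustrative only.
from collections import deque

def connected_components_3d(grid):
    """Return the sizes of 6-connected components of nonzero voxels."""
    nz, ny, nx = len(grid), len(grid[0]), len(grid[0][0])
    seen = set()
    sizes = []
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not grid[z][y][x] or (z, y, x) in seen:
                    continue
                q = deque([(z, y, x)])
                seen.add((z, y, x))
                size = 0
                while q:
                    cz, cy, cx = q.popleft()
                    size += 1
                    for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                        w = (cz + dz, cy + dy, cx + dx)
                        if (0 <= w[0] < nz and 0 <= w[1] < ny and 0 <= w[2] < nx
                                and grid[w[0]][w[1]][w[2]] and w not in seen):
                            seen.add(w)
                            q.append(w)
                sizes.append(size)
    return sizes

# Two separate 1-voxel blobs in a 2x2x3 grid:
grid = [[[1, 0, 0], [0, 0, 0]], [[0, 0, 0], [0, 0, 1]]]
print(sorted(connected_components_3d(grid)))  # [1, 1]
```

The same fill run over the zero voxels (minus the component touching the border) would locate interior voids.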

rimmartin commented 9 years ago

Or, to put it another way: can gradients be computed from your algorithm (or maybe you are already computing them)? That is, d(mindthegap)/dx, where x is a 3D coordinate.

chiggum commented 9 years ago

Hi, Sorry for the late reply.

To my understanding, given f = f(x,y,z), we can calculate df/dx. But I don't think the current version of mindthegap can calculate f itself, i.e. a path in 3D; in other words, it cannot trace paths/boundaries in 3D. I'm pretty sure this can be worked on with ease, though. Give me some time and I'll let you know my insights on this problem.
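
A central-difference sketch of df/dx, assuming f is available as a callable (names are illustrative; mindthegap itself does not expose this):

```python
# Sketch: approximate df/dx of a sampled 3D scalar field f(x, y, z)
# with central differences. Purely illustrative.

def grad_x(f, x, y, z, h=1e-5):
    """Central-difference approximation of df/dx at (x, y, z)."""
    return (f(x + h, y, z) - f(x - h, y, z)) / (2 * h)

f = lambda x, y, z: x * x + y + z  # analytically, df/dx = 2x
print(abs(grad_x(f, 3.0, 0.0, 0.0) - 6.0) < 1e-6)  # True
```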

Thanks.

rimmartin commented 9 years ago

Very interesting. I'm hoping there is more than one application and thus valuable pressure to develop this. I haven't had time to try it or look under the hood. I came here from Boost and read "vectorize bitmaps without introducing gaps or overlaps between adjacent areas", and that is a great starting point if it can be extended to 3D!

chiggum commented 9 years ago

Hey! Is it possible for you to provide me with a 3D density data file? And do you know of any software that renders 3D density data (in SVG or some other standard format)?

rimmartin commented 9 years ago

I have a bunch that are slices in PNG; I'll look for a PNG-to-SVG converter, or output to SVG directly. For SVG, do you want the base64-embedded images that some converters produce?
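
For reference, the "base64 stuff" usually means embedding the raster slice in the SVG as a data URI inside an `<image>` element; a minimal Python sketch (the PNG bytes below are a placeholder, not a real image):

```python
# Sketch: wrap raw PNG bytes in an SVG <image> element via a base64 data URI.
import base64

def png_to_svg(png_bytes, width, height):
    b64 = base64.b64encode(png_bytes).decode("ascii")
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
        f'<image width="{width}" height="{height}" '
        f'href="data:image/png;base64,{b64}"/></svg>'
    )

svg = png_to_svg(b"\x89PNG...", 256, 256)  # placeholder bytes, not a valid PNG
print(svg.startswith("<svg"))  # True
```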

rimmartin commented 9 years ago

I also have a small OpenCL-based app that does actual 3D rendering; it currently compiles on Ubuntu. What OS are you on?

chiggum commented 9 years ago

Hey! Slices in PNG will probably work. Can you please share them, if that's not a problem for you? Also, I have Linux Mint installed, so the app will probably work on my PC too. Could you share that as well?

rimmartin commented 9 years ago

Since they can be big, the resolution can be played with, and I have some more slots in my GitHub account, I'll rework it into an application with some unit tests and make it accessible to you and me over the weekend. It generates from much smaller input.

chiggum commented 9 years ago

okay! cool.

rimmartin commented 9 years ago

Hi @chiggum, I'm a little delayed. The one I have has a bunch of dependencies that would be a pain, and I've also been thinking about WebGL and why volume rendering isn't there. Tonight I'm playing with Chrome extensions, hoping to find how to get to V8 native code and see whether it's possible to stand up a volume renderer. If this looks possible, I'll make it a public repository with an MIT license and propose adding it to WebGL.

Otherwise I'll make a clean desktop app for what we were discussing.

I'm comfortable with V8 native coding, so I hope it's possible. They have a similar project style to Node.js; just pure V8.

rimmartin commented 9 years ago

Ah, what I have so far is the 2D-based canvas technology:

                var canvas = document.getElementById('photoplate');
                var context = canvas.getContext("2d");

What I want is a 3D context. I see Firefox had/has an experimental add-on for it; Opera seems to be working on experimental builds with it. More reading...

chiggum commented 9 years ago

Great! I'll look into that too. I think we should make another repository for our work and try to formulate the problem more precisely. Tell me if you have a nice name in mind.

rimmartin commented 9 years ago

Definitely a new repository [different build and dependency requirements]. And :-) I'm only good at development names, not marketing or catchy ones.

rimmartin commented 9 years ago

WebCL 1.0 doesn't support any 3D images or volume rendering either: https://www.khronos.org/registry/webcl/specs/1.0.0/ (see the "Differences between WebCL and OpenCL 1.1" section). Most browsers are staying at the common denominator of the many embedded devices that don't have the 3D image hardware power.

Opera is no longer pushing itself on new technology like it did for so many years. There is a new browser coming to do that, https://vivaldi.com, and I'm looking into where it is at and whether it allows extensions.

chiggum commented 9 years ago

OK, cool. By the way, is this of any help: https://github.com/matthiasak/3D-svg-model-viewer?

rimmartin commented 9 years ago

Going with current technologies, WebGL and three.js can get quite far, but I was wanting something faster and more direct to the hardware, which has been lacking. I've made the 3D texture appear as a volume rendering by starting from and modifying https://dl.dropboxusercontent.com/u/7508542/three.js/body/index.html

To get rid of the darkening when looking at the edge(s) of a 2D texture, I loaded all three stacks of slices, one along each axis. Snapshot attached (3fva_density).

The page loads with the partial slices coming in, and navigation performs as all WebGL/three.js does on a browser with hardware acceleration enabled.

My difficulties come in because the unit cell isn't orthogonal: the current technology didn't skew/slant the image textures for me, and the origins change with the skew/slant. (There are about 230 different space groups to deal with.) But I could take this further, and you would have data to begin extending your algorithm to 3D. I could pick a starting example space group that is orthogonal and continue to develop for skews/slants.
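
For the skew/slant itself, the standard crystallographic fractional-to-Cartesian transform could be sketched like this (an illustrative Python version; angles in degrees):

```python
# Sketch: map fractional cell coordinates to Cartesian for a possibly
# non-orthogonal unit cell with lengths a, b, c and angles alpha, beta, gamma.
import math

def frac_to_cart(a, b, c, alpha, beta, gamma, xf, yf, zf):
    ca, cb, cg = (math.cos(math.radians(t)) for t in (alpha, beta, gamma))
    sg = math.sin(math.radians(gamma))
    # Unit-cell volume factor of the standard orthogonalization matrix.
    v = math.sqrt(1 - ca * ca - cb * cb - cg * cg + 2 * ca * cb * cg)
    x = a * xf + b * cg * yf + c * cb * zf
    y = b * sg * yf + c * (ca - cb * cg) / sg * zf
    z = c * v / sg * zf
    return x, y, z

# For an orthogonal cell (all angles 90 degrees) it reduces to plain scaling:
print([round(u, 6) for u in frac_to_cart(10, 20, 30, 90, 90, 90, 0.5, 0.5, 0.5)])
# [5.0, 10.0, 15.0]
```

A non-90° gamma then produces exactly the skewed axes the browser textures were missing.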

Then, outside the browser, I do have a viewer that is a volume renderer in C++11 and OpenCL [CUDA 7.0 on Ubuntu]; the app currently has dependencies on Boost, Pion, and one proprietary chunk of code [easily removed].

With Pion I could load the data as a RESTful endpoint and put it into Chrome via the outdated https://developer.chrome.com/extensions/npapi, but this wouldn't be handy for other users and browsers.

So I was looking at where the tech is for getting volume rendering through to the hardware from browsers. There is a lot of scientific and bio data that could really use this, but it isn't the market share that end consumers are. WebGL/WebCL could do it readily if the Khronos Group would allow it.

rimmartin commented 9 years ago

641 images total for the above; a slice looks like the attached image.

Since working with PNGs, I've learned how to stream such data (no file format) to web pages with binaryjs in my hdf5 node project. It could also go compressed.

rimmartin commented 9 years ago

https://developer.chrome.com/native-client seems to be where C++ code can be run in the Chrome browser. I'm exploring using their nacl_sdk...

chiggum commented 9 years ago

Okay! But I think, for now, we should concentrate on a desktop app or software rather than worrying about how it would be rendered in the browser. I'm also a bit confused about what the input data looks like. The output is roughly clear in my mind, but a better view is still required. Let's continue our discussion on the new repo mindthegap-3d that I made. I've added you as a collaborator.

chiggum commented 6 years ago

Feel free to reopen.