slarson / wholebrain

Automatically exported from code.google.com/p/wholebrain

Memory Error when Checking on Cells Tree after Importing Network #434

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Import network
2. Once the network has been imported, select the cells checkbox to make them visible

What is the expected output? What do you see instead?
Expected: the cells become visible. Instead, WBC crashes with:
Exception java.lang.OutOfMemoryError: requested 128000 bytes for GrET* in C:/BUILD_AREA/jdk1.5.0_17/hotspot\src\share\vm\utilities\growableArray.cpp. Out of swap space?


Original issue reported on code.google.com by piperfl...@gmail.com on 19 Feb 2010 at 9:58

GoogleCodeExporter commented 9 years ago

Original comment by piperfl...@gmail.com on 19 Feb 2010 at 10:06

GoogleCodeExporter commented 9 years ago
I should mention that I am allocating 1.5 GB of heap space. The program is not crashing due to lack of heap space; according to my Task Manager, the most it ever uses is 800 MB. The problem seems to be occurring because the threads can't find contiguous memory, or my system runs out of swap space.
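The heap headroom the Task Manager shows can also be confirmed from inside the JVM itself, which helps distinguish heap exhaustion from native-memory exhaustion. A minimal standalone sketch (not part of WBC) using the standard `Runtime` API:

```java
public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // maxMemory: the -Xmx ceiling; totalMemory: heap currently committed;
        // freeMemory: unused space within the committed heap. If max is far above
        // total at crash time, the OutOfMemoryError is coming from native memory
        // (thread stacks, direct buffers, swap), not from the Java heap.
        System.out.println("max   heap: " + rt.maxMemory() / mb + " MB");
        System.out.println("total heap: " + rt.totalMemory() / mb + " MB");
        System.out.println("free  heap: " + rt.freeMemory() / mb + " MB");
    }
}
```

Note that "unable to create new native thread" and "Out of swap space?" errors are raised even when this report shows plenty of free heap, because thread stacks are allocated outside the `-Xmx` region.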

Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:597)
    at java.awt.EventQueue.initDispatchThread(EventQueue.java:833)
    at java.awt.EventDispatchThread.run(EventDispatchThread.java:153)
java.lang.OutOfMemoryError

Original comment by piperfl...@gmail.com on 19 Feb 2010 at 10:26

GoogleCodeExporter commented 9 years ago
Hi Jesus. Can you attach the network you are importing to the issue, as well as give exact instructions for how you are doing the import, and what version you are working with? Thanks.

Original comment by stephen....@gmail.com on 19 Feb 2010 at 10:29

GoogleCodeExporter commented 9 years ago
1. Go to Debug Menu and choose Import File option
2. Choose network file to import
3. Import network

Original comment by piperfl...@gmail.com on 19 Feb 2010 at 10:32

Attachments:

GoogleCodeExporter commented 9 years ago
I was unable to solve this issue today, but I noticed I am now getting different errors.

The network was uploaded, the client was closed, then the client was reloaded; on reloading the scene I got this:

Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError
    at sun.misc.Unsafe.allocateMemory(Native Method)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:99)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
    at com.jme.util.geom.BufferUtils.createFloatBuffer(BufferUtils.java:731)
    at org.wholebrainproject.wbc.view.tangible.NeuronCloud.getGeomInstanceMesh(NeuronCloud.java:294)
    at org.wholebrainproject.wbc.view.tangible.NeuronCloud.makeMDInstances(NeuronCloud.java:554)
    at org.wholebrainproject.wbc.view.tangible.NeuronCloud.setup(NeuronCloud.java:644)
    at org.wholebrainproject.wbc.view.tangible.NeuronCloud.addCloudInstances(NeuronCloud.java:818)
    at org.wholebrainproject.wbc.view.View3D.setCells(View3D.java:753)
    at org.wholebrainproject.wbc.observers.SceneViewObserver.reloadAll(SceneViewObserver.java:384)
    at org.wholebrainproject.wbc.observers.SceneViewObserver.doSceneUpdate(SceneViewObserver.java:128)
    at org.wholebrainproject.wbc.observers.SceneViewObserver.update(SceneViewObserver.java:99)
    at java.util.Observable.notifyObservers(Observable.java:142)
    at org.wholebrainproject.wbc.scene.Scene.changed(Scene.java:445)
    at org.wholebrainproject.wbc.view.View.simpleSetup(View.java:534)
    at com.jme.system.canvas.SimpleCanvasImpl.doSetup(SimpleCanvasImpl.java:116)
    at com.jmex.awt.lwjgl.LWJGLCanvas.initGL(LWJGLCanvas.java:113)
    at org.lwjgl.opengl.AWTGLCanvas.paint(AWTGLCanvas.java:286)
    at sun.awt.RepaintArea.paintComponent(RepaintArea.java:248)
    at sun.awt.RepaintArea.paint(RepaintArea.java:224)
    at sun.awt.windows.WComponentPeer.handleEvent(WComponentPeer.java:306)
    at java.awt.Component.dispatchEventImpl(Component.java:4577)
    at java.awt.Component.dispatchEvent(Component.java:4331)
    at java.awt.EventQueue.dispatchEvent(EventQueue.java:599)
    at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
    at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
    at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
    at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
    at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
    at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)

I modified the NeuronCloud.java code on my machine to exclude line 294, since I figured from the stack trace that this is where the problem was occurring.

I then tried re-running the Whole Brain Catalog and this time got a different stack trace:
Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError
    at sun.misc.Unsafe.allocateMemory(Native Method)
    at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:99)
    at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
    at com.jme.util.geom.BufferUtils.createFloatBuffer(BufferUtils.java:731)
    at com.jme.util.geom.BufferUtils.createVector3Buffer(BufferUtils.java:244)
    at org.wholebrainproject.wbc.view.tangible.NeuronCloud.getGeomInstanceMesh(NeuronCloud.java:289)
    at org.wholebrainproject.wbc.view.tangible.NeuronCloud.makeMDInstances(NeuronCloud.java:554)
    at org.wholebrainproject.wbc.view.tangible.NeuronCloud.setup(NeuronCloud.java:644)
    at org.wholebrainproject.wbc.view.tangible.NeuronCloud.addCloudInstances(NeuronCloud.java:818)
    at org.wholebrainproject.wbc.view.View3D.setCells(View3D.java:753)
    at org.wholebrainproject.wbc.observers.SceneViewObserver.reloadAll(SceneViewObserver.java:384)
    at org.wholebrainproject.wbc.observers.SceneViewObserver.doSceneUpdate(SceneViewObserver.java:128)
    at org.wholebrainproject.wbc.observers.SceneViewObserver.update(SceneViewObserver.java:99)
    at java.util.Observable.notifyObservers(Observable.java:142)
    at org.wholebrainproject.wbc.scene.Scene.changed(Scene.java:445)
    at org.wholebrainproject.wbc.view.View.simpleSetup(View.java:534)
    at com.jme.system.canvas.SimpleCanvasImpl.doSetup(SimpleCanvasImpl.java:116)
    at com.jmex.awt.lwjgl.LWJGLCanvas.initGL(LWJGLCanvas.java:113)
    at org.lwjgl.opengl.AWTGLCanvas.paint(AWTGLCanvas.java:286)
    at sun.awt.RepaintArea.paintComponent(RepaintArea.java:248)
    at sun.awt.RepaintArea.paint(RepaintArea.java:224)
    at sun.awt.windows.WComponentPeer.handleEvent(WComponentPeer.java:306)
    at java.awt.Component.dispatchEventImpl(Component.java:4577)
    at java.awt.Component.dispatchEvent(Component.java:4331)
    at java.awt.EventQueue.dispatchEvent(EventQueue.java:599)
    at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
    at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
    at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
    at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
    at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
    at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)

Line 289 of NeuronCloud is:
batch.setVertexBuffer(BufferUtils.createVector3Buffer(creator.getNumVertices()));
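As the stack trace shows, jME's BufferUtils.createVector3Buffer bottoms out in java.nio.ByteBuffer.allocateDirect, which allocates native (off-heap) memory. A minimal standalone sketch illustrating the distinction (this is plain JDK code, not WBC code):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // Direct buffers are carved out of native memory, not the Java heap,
        // so raising -Xmx does not give allocateDirect more room; the off-heap
        // ceiling is governed separately (e.g. -XX:MaxDirectMemorySize).
        ByteBuffer direct = ByteBuffer.allocateDirect(1024);
        ByteBuffer heap = ByteBuffer.allocate(1024);
        System.out.println("direct buffer off-heap? " + direct.isDirect());
        System.out.println("heap buffer off-heap?   " + heap.isDirect());
    }
}
```

This is consistent with the observation above that the crash happens while the Java heap still has hundreds of MB free: each per-cell mesh buffer consumes native memory until allocateMemory fails.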

Original comment by piperfl...@gmail.com on 20 Feb 2010 at 2:03

GoogleCodeExporter commented 9 years ago
I have investigated and found a problem, 

Cleaning this up might fix the problem, and if not, it will allow for massive 
saving
ins memory/efficiency

In the routine, 'networkReader' from FileImportFactory, the method creates a new
NeuronMorpohlogy (this is fine), but for every single NeuronMorpohlogy is is 
using a
new data source. 

To put it another way, two "Granule Cells" are NOT using the same Data Wrapper, 
when
they ought to be. This would, of course, blow up memory.

The correct strategy would be to re-use the data by linking to it for cells that are of the same type. For example:
GranuleCellA.state.DataWrapperID = "1234"
GranuleCellB.state.DataWrapperID = "1234"
GranuleCellC.state.DataWrapperID = "1234"
GolgiA.state.DataWrapperID = "789"
GolgiB.state.DataWrapperID = "789"

This will reduce memory overhead, even if it does not solve the problem.
Look closely at the book-keeping done to match files with cell instances in the way I read them in for Coral's network. (Will post reply when I find it)
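The many-to-one strategy above can be sketched as a small cache keyed by cell type. This is an illustrative sketch only; the class and method names (DataWrapperCache, idFor) are hypothetical, not actual WBC APIs:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: hand out one shared DataWrapperID per cell type,
// so GranuleCellA/B/C all resolve to the same ID rather than each
// creating its own data source.
public class DataWrapperCache {
    private final Map<String, String> wrapperIdByType = new HashMap<>();
    private int nextId = 1000;

    // Return the shared ID for this cell type, minting one only the
    // first time the type is seen during import.
    public String idFor(String cellType) {
        return wrapperIdByType.computeIfAbsent(
                cellType, t -> String.valueOf(nextId++));
    }
}
```

With this shape, the import loop in networkReader would look up the wrapper per type instead of constructing a fresh data source per NeuronMorphology.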

Original comment by caprea on 23 Feb 2010 at 2:40

GoogleCodeExporter commented 9 years ago
It looks like the code might have been deleted/changed, but I used to make use of a method called 'matchType', which won't be too useful now, but does suggest what was going through my brain when I did the import for Coral's network. I maintained a map of the dataWrappers using a NeuronMorphology, and if the cell being imported was the same TYPE as something already in WBC, it wouldn't import it fully; it would just make a new instance of it.

Original comment by caprea on 23 Feb 2010 at 2:49

GoogleCodeExporter commented 9 years ago
By assigning the same DataWrapperID to all instances in each "Population", you will also increase the speed of the import, since you will not need to download/read the data for each cell (only once per population of cell type).

I suspect the reason it is crashing is that it's trying to assign video card memory to handle more shared meshes, and it has run out. Yes, in this case the cloud fails, but only because it's not being used efficiently. The cloud was designed on the principle that there would be only 'one' mesh for each cell type.

Original comment by caprea on 23 Feb 2010 at 3:45

GoogleCodeExporter commented 9 years ago
I'm coming at this from a high level, so I may be off base here, but is there a way to modify the NeuronCloud API to enforce that it is not misused in the manner you describe? One way to improve an API is to make it harder to misuse. If we fix this problem, perhaps we can also make the API better to avoid such problems in the future.

Original comment by stephen....@gmail.com on 23 Feb 2010 at 4:09

GoogleCodeExporter commented 9 years ago
Forget about the Cloud, and just think about using resources for Neurons efficiently in general. If all of the CellInstances have a 1-to-1 mapping to DataWrappers, that is a lot less efficient than a many-to-1 mapping of CellInstances:DataWrappers; that's the spirit in which the cloud is designed. The problem here isn't the cloud, it's memory management.

How do we encourage CellInstances to check whether there is a DataWrapper for them?
1. A heuristic that compares names of cells: if a similarly named cell exists, then it should infer that the DataWrapper should be the same?
2. Disable the ability to create a NeuronMorphology (make it a private class) and force developers to only be able to create "Populations" (groups of NeuronMorphologies). However, this can be abused if taken to the extreme of making multiple Populations with only 1 cell in them.
3. ?
4. Profit!
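Option 2 can be made concrete with a private constructor, which is one standard way to make an API harder to misuse. A hypothetical sketch (the real NeuronMorphology and Population classes in WBC look different; this only illustrates the access-control idea):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of option 2: hide the NeuronMorphology constructor so callers can
// only obtain instances through a Population. All names are illustrative.
public class Population {
    // Private constructor on the nested type: code outside Population
    // cannot create a NeuronMorphology directly.
    public static final class NeuronMorphology {
        private final String type;
        private NeuronMorphology(String type) { this.type = type; }
        public String getType() { return type; }
    }

    private final String cellType;
    private final List<NeuronMorphology> members = new ArrayList<>();

    public Population(String cellType) { this.cellType = cellType; }

    // Every member created here shares the population's single cell type,
    // so the backing data need only be loaded once per population.
    public NeuronMorphology addInstance() {
        NeuronMorphology m = new NeuronMorphology(cellType);
        members.add(m);
        return m;
    }

    public int size() { return members.size(); }
}
```

As noted above, this still permits the degenerate case of many one-cell Populations; it raises the barrier to misuse rather than eliminating it.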

Original comment by caprea on 23 Feb 2010 at 4:26

GoogleCodeExporter commented 9 years ago
I have a solution.

It just occurred to me that the cell cloud renderer could limit the number of unique cell TYPES it would render. But this could cause a lot of non-compile-error headaches if someone forgets about this limiter.

To illustrate, if the cell renderer thought there were 200 Granule Cell shapes, 100 Purkinje Cell shapes, etc. (when in reality there ought to be only 1 of each), it could just stop somewhere around ~20. But this would be like writing into the code that only 20 cell types exist.

Original comment by caprea on 23 Feb 2010 at 4:47

GoogleCodeExporter commented 9 years ago
"Cleaning this up might fix the problem, and if not, it will allow for massive 
saving
ins memory/efficiency

In the routine, 'networkReader' from FileImportFactory, the method creates a new
NeuronMorpohlogy (this is fine), but for every single NeuronMorpohlogy is is 
using a
new data source. "

You were absolutely right. I am now importing the network following your 
strategy:
"The correct strategy would be to re-use the data by linking to it for two 
cells that
are of the same type. For example, 
GranuleCellA.state.DataWrapperID = "1234"
GranuleCellB.state.DataWrapperID = "1234"
GranuleCellC.state.DataWrapperID = "1234"
GolgiA.state.DataWrapperID = "789"
GolgiB.state.DataWrapperID = "789""

And is working now. 
Thanks

Original comment by piperfl...@gmail.com on 23 Feb 2010 at 7:32

GoogleCodeExporter commented 9 years ago

Original comment by piperfl...@gmail.com on 18 Mar 2010 at 8:48