Maybe related to #2638?
(continuing from #2638)
@alteredq have you tried the two heap snapshots comparison technique?
@mrdoob I wanted to, but I can't take a snapshot of the new r53 version; Chrome always crashes :S
Anyway, I think I understand what's going on. The final object is constructed by merging tons of individual components. It's pretty big by itself, but still quite fine.
Before, with r51, these temporary objects were simply garbage collected after they served their purpose.
But since we added these object / material / geometry libraries, they prevent garbage collection, so these temp objects just stay around forever and increase memory consumption.
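To make the mechanism concrete, here is a minimal sketch of the pattern (the registry mirrors what THREE.GeometryLibrary does conceptually; the real r52/r53 internals differ in detail):

// Minimal sketch of how a global registry defeats garbage collection.
var GeometryLibrary = [];

function Geometry() {
    // every new geometry registers itself, roughly as the r52/r53 libraries do
    GeometryLibrary.push( this );
}

Geometry.prototype.deallocate = function () {
    var index = GeometryLibrary.indexOf( this );
    if ( index !== -1 ) GeometryLibrary.splice( index, 1 );
};

function buildCombined() {
    var temp = new Geometry(); // meant to be temporary
    // ... merge temp into a combined geometry ...
    // temp goes out of scope here, but GeometryLibrary still holds a
    // reference, so the GC can never reclaim it unless deallocate() is called.
}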
And now comes the kicker: in our regular usage, even if we missed something like that, not much would happen. But because this application is so memory hungry, doubling the memory use (the sum of all temp objects plus the final object) pushes it over some magic Chrome limit that makes everything extremely slow.
I just ran into that magic limit elsewhere a few days ago, just testing something in the console. I was increasing an array's size, and after some number there was a huge performance cliff. I assume it was due to some swapping or something like that.
@tarelli An easy way to test this theory would be to add deallocations after the temp objects are used for merging.
If this turns out to be true, it would be a kind of perfect example of how dangerous these libs are.
I tried the following:
for ( var seindex in jsonEntity.subentities ) {
    var threeObject = OW.getThreeObjectFromJSONEntity( jsonEntity.subentities[ seindex ], mergeSubentities );
    THREE.GeometryUtils.merge( combined, threeObject );
    OW.renderer.deallocateObject( threeObject );
}
entityObject = new THREE.Mesh( combined, material );
but it doesn't seem to make a difference. Is that the correct way to deallocate?
This should be the complete code.
for ( var seindex in jsonEntity.subentities ) {
    var threeObject = OW.getThreeObjectFromJSONEntity( jsonEntity.subentities[ seindex ], mergeSubentities );
    THREE.GeometryUtils.merge( combined, threeObject );
    OW.renderer.deallocateObject( threeObject );
    threeObject.deallocate();
}
entityObject = new THREE.Mesh( combined, material );
Nope, same :(
They should really put a confirmation box in the close button :)
hehe, I have had that happen a lot too.
Do you also deallocate the generated geometry??
for ( var seindex in jsonEntity.subentities ) {
    var threeObject = OW.getThreeObjectFromJSONEntity( jsonEntity.subentities[ seindex ], mergeSubentities );
    THREE.GeometryUtils.merge( combined, threeObject );
    OW.renderer.deallocateObject( threeObject );
    threeObject.geometry.deallocate();
    threeObject.deallocate();
}
entityObject = new THREE.Mesh( combined, material );
Thanks, it worked! Adding these three lines did the trick, and frame rate is back to 60fps.
OW.renderer.deallocateObject(threeObject);
threeObject.geometry.deallocate();
threeObject.deallocate();
Not sure if you want to do anything so that those references get cleared without having to do it explicitly, but I'm happy enough :) Anyway, thanks for the awesome support; I'll let you know as soon as it's deployed. Also, if you have any feedback on how I'm doing things, or if I could do anything in a smarter way, do let me know :)
If you're not using the material somewhere else, you can also add:
threeObject.material.deallocate();
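Put together, the whole cleanup in the merge loop would look something like this (a sketch assembled from the snippets in this thread; skip the material line wherever the material is shared):

for ( var seindex in jsonEntity.subentities ) {
    var threeObject = OW.getThreeObjectFromJSONEntity( jsonEntity.subentities[ seindex ], mergeSubentities );
    THREE.GeometryUtils.merge( combined, threeObject );
    OW.renderer.deallocateObject( threeObject );
    threeObject.geometry.deallocate();
    threeObject.material.deallocate(); // only if the material isn't reused elsewhere
    threeObject.deallocate();
}
entityObject = new THREE.Mesh( combined, material );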
Thanks, in some places I added that too.
@alteredq That usually happens if the JS application uses >400 MB of memory (I suspect it's because, by the time the heap profiler opens the memory dump, the real OS memory usage is >2 GB).
> I wanted to, but I can't take a snapshot of the new r53 version; Chrome always crashes :S
@tarelli Yay! That was a tough one ;)
@zz85 Oh well, so heap profiling will crash in cases where you need it the most :S
@mrdoob After this experience, I think I'm more in favor of removing these object / material / texture libraries. Even when being aware of these gotchas, it still took way too much time to connect all the dots together.
I expect such problems will keep happening over and over. And in most cases they would just go unnoticed, because the memory load wouldn't be big enough to create this situation, yet apps using three.js would be much fatter.
It's like how it was only in ro.me that we started having problems with keeping the arrays needed for dynamic objects.
Oh, and I just realized: if you don't need to change the geometry after creation, you can also decrease memory use by setting:
geometry.dynamic = false;
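In this thread's case that would be the merged geometry, using the variable name from the loop above:

combined.dynamic = false; // set once the merged geometry is final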
This should also help; even now the Chrome process takes 654 MB.
Thanks! I added that and redeployed it :)
> @mrdoob After this experience, I think I'm more in favor of removing these object / material / texture libraries. Even when being aware of these gotchas, it still took way too much time to connect all the dots together.
Well, this wasn't really a good example, because we quickly moved the focus directly to MeshFaceMaterial due to recent issues. But these objects were introduced to help with exactly this kind of situation.
I wonder what would have happened if, instead of moving the focus, we had asked things like "How many geometries is your application supposed to have?" Then, by checking THREE.GeometryLibrary in the console, we could have seen that there were 5x more in memory.
I do see your point, but I wouldn't like to remove the feature before letting it prove whether it's worth it or not.
> Well, this wasn't really a good example, because we quickly moved the focus directly to MeshFaceMaterial due to recent issues.
But that's kind of my point: we both completely missed the implications of these libraries for a code pattern we use quite often ourselves, the creation of larger objects out of many smaller ones.
It's dangerous exactly because, if you aren't aware of these dangling references all the time, you'll run into this type of issue. Which is what garbage collection tries to free your mind from.
I came from a C/C++ background, where you don't have garbage collection. Memory management bugs are a constant source of pain in that world.
> I wonder what would have happened if, instead of moving the focus, we had asked things like "How many geometries is your application supposed to have?" Then, by checking THREE.GeometryLibrary in the console, we could have seen that there were 5x more in memory.
We need these libs to help debug the problems that they themselves cause ;)
But the thing is, this code not only helps when debugging but also highlights the use of bad practices.
I'm not sure it's a good idea for people to assume that they can create things and not care, because the GC will magically clean up their mess.
You're proposing the scenario of having a butler that cleans after the children, and I'm proposing teaching the children to clean after themselves. In the long term, what's better?
> You're proposing the scenario of having a butler that cleans after the children, and I'm proposing teaching the children to clean after themselves. In the long term, what's better?
If the history of programming is any indication, in the general case it's better to have a butler, and to only take care of cleaning yourself in specific performance-critical cases ;)
Believe me, having garbage collection is a major, major productivity boost.
I suspect you are optimistic mostly because you don't realize how much you already depend on the "butler".
I guess a valid point would be that we could also check THREE.GeometryIdCount for a situation like this...
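For example, straight in the DevTools console (a sketch assuming the r52/r53-style globals, where the library is a plain array and the id counter only ever grows):

THREE.GeometryLibrary.length; // geometries still registered (not deallocated)
THREE.GeometryIdCount;        // geometries ever created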
Oh well, I'm going to miss being able to see exactly what's going on :)
> Oh well, I'm going to miss being able to see exactly what's going on :)
Me too; that's why I liked it in the first place. It is useful, it's just that it leads to tricky problems, and we don't like tricky problems.
One possible workaround, assuming the scene object is globally accessible (usually it is):
var geometries = [];

scene.traverse( function ( node ) {
    if ( node.geometry ) geometries.push( node.geometry );
} );
To be entered into the console.
It is a bit more tedious, but also more flexible; you can, for example, filter out things you are not interested in.
This is what I've found myself doing lately: checking the sources for global objects and then traversing the scene to extract the info I'm interested in.
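For instance, a filtered variant of the snippet above; the vertex-count threshold is arbitrary, just for illustration:

var bigGeometries = [];

scene.traverse( function ( node ) {
    // keep only large geometries, e.g. to find the memory hogs
    if ( node.geometry && node.geometry.vertices.length > 10000 ) {
        bigGeometries.push( node.geometry );
    }
} );

console.log( bigGeometries.length + ' large geometries in the scene' );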
I tried the three.js development version and discovered the library was removed. Finally landed here (just wanted to know why).
As far as I understand you, you seem to worry about the loss somehow. Maybe you can make use of JS prototyping behavior. Maybe I am wrong, but you could establish some debug include. In that file you put some modification of the existing "classes":
function extendObjects() {
    var tObject = THREE.Object3D;

    // wrap the original constructor so every new instance registers itself
    THREE.Object3D = function () {
        tObject.apply( this, arguments );
        this.library[ this.id ] = this;
    };

    THREE.Object3D.prototype = Object.create( tObject.prototype );
    THREE.Object3D.prototype.library = {};
}
I am not sure if this is correct, but I am sure that, as long as JS uses prototypes, there is some way to extend your "class" conditionally via a debug include.
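Usage would then be something like this (the DEBUG flag is hypothetical; any mechanism that only runs the extension in debug builds would do):

var DEBUG = true; // hypothetical switch, e.g. set by a build step

if ( DEBUG ) extendObjects(); // instances are only tracked while debugging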
I'm using THREE.js to build a 3D explorer for computational neuroscience. So far I've been using r51, which worked fine. Yesterday I tried to update to r53 (r52 didn't work for me, the scene would never come up, I don't know why), which works, but performance decreased substantially. With a given scene (aka "the worm") I get a steady 60fps with r51, while with r53 it goes down to 24fps on average, and it takes much longer to render too. Any idea why this would happen?

To say a little bit about the scene: there are 302 neurons rendered, and every neuron is composed of a given number of cylinder geometries which I'm merging into a single mesh per neuron, so the final scene is 302 meshes. Here you can find some screenshots with different versions of the library:
https://www.dropbox.com/sh/0azw2owz95ogtur/aMxD82iReA?lst
Thanks, Matteo