Closed: mrdoob closed this issue 10 years ago.
Sounds like ideally, all of the primitive generators and loaders should be converted to raw BufferGeometry - it always surprised me how long it took to allocate a simple PlaneGeometry with high subdivision, so a 15x speedup is nice but a 46x improvement is AWESOME. As long as there's an easy way to convert a BufferGeometry to one of these new easier-to-use geometry wrappers, then it seems most people would appreciate the efficiency at init time.
I wonder how well some of the more complex geometries (like ExtrudeGeometry) will work by just swapping out the base Geometry for this new implementation. It would be awesome if the compatibility layer was so close to the real thing that we could use these with all the BufferGeometry efficiencies, without having to throw out the code out and start over.
Sadly we'll have to re-code quite a few things... the geometry.vertices structure is backwards compatible, but all the rest won't be...
:+1: THREE.Geometry5 :wink:
This is what PlaneGeometry looks like using raw BufferGeometry, btw.
/**
 * @author mrdoob / http://mrdoob.com/
 * based on http://papervision3d.googlecode.com/svn/trunk/as3/trunk/src/org/papervision3d/objects/primitives/Plane.as
 */

THREE.PlaneGeometryB = function ( width, height, widthSegments, heightSegments ) {

	THREE.BufferGeometry.call( this );

	this.width = width;
	this.height = height;

	this.widthSegments = widthSegments || 1;
	this.heightSegments = heightSegments || 1;

	var width_half = width / 2;
	var height_half = height / 2;

	var gridX = this.widthSegments;
	var gridY = this.heightSegments;

	var gridX1 = gridX + 1;
	var gridY1 = gridY + 1;

	var segment_width = this.width / gridX;
	var segment_height = this.height / gridY;

	var vertices = new Float32Array( gridX1 * gridY1 * 3 );
	var normals = new Float32Array( gridX1 * gridY1 * 3 );
	var uvs = new Float32Array( gridX1 * gridY1 * 2 );

	var offset = 0;
	var offset2 = 0;

	for ( var iy = 0; iy < gridY1; iy ++ ) {

		var y = iy * segment_height - height_half;

		for ( var ix = 0; ix < gridX1; ix ++ ) {

			var x = ix * segment_width - width_half;

			vertices[ offset ] = x;
			vertices[ offset + 1 ] = - y;

			normals[ offset + 2 ] = 1;

			uvs[ offset2 ] = ix / gridX;
			uvs[ offset2 + 1 ] = iy / gridY;

			offset += 3;
			offset2 += 2;

		}

	}

	var indices = new Uint16Array( gridX * gridY * 6 );

	offset = 0;

	for ( var iy = 0; iy < gridY; iy ++ ) {

		for ( var ix = 0; ix < gridX; ix ++ ) {

			var a = ix + gridX1 * iy;
			var b = ix + gridX1 * ( iy + 1 );
			var c = ( ix + 1 ) + gridX1 * ( iy + 1 );
			var d = ( ix + 1 ) + gridX1 * iy;

			indices[ offset ] = a;
			indices[ offset + 1 ] = b;
			indices[ offset + 2 ] = d;
			indices[ offset + 3 ] = b;
			indices[ offset + 4 ] = c;
			indices[ offset + 5 ] = d;

			offset += 6;

		}

	}

	this.attributes[ 'index' ] = { array: indices, itemSize: 1 };
	this.attributes[ 'position' ] = { array: vertices, itemSize: 3 };
	this.attributes[ 'normal' ] = { array: normals, itemSize: 3 };
	this.attributes[ 'uv' ] = { array: uvs, itemSize: 2 };

};

THREE.PlaneGeometryB.prototype = Object.create( THREE.BufferGeometry.prototype );
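As a sanity check on the loops above, the buffer sizes follow directly from the grid arithmetic. The `planeBufferSizes` helper below is just an illustrative standalone sketch (plain JavaScript, no three.js required), not part of the library:

```javascript
// Buffer sizes for a plane of widthSegments x heightSegments quads.
function planeBufferSizes( widthSegments, heightSegments ) {

	var gridX1 = widthSegments + 1;  // grid points per row
	var gridY1 = heightSegments + 1; // grid points per column

	return {
		vertexCount: gridX1 * gridY1,                   // one vertex per grid point
		positionFloats: gridX1 * gridY1 * 3,            // x, y, z per vertex
		uvFloats: gridX1 * gridY1 * 2,                  // u, v per vertex
		indexCount: widthSegments * heightSegments * 6  // 2 triangles * 3 indices per quad
	};

}

var sizes = planeBufferSizes( 10, 10 );
// 11 * 11 = 121 vertices, 363 position floats, 242 uv floats, 600 indices
```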
Not very user friendly, but efficient.
@mrdoob I would caution against trading readability for performance; a user-friendly API is usually more important in the long run, just like code that's easier to maintain. But these snippets look quite promising!
IMHO you should go deep with performance and leave "easy coding" for tquery (http://jeromeetienne.github.io/tquery/)
@rafaelcastrocouto jQuery has a use because of how CSS works and because of how HTML works. tQuery is a cool idea without the solid basis of jQuery. jQuery exists as an extension to the DOM API, because that API is lacking. Three.js has the freedom to work on its API and improve it; it doesn't have the kind of momentum the DOM API does, which requires a very long time to change anything. Looking at tQuery, I see a layer of indirection that reduces the amount of code I write in small quantities and gives me nothing to help with the kind of code that I write in large quantities. Because of the interest invested into tQuery, it may bring cool API ideas for 3D in general, but those same ideas can be merged back into three.js. "Easy coding" is smacking the keyboard with a fish :)
My two cents is I prefer the direct use of BufferGeometry rather than being tricky. What @mrdoob outlines in this comment:
https://github.com/mrdoob/three.js/issues/4386#issuecomment-37149395
It is just easier for me to understand, it is incredibly performant, and it still has the easy-to-use interface provided by functions.
If one wants to make it easy to use elsewhere for custom geometry creation, just have a CustomGeometry node that wraps a BufferGeometry and has useful functions like this.setVertices( array of Vector3, optionalOffset, optionalLength ), this.setFaces( array of Faces, optionalOffset, optionalLength ), etc.
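A minimal sketch of what such a wrapper could look like; CustomGeometry and setVertices are the hypothetical names from the comment above, not an existing three.js API:

```javascript
// Hypothetical CustomGeometry node: accepts an array of { x, y, z } objects
// and writes them into a flat Float32Array at an optional vertex offset.
function CustomGeometry( vertexCount ) {

	this.positions = new Float32Array( vertexCount * 3 );

}

CustomGeometry.prototype.setVertices = function ( vectors, optionalOffset ) {

	var offset = ( optionalOffset || 0 ) * 3;

	for ( var i = 0; i < vectors.length; i ++ ) {

		this.positions[ offset ++ ] = vectors[ i ].x;
		this.positions[ offset ++ ] = vectors[ i ].y;
		this.positions[ offset ++ ] = vectors[ i ].z;

	}

};

var geo = new CustomGeometry( 3 );
geo.setVertices( [ { x: 0, y: 0, z: 0 }, { x: 1, y: 0, z: 0 }, { x: 0, y: 1, z: 0 } ] );
```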
I agree with bhouston. I care about performance and value it higher than user-friendliness of on-the-fly mesh modifications. I also feel comfortable enough directly manipulating the plain buffers (but maybe that is just my experience with graphics programming).
I would prefer the core of three.js to use a plain BufferGeometry-like structure (including geometry loaders, geometry generators, tutorials and examples) and provide a wrapper for mesh manipulations.
@crobi I'd actually argue that the approach outlined by @mrdoob here https://github.com/mrdoob/three.js/issues/4386#issuecomment-37149395 is more user friendly than the trickiness of the other approaches. The reason it is more user friendly is that it is understandable. It sucks trying to explain trickiness, because it requires communicating both the underlying aspects and the trickiness on top. I'd rather just explain the underlying aspects in a more direct manner.
So here's the latest approach... https://github.com/mrdoob/three.js/commit/9858c27eaf772b0b1d382cf2ad13e9f8f3f0fd6e
How about calling it BufferGeometryHelper?
I considered that. But the API has sort of established that *Helper is a visual helper.
But yes, naming suggestions welcomed.
Maybe just GeometryManipulator, or GeometryHandler...
GeometryWrapper.
btw @mrdoob's new picture is taking a little time getting use to :stuck_out_tongue_closed_eyes:
haha! yeah, taking me some time too ;)
What is the plan around releasing this?
I like the most recent version, because it keeps the core simple but makes it easy to use the helper when needed. When I've used this pattern before, I've called it Mesh and MeshEditor. If one mapped this naming scheme to this situation, one would end up with GeometryEditor or BufferGeometryEditor.
+1 What @bhouston said.
What is the plan around releasing this?
I may hold on releasing this month. I would like to feel good about this. And there are a few things broken right now.
Ideally, I would like to have all the geometry generators and loaders converted for the next release. But I suspect that's not realistic.
An approach I went through was having PlaneGeometry and PlaneBufferGeometry. PlaneGeometry basically adds all the helpers to what PlaneBufferGeometry produces, this being the most backwards-compatible way. It's definitely faster than the current PlaneGeometry, but not as fast/optimised as PlaneBufferGeometry.
That's what I'm still thinking about... Go with a BufferGeometry-based PlaneGeometry + GeometryEditor, or PlaneGeometry + PlaneBufferGeometry...
@mrdoob, if the primary goal is to clean up all the Geometry-specific code from WebGLRenderer, isn't the majority of that code essentially the same as the code to convert a Geometry to a BufferGeometry? What if Mesh automatically turned a legacy Geometry into one or more BufferGeometry objects on init? Then the Geometry-specific stuff in WebGLRenderer and all the stuff with geometry.geometryGroups could be stripped out, and as far as WebGLRenderer is concerned everything's a BufferGeometry.
Something like this:
THREE.Mesh = function ( geometry, material ) {

	THREE.Object3D.call( this );

	if ( geometry === undefined ) {
		this.geometry = new THREE.BufferGeometry();
	} else if ( geometry instanceof THREE.Geometry ) {
		// legacy Geometry: convert (always returns a BufferGeometry)
		this.geometry = this.convertGeometry( geometry, material );
	} else {
		this.geometry = geometry;
	}

};

THREE.Mesh.prototype = Object.create( THREE.Object3D.prototype );

THREE.Mesh.prototype.convertGeometry = function ( geometry, material ) {

	if ( material instanceof THREE.MeshFaceMaterial ) {
		// Multi-material mesh: move the geometryGroups splitting code here
		var splitgeometry = ...;
		for ( var i = 0; i < splitgeometry.length; i ++ ) {
			this.add( new THREE.Mesh( splitgeometry[ i ], material.materials[ i ] ) );
		}
		return new THREE.BufferGeometry();
	} else {
		return THREE.BufferGeometryUtils.fromGeometry( geometry );
	}

};
If this suggestion is getting a bit off the original topic I can file this as a new issue; I'm just trying to consider another possible option for how to enable the WebGLRenderer clean-up without having to ask everyone to sacrifice their old code. This suggestion would allow Geometry to live alongside BufferGeometry without having to keep a separate codepath in WebGLRenderer's internals, and people could update their code to take full advantage of BufferGeometry and its helpers as needed.
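A rough sketch of the kind of conversion a fromGeometry-style utility would need to do for positions (hypothetical and simplified to a non-indexed triangle soup; the real converter would also have to handle normals, UVs, colors and material groups):

```javascript
// Expand an indexed Geometry-like structure (vertices + faces referencing
// them by index) into a flat Float32Array of positions, one triangle per face.
function flattenGeometry( vertices, faces ) {

	var positions = new Float32Array( faces.length * 9 ); // 3 corners * xyz
	var offset = 0;

	for ( var i = 0; i < faces.length; i ++ ) {

		var face = faces[ i ];
		var corners = [ vertices[ face.a ], vertices[ face.b ], vertices[ face.c ] ];

		for ( var j = 0; j < 3; j ++ ) {

			positions[ offset ++ ] = corners[ j ].x;
			positions[ offset ++ ] = corners[ j ].y;
			positions[ offset ++ ] = corners[ j ].z;

		}

	}

	return positions;

}

// A unit quad: four shared vertices, two faces referencing them by index.
var quadVertices = [
	{ x: 0, y: 0, z: 0 }, { x: 1, y: 0, z: 0 },
	{ x: 1, y: 1, z: 0 }, { x: 0, y: 1, z: 0 }
];
var quadFaces = [ { a: 0, b: 1, c: 2 }, { a: 0, b: 2, c: 3 } ];

var soup = flattenGeometry( quadVertices, quadFaces ); // 2 faces -> 18 floats
```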
I think another reason why mrdoob wants to get rid of the old Geometry (apart from renderer complexity) is that using Geometry objects is potentially a performance hog for large amounts of vertices.
In my humble opinion, everyone has a level of abstraction they are comfortable with. The old Geometry feels like a good abstraction for building your own geometries, while BufferGeometry feels like a good abstraction of how buffers are represented.
Personally, the "rawer" BufferGeometry feels more for exporters, machines, or humans who want to optimize their stuff. I think the users of BufferGeometry would have no problem writing or creating their own Geometry primitives easily. The rest of the users (or maybe just me) might still prefer to think of meshes with vertices and faces, which is probably where GeometryManipulator (or what I see as a wrapper) comes in.
> @mrdoob, if the primary goal is to clean up all the Geometry-specific code from WebGLRenderer, isn't the majority of that code essentially the same as the code to convert a Geometry to a BufferGeometry? What if Mesh automatically turned a legacy Geometry into one or more BufferGeometry objects on init, then the Geometry-specific stuff in WebGLRenderer and all the stuff with geometry.geometryGroups could be stripped out, and as far as WebGLRenderer is concerned everything's a BufferGeometry.
Yes, I thought of that. And it would indeed help to have a better understanding of WebGLRenderer. But then it would pollute Mesh, Line and ParticleSystem... I thought it would be a better approach to produce BufferGeometry directly rather than converting.
I agree that generating the included PlaneGeometry, SphereGeometry, etc. should be done with BufferGeometry (or at least that there should be a PlaneBufferGeometry option; I can see benefits to both approaches), and that people should be encouraged to use BufferGeometry or a wrapper for their own cases. But I also understand how frustrating it is for users of the library when major API changes like this force them to rewrite large parts of their code. This sort of change often leaves people stuck between a rock and a hard place, when they want/need some cool new feature that has been added but upgrading means a couple of days of going through and refactoring their codebase, updating libraries, etc., and the library earns a reputation for having an unstable API.
I would consider my suggestion to be a transitional thing: a way that we could give the nice "deprecated" message but still work with existing scenes. Then, at some future point when everyone has had plenty of time to convert their code and the existing Three.js code, it could be removed, and BufferGeometry with its accompanying wrappers would be the only option that needs to be supported. This would be in line with how deprecation currently works in Three.js: normally the deprecated function complains but still works as it did before, until it's stripped out in a later release.
Another option, rather than doing the conversion directly in the Mesh code, would be to throw a deprecation warning there if a THREE.Geometry is passed, and provide the above Mesh.convertGeometry() code as part of BufferGeometryUtils or GeometryUtils. Developers would then be forced to go through and change all of those instances rather than it seamlessly working, but the change is a lot less daunting if the deprecation warning tells you that it's a one-line change in your app code. This avoids polluting the Mesh, Line, etc. classes with compatibility code, at the cost of a little developer inconvenience.
I agree with @zz85: having only a low-level abstraction like BufferGeometry (without anything above it) diminishes the value of three.js as an abstraction on top of WebGL. Just my personal feeling.
Thanks a lot guys! Always good to see other perspectives :)
To your point @jbaicoianu, what if there was a BufferGeometry wrapper that replicated the current Geometry structure? That way stuff will break for people that are modifying geometry in their code, but they will only need to add a line of code and things should work again.
var geometry = new THREE.GeometryEditor( new THREE.PlaneGeometry( 100, 100, 10, 10 ) );
However, people that are creating Geometry from scratch by hand will not have it that easy...
@mrdoob yes, if the BufferGeometry wrapper exposes all of the properties in the same way and is sufficiently compatible, then that's a good way of providing an upgrade path too. #4588 is my attempt at that; I've implemented the same TypedVertex approach used in your experiments above to provide geometry.faces, geometry.faceVertexUvs, etc.
It also uses getters to only allocate the compatibility objects when accessed, which means generators which use the underlying BufferGeometry attributes should init at the same speed as raw BufferGeometry, and only users who make heavy use of geometry.vertices, geometry.faces, etc. will incur the performance penalty.
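The lazy-getter idea can be illustrated with a standalone sketch (hypothetical names; not the #4588 code itself): the old-style array is only built on first access, so code that sticks to the flat buffers never pays for the conversion.

```javascript
// Lazy compatibility accessor: 'vertices' is a getter that builds and caches
// an array of { x, y, z } objects from the flat position buffer on first use.
function wrapGeometry( buffers ) {

	var cache = null;
	var wrapper = { buffers: buffers, buildCount: 0 };

	Object.defineProperty( wrapper, 'vertices', {
		get: function () {

			if ( cache === null ) {

				wrapper.buildCount ++; // track how often the conversion runs

				cache = [];
				var p = buffers.position;
				for ( var i = 0; i < p.length; i += 3 ) {
					cache.push( { x: p[ i ], y: p[ i + 1 ], z: p[ i + 2 ] } );
				}

			}

			return cache;

		}
	} );

	return wrapper;

}

var g = wrapGeometry( { position: new Float32Array( [ 0, 0, 0, 1, 2, 3 ] ) } );
// g.buildCount is still 0 here; the first read of g.vertices builds the array once
```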
Is there an actual summary of the functions in Geometry2? For example, how is creating a custom mesh done, etc.? Thanks
Seems like I'm finally starting to experiment with something that I've been thinking about for a long time.

The problem

The current Geometry structure is, albeit user-friendly, convoluted and unperformant. That structure was designed before WebGL even existed and has been patched along the way. BufferGeometry was created as a response to this. However, BufferGeometry has the opposite problem: it's not user-friendly. I don't think a "normal" user needs to know what attributes are. Every time I see a three.js based project running on WebGL on mobile, I hope (fear) the persons behind it didn't use Geometry.

The solution
I spent a long while trying to think how to make BufferGeometry more user-friendly, but I couldn't see a way of doing that without sacrificing its flexibility, so eventually I decided to keep BufferGeometry as it is. I spent some time during the last cycle making Projector (CanvasRenderer, SoftwareRenderer, SVGRenderer, ...) support the structure instead.

https://github.com/mrdoob/three.js/blob/dev/src/core/Projector.js#L307-L342
The intention now is to go with Geometry2, which eventually would replace Geometry and break backwards compatibility, although I wonder if people modify geometry much or not... The idea of Geometry2 is to have everything in flat arrays: vertices, normals, uvs and colors, ready to be submitted to the GPU (on the WebGL side).

https://github.com/mrdoob/three.js/blob/dev/src/core/Geometry2.js
However, that's not very user-friendly... I experimented with a similar class some months ago that had some "interfaces" to help the user:

https://github.com/mrdoob/three.js/blob/dev/examples/js/core/TypedGeometry.js

All that getters/setters black magic basically allows the user to modify the geometry without having to know that they're just modifying a blob of numbers. The API looked like this:
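A minimal sketch of the getter/setter trick (a hypothetical TypedVertex helper, not the actual TypedGeometry code linked above): x/y/z read and write straight into the underlying Float32Array, so the user never touches the flat buffer directly.

```javascript
// Vertex proxy: property access is redirected into a flat typed array.
function TypedVertex( array, offset ) {

	Object.defineProperty( this, 'x', {
		get: function () { return array[ offset ]; },
		set: function ( value ) { array[ offset ] = value; }
	} );

	Object.defineProperty( this, 'y', {
		get: function () { return array[ offset + 1 ]; },
		set: function ( value ) { array[ offset + 1 ] = value; }
	} );

	Object.defineProperty( this, 'z', {
		get: function () { return array[ offset + 2 ]; },
		set: function ( value ) { array[ offset + 2 ] = value; }
	} );

}

var positions = new Float32Array( 6 );        // room for two vertices
var vertex = new TypedVertex( positions, 3 ); // proxies the second vertex
vertex.y = 5;                                 // actually writes positions[ 4 ]
```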
However, this would only be for the user. Generators and loaders would fill the arrays directly. Here's what PlaneGeometry2 looks like:

https://github.com/mrdoob/three.js/blob/dev/src/extras/geometries/PlaneGeometry2.js
As a side note, performance-wise, I've noticed that where PlaneGeometry was created in ~6ms, PlaneGeometry2 generates in ~0.3ms.

On the Projector side, the code is looking pretty simple so far:

https://github.com/mrdoob/three.js/blob/dev/src/core/Projector.js#L343-L358
To index or not to index

This is another topic that has troubled me for years, and this time around I think I want to bet on favouring non-indexed geometries. Part of the current over-complexity in WebGLRenderer is the fact that WebGL limits ELEMENT_ARRAY_BUFFER (indices) to Uint16Array (without extensions). This means that, internally, WebGLRenderer has to split your geometry into chunks and, well... I'd rather not explain how painful this is. So, even if indexed geometries save on GPU bandwidth, I don't think the complexity they create is worth it. Geometry2 won't support indices (let me know if I'm missing something here). However, BufferGeometry will still be in place, so if someone wants to construct their own custom indexed geometry they will still have a way to do it.

Outcome

In theory, generators and loaders will speed up ~10x. Projects will consume less memory (single Float32Arrays vs tons of Vector2/Vector3s). And both Projector and WebGLRenderer will get much simpler.
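The Uint16Array limitation mentioned under "To index or not to index" is easy to see in isolation: typed-array stores are reduced modulo 2^16, so any index above 65535 silently wraps around, which is why WebGLRenderer has to split large indexed geometries into chunks.

```javascript
// Demonstrating why ELEMENT_ARRAY_BUFFER indices are limited: a Uint16Array
// element stores values modulo 65536, so out-of-range indices silently wrap.
var indices = new Uint16Array( 2 );
indices[ 0 ] = 65535; // largest vertex index a Uint16 index buffer can address
indices[ 1 ] = 65536; // wraps around to 0 -- would point at the wrong vertex
```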