Durman closed this issue 4 years ago.
Comparison of using a mesh socket versus setting the elements of a mesh separately:
Well, it's all nice, but how to maintain compatibility?...
What do you mean?
There are tons of existing nodes which use the 3-4 socket approach, not the new one. Are you planning to rewrite them?...
No, both approaches can exist in parallel. The old version is low level; the new one is higher level.
I'll leave it here. Specifics of importing: https://stackoverflow.com/questions/1744258/is-import-module-better-coding-style-than-from-module-import-function
The process of working with data from a mesh socket, in the case where a node neither produces nor consumes any data other than the mesh:
```py
def process(self):
    meshes = self.inputs['Mesh'].sv_get(deepcopy=True)
    for mesh in meshes:
        node_algorithm(mesh, self.mode)
    self.outputs['Mesh'].sv_set(meshes)
```
The `node_algorithm` function writes the new data right inside the mesh container. Mesh will actually be just a dictionary, but with access to data via attributes, like this: `mesh.verts` is the same as `mesh['Verts']`.
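That attribute-style access could be implemented with a small dict subclass. A minimal sketch (hypothetical, not the actual Sverchok implementation), assuming keys are stored capitalized as in `mesh['Verts']`:

```python
# Hypothetical sketch of the mesh container: a plain dict whose capitalized
# keys can also be read and written as lowercase attributes.
class Mesh(dict):
    def __getattr__(self, name):
        # Called only when normal attribute lookup fails,
        # so dict methods like .items() keep working.
        try:
            return self[name.capitalize()]  # mesh.verts -> mesh['Verts']
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self[name.capitalize()] = value     # mesh.verts = ... -> mesh['Verts']
```

With this, `mesh.verts` and `mesh['Verts']` refer to the same object, so `node_algorithm` can mutate the container in place through either spelling.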
Actually I am not excited about the middle block of nodes in the layout; it is too verbose. I will look for other approaches later.
@zeffii your thoughts on this matter?...
Managing loop data
Actually I don't see an alternative to this proposal. I would like some nodes which change topology to have the following list of sockets:
And the same list should be on the output side; probably later it will be interesting to add other mesh data parameters.
The question is just how we are going to deal with existing nodes. We may end up with 150+ nodes having the old 3 sockets, plus 150+ nodes having the new sockets; how are we going to explain all that to users? :)
my requirements for Sverchok are not as feature rich as that list. I'm not opposed to a Mesh socket that carries such data, but having a whole bunch of duplicated nodes would be unfortunate.
I would prefer to spend the time writing a Sverchok-like node system in Rust
:)
huh. And then writing a manual on how to compile it, for users? :)
not only that, I don't know `rust` at all :)
After all these years of development, Sverchok has finally become capable of creating an outdoor toilet. :smile:
It looks like there is a problem with performance, though.
o_O what's that? :)
This is probably what the mesh data structure for procedural modeling will look like in Blender after the Everything Nodes project is completed.
A nice thing about having a mesh data structure is that it can already keep everything needed for displaying the mesh at each step of its life.
I have investigated creating the mesh directly in the `bpy.types.Mesh` of an object. According to the documentation it should be the most efficient way, but I have found that this approach is very fragile and can crash Blender easily, so it does not seem to fit Sverchok at all. The efficiency gain is also not that great, at least in my implementation. I'll just leave this code here, in case.
passing around Blender structures, and references to them, inside Sverchok is going to break eventually (this is also stated in the documentation), so we avoid that most of the time.
the only fast operation in Mesh land is passing a flat list of verts to a mesh:

```py
mesh.vertices.foreach_set("co", <flat list of coordinates>)
```
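As a sketch of what that flat sequence can look like: `foreach_set` accepts any flat sequence, and a C-contiguous float32 numpy array avoids slow per-element conversion. The `verts` sample data here is made up for illustration, and the actual `foreach_set` call is commented out because it only runs inside Blender.

```python
import numpy as np

# Illustrative nested vertex list, as Sverchok nodes usually produce it.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]

# Flatten into the float32 layout that mesh.vertices expects for "co".
co = np.array(verts, dtype=np.float32).ravel()

# Inside Blender (bpy) this would then be:
# mesh.vertices.foreach_set("co", co)
```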
Yes, `foreach_set` is quite fast. The interesting thing is that with numpy arrays it works a little bit slower.
Test 1 1.0141143000000739
Test 2 0.21012190000010378
Test 3 0.26692159999993237 <- with numpy
Test 1 1.0243915000000925
Test 2 0.21129569999993691
Test 3 0.22625440000001618 <- with numpy
what about `np_verts = np.array(verts, 'f')`? That's about 7 times faster here.
Amazing. In my case it became 30 times faster. What does the 'f' letter mean?
Test 1 1.0403471999998146
Test 2 0.21267190000071423
Test 3 0.007748700001684483
Well, I thought numpy automatically converts Python lists of floats to a float array, but it looks like it does something different.
```py
>>> verts = [(1, 0, 0) for _ in range(2)]
>>> np_verts = np.array(verts)
>>> np_verts
array([[1, 0, 0],
       [1, 0, 0]])
>>> np_verts = np.array(verts, 'f')
>>> np_verts
array([[1., 0., 0.],
       [1., 0., 0.]], dtype=float32)
```

(The 'f' type code is numpy shorthand for float32; without it, a list of Python ints is converted to an integer array.)
With creating edges there is no such benefit from using numpy arrays, unless there is something else about numpy I don't know.
Test 1 0.024411099999269936
Test 2 0.013266699999803677
Test 3 0.016417299997556256 <- numpy
I thought numpy automatically convert python lists with floats to np.array with float
it does, but anything automatic will always have overhead. When we provide the dtype, the overhead is less brutal.
creating of edges there is no such benefit
I think the numpy array for edges can still offer speedups, and maybe it even looks tidier, because `[i for e in edges for i in e]` just looks insane.
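For illustration, a sketch of the two flattening styles side by side (sample `edges` data made up here); both produce the same flat index sequence:

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3)]  # illustrative edge list

# The nested comprehension that "looks insane":
flat_list = [i for e in edges for i in e]

# The numpy equivalent: view the (n, 2) index array as a flat sequence.
flat_np = np.asarray(edges, dtype=np.int32).ravel()
```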
Test with creating faces:
Test 1 0.019219599999814818
Test 2 0.0063215000000127475
Test 3 0.013162700000066252 <- Numpy
It looks like it only makes sense to use numpy with float values.
Test 1 0.06465090000028795
Test 2 0.015234599999985221
Test 3 0.00209439999980531 <- Numpy
Searching for attributes in the hierarchy of mesh elements:
Speed test: cube with 100 subdivisions, without topology changes:
with topology changes:
remember the BMesh viewer also has a mode that expects the vertex count to be static, with no topology changes. In the N-panel, "fixed vertex count".
I have measured the speed taking this into account.
ok cool
generate bmesh verts test:
Test1 0.12106750000020838 <- Python list
Test2 0.1735398000000714 <- numpy to python list
Test3 0.5359352000014042 <- numpy
With face generation:
Test1 0.21056620000126713 <- Python lists
Test2 0.2658947999989323 <- Numpy to Python lists
Test3 0.6875750999988668 <- Numpy
Have you tested the performance difference between 'float32' and 'float64'? https://stackoverflow.com/questions/5956783/numpy-float-10x-slower-than-builtin-in-arithmetic-operations
Not sure if it can be related but it could: https://devtalk.blender.org/t/alternative-in-2-80-to-create-meshes-from-python-using-the-tessfaces-api/7445/3
And another idea that may give a little boost:
```py
import bmesh

def test3(verts, faces):
    bm = bmesh.new()
    new_vert = bm.verts.new  # localize the bound method to avoid repeated attribute lookups
    for v in verts:
        new_vert(v)
    bm.free()
```
Have you tested the performance difference between 'float32' and 'float64'?
Yes, it looks like there is no difference.
And another idea that may give a little boost:
It makes it 10-20 percent faster, not bad. :)
For now, creating Blender objects via BMesh is the fastest in my implementation, but I have not finished my experiments yet.
Actually I have moved far beyond this proposal in my experiments. I have found that just creating aliases for sockets is not satisfactory. I'm thinking of creating a real mesh data structure based on numpy arrays. I will open another issue for this.
@Durman did you ever link to SvMesh node code?
I think you will be interested in code of this node: https://github.com/nortikin/sverchok/blob/mesh_data_structure/nodes/mesh/viewer_mesh.py
What I have noticed is that this function is super efficient, but only with numpy arrays, and the dtype of the array should be exactly float32. https://github.com/nortikin/sverchok/blob/b0a6a6817dfa41902f7ed82092b9f8f5cc93022e/utils/mesh_structure/visualize.py#L96-L97
Example of generating a simple line with 1 million vertices. Only the drawing code is measured.
Init vertices - 1.87ms
Set coordinates - 2.73ms
Init edges - 2.30ms
set edges - 95.14ms
TOTAL - 102.03ms
The answer to my question about how to speed up the process of creating edges, from Skarn: https://blender.chat/channel/python?msg=7mxrnoK6XW3Lchh8o
It is probably because it expects pyobjects, so a conversion happens in the case of a numpy array. In the case of a Python list, the contained elements are already PyObjects.
So it makes sense to measure together with list/array creation, because that's gonna give you the final speed of the overall algo. Another option is to create a Cython extension that would replicate the behavior of foreach_set but expecting a numpy array. For that, the Blender API offers a convenient .as_pointer() method on every DNA structure.
```c++
switch (prop_type) {
  case PROP_INT:
    array = PyMem_Malloc(sizeof(int) * size);
    if (do_set) {
      for (i = 0; i < size; i++) {
        item = PySequence_GetItem(seq, i);
        ((int *)array)[i] = (int)PyLong_AsLong(item);
        Py_DECREF(item);
      }
      RNA_property_int_set_array(&self->ptr, self->prop, array);
    }
    else {
      RNA_property_int_get_array(&self->ptr, self->prop, array);
      for (i = 0; i < size; i++) {
        item = PyLong_FromLong((long)((int *)array)[i]);
        PySequence_SetItem(seq, i, item);
        Py_DECREF(item);
      }
    }
    break;
  case PROP_FLOAT:
    array = PyMem_Malloc(sizeof(float) * size);
    if (do_set) {
      for (i = 0; i < size; i++) {
        item = PySequence_GetItem(seq, i);
        ((float *)array)[i] = (float)PyFloat_AsDouble(item);
        Py_DECREF(item);
      }
      RNA_property_float_set_array(&self->ptr, self->prop, array);
    }
    else {
      RNA_property_float_get_array(&self->ptr, self->prop, array);
      for (i = 0; i < size; i++) {
        item = PyFloat_FromDouble((double)((float *)array)[i]);
        PySequence_SetItem(seq, i, item);
        Py_DECREF(item);
      }
    }
    break;
```
Those are the two cases of the switch responsible for the internals of foreach_set / foreach_get. As you can see, tons of PyObject-related things are happening there. Let me write a quick example of this.
```py
mesh.vertices.add(n_verts)  # we allocate n verts
mesh_ptr = mesh.as_pointer()
numpy_array_ptr = your_numpy_vert_array.__array_interface__['data'][0]
cython_wrapper_func(mesh_ptr, numpy_array_ptr, n_verts)
```
The rest can be done in pure Cython or in C++. I will use C++ for convenience; the Cython binding is typical and can be done by following a tutorial.
```c++
// Define the Mesh struct by copying it from Blender code, or by linking to its
// headers (I prefer linking to headers; it makes updating between Blender versions easier).
// mesh_ptr is the uintptr_t we got from Cython, which got it from Python,
// which got it from the Blender API using the as_pointer() method.
Mesh* mesh = reinterpret_cast<Mesh*>(mesh_ptr);
for (int i = 0; i < mesh->totvert; ++i)
{
    MVert* vert = &mesh->mvert[i];
    vert->co = ....;  // assign from numpy here
}
```
A Cython extension to work with numpy arrays without losing that bit of speed sounds really good to me. This will give you the maximum speed. Allocating verts is easier done in Python, by a single call to add(). I do all of my exporters/importers in this fashion now.

The only downside is that you need to build your addon for each Blender release. And obviously this kind of approach is not crash-safe and can crash Blender if you code something incorrectly. But in the end all of this is for the convenience of the users, who don't have to wait minutes for their scene to export/import. Using this approach, a scene that used to take 40 minutes to export with the Python-only version is now exported in 12 seconds. The majority of smaller scenes are done instantly; you don't even have a chance to notice.

Here is an example of such a module: gitlab.com/skarnproject/blender-wow-studio/-/tree/master/io_scene_wmo/wbs_kernel/src
An example of the exporter: gitlab.com/skarnproject/blender-wow-studio/-/blob/master/io_scene_wmo/wbs_kernel/src/bl_utils/mesh/wmo/batch_geometry.cpp
And its corresponding Cython counterpart: gitlab.com/skarnproject/blender-wow-studio/-/blob/master/io_scene_wmo/wbs_kernel/src/wmo_utils.pyx

And what's even cooler, you can use either C++ or Cython parallelization functionality to compute multiple meshes on multiple CPU cores. An example of parallel mesh export done in Cython: gitlab.com/skarnproject/blender-wow-studio/-/blob/master/io_scene_wmo/wbs_kernel/src/wmo_utils.pyx#L167
Problem statement
As you can see in the picture, the process of plugging sockets into each other is becoming too tedious, and the node does not have `vertices data` and `edges data` sockets yet. So the goal is to make something like an alias for a group of sockets.
Solution
Such issues have already come up in the past (#184). My proposal is much simpler; actually it develops the idea of #2766.

I think for such nodes as `bevel`, `inset face` and similar ones it would be nice to have a mesh input socket instead of (verts, edges, faces, verts data, edges data, faces data). For creating data for such sockets Sverchok should have `Mesh in` and `Mesh out` nodes (or `SvMesh in / out`?). The mesh output socket will have dictionary type, with certain keys according to the input data. So the algorithm for working with an input mesh in this format will be as follows: a node searches for certain keys like `verts` or `face data`; if keys like `verts` and `faces`, which are mandatory for the inset faces algorithm, are not found, then `NoDataError` is raised; if other keys are not found, the node continues working without that data.

Nestedness: actually this lies outside the boundary of the current problem, but with this solution it looks like it could be quite easy to create an arbitrary number of nested meshes. More precisely, meshes can be joined into categories with an arbitrary level of nesting. Some examples:
Here is a dictionary with two keys: `mesh 1` and `mesh 2`. `Mesh 1` and `mesh 2` are also dictionaries, each of which has at least the key `verts`. A category dictionary can't include any values except dictionaries with mesh data, or other dictionaries which themselves include dictionaries with mesh data. So the algorithm for searching for mesh dictionaries should be as follows:
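The announced search could be a simple recursive walk. A minimal sketch with a hypothetical helper name, assuming a mesh dictionary is recognized by its mandatory `verts` key and that category dictionaries contain only other dictionaries, as stated above:

```python
# Hypothetical helper: depth-first search through nested category
# dictionaries, yielding every dictionary that carries mesh data.
def find_meshes(data):
    if 'verts' in data:
        # A dictionary with mesh data: yield it and stop descending.
        yield data
    else:
        # A category dictionary: by the rule above, all of its values
        # are themselves dictionaries, so recurse into each one.
        for value in data.values():
            yield from find_meshes(value)
```

For example, `{'mesh 1': {'verts': [...]}, 'category': {'mesh 2': {'verts': [...]}}}` yields the two inner mesh dictionaries in order, however deeply the categories are nested.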