Open hx2A opened 1 month ago
Is it risky to implement a py5 interface to this undocumented Processing signature?

A "vectorized" vertices() where you could pass optional keyword arguments strokes=, fills=, and stroke_weights= as arrays/sequences sounds smashingly useful.

Maybe the selective/optional part is harder (you would have to figure out the current attributes to pass along with the user-defined ones). But even so... being able to use vertices() like this would be very nice.
> Is it risky to implement a py5 interface to this undocumented Processing signature? A "vectorized" vertices() where you could pass optional keyword arguments strokes=, fills=, and stroke_weights= as arrays/sequences sounds smashingly useful.
Hmmm, this is an intriguing idea. Perhaps you could write something like:
py5.styled_vertex(10, 20, 30, stroke="red", fill="white")
And then it would construct the 37-float array for you and pass that. Would that be better, though? I suspect the effort of constructing the array would negate the performance gain of using this feature, which is really the only upside of it existing.
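To make that trade-off concrete, here is a rough sketch of what assembling such a row could look like in NumPy. The function name styled_vertex_row and the column offsets are my assumptions based on the 37-column layout discussed later in this thread, not an existing py5 API:

```python
import numpy as np

# Hypothetical sketch: build the 37-float row that the undocumented
# vertex() signature expects. Column offsets follow the assumed
# PGraphics layout (X Y Z, fill RGBA, ..., stroke RGBA, stroke weight).
N_COLS = 37
X, Y, Z = 0, 1, 2
FILL_R, FILL_A = 3, 6
STROKE_R, STROKE_A = 13, 16
STROKE_WEIGHT = 17

def styled_vertex_row(x, y, z=0.0, fill=(1, 1, 1, 1),
                      stroke=(0, 0, 0, 1), stroke_weight=1.0):
    """Assemble one 37-float vertex row; unset columns stay zero."""
    row = np.zeros(N_COLS, dtype=np.float32)
    row[[X, Y, Z]] = x, y, z
    row[FILL_R:FILL_A + 1] = fill
    row[STROKE_R:STROKE_A + 1] = stroke
    row[STROKE_WEIGHT] = stroke_weight
    return row

row = styled_vertex_row(10, 20, 30, fill=(1, 0, 0, 1))
```

Even in vectorized NumPy, doing this per vertex in Python would likely eat the performance benefit the raw signature offers.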
> to be able to use vertices() like this would be very nice.
An idea I'd like to try is to use a Pandas DataFrame, or at least show people how to set up the DataFrame correctly, with the column names in the right order, such that the DataFrame could be passed to vertices(). This would be efficient and is the best way I can think of to make this feature useful.
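A minimal sketch of that DataFrame setup (the column names and the commented py5 calls are illustrative assumptions; only the coordinate columns and their order matter to vertices()):

```python
import numpy as np
import pandas as pd

# Build a DataFrame whose leading columns are the vertex coordinates,
# in the order vertices() expects per row (X, Y for a 2D sketch).
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "X": rng.uniform(0, 500, size=10),
    "Y": rng.uniform(0, 500, size=10),
})

coords = df[["X", "Y"]].to_numpy()  # one row per vertex, shape (10, 2)

# Inside a running sketch you would then pass the array directly:
# with py5.begin_shape(py5.POINTS):
#     py5.vertices(coords)
```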
I was initially thinking about contexts where you would be creating a collection of things to display anyway.
Use cases that would be cool to optimize:
with py5.begin_shape(py5.POINTS):
    py5.styled_vertices(coords, strokes=colors, stroke_weights=sizes)

with py5.begin_shape(py5.TRIANGLES):
    py5.styled_vertices(coords, fills=colors, no_stroke=True)
Having to build a Py5Shape object and then call set_strokes() and set_fills() feels a bit awkward... and there is no set_stroke_weights()...
Otherwise, maybe we could go closer to the idea from the GeoDataFrame conversation we had... columns for attributes and one for geometry.
Crazy idea: could we maybe have a Py5ShapeDataFrame inspired by GeoDataFrames?
> Having to build a Py5Shape object and then call set_strokes() and set_fills() feels a bit awkward... and there is no set_stroke_weights()...
Right, it is an awkward method. It was created because it needed to exist for the trimesh integrations to work efficiently. That's the best use for it. I wouldn't use it any other time.
I do like the GeoDataFrame idea quite a bit. And perhaps a Py5ShapeDataFrame object could be useful also? It could work kind of like Py5Shape objects but provide tabular access to the shape data. Internally it would use the vertex() method with 37 columns to create the shape.
@villares something I was working on required me to attempt to use the vertex() method with a 37-column table. Unfortunately it didn't work. TL;DR: this seems to be part of functionality that probably worked in Processing a long time ago but is now useless. I will remove that method signature since it is misleading.
If anyone is curious, here's the code I used to test this:
import numpy as np
import pandas as pd
import py5

COLUMN_NAMES = "X Y Z FILL_R FILL_G FILL_B FILL_A TEXTURE_U TEXTURE_V NORMAL_X NORMAL_Y NORMAL_Z EDGE STROKE_R STROKE_G STROKE_B STROKE_A STROKE_WEIGHT TRANS_X TRANS_Y TRANS_Z VIEWSPACE_X VIEWSPACE_Y VIEWSPACE_Z VIEWSPACE_W AMBIENT_R AMBIENT_G AMBIENT_B SPECULAR_R SPECULAR_G SPECULAR_B SHINE EMISSIVE_R EMISSIVE_G EMISSIVE_B BEEN_LIT HAS_NORMAL".split()


def make_data():
    data = pd.DataFrame(np.zeros((5, len(COLUMN_NAMES))), columns=COLUMN_NAMES)
    data["X"] = np.random.uniform(0, py5.width, size=data.shape[0])
    data["Y"] = np.random.uniform(0, py5.height, size=data.shape[0])
    data["Z"] = 0
    # everything below this line is ignored by the JAVA2D renderer
    data["STROKE_R"] = np.random.rand(data.shape[0])
    data["STROKE_G"] = np.random.rand(data.shape[0])
    data["STROKE_B"] = np.random.rand(data.shape[0])
    data["STROKE_A"] = 1.0
    data["FILL_R"] = np.random.rand(data.shape[0])
    data["FILL_G"] = np.random.rand(data.shape[0])
    data["FILL_B"] = np.random.rand(data.shape[0])
    data["FILL_A"] = 1.0
    data["STROKE_WEIGHT"] = np.random.uniform(1, 50, size=data.shape[0])
    data["EDGE"] = 1.0
    data["TRANS_X"] = data["X"] - py5.width / 2
    data["TRANS_Y"] = data["Y"] - py5.height / 2
    data["TRANS_Z"] = -866.025
    data["VIEWSPACE_X"] = data["X"]
    data["VIEWSPACE_Y"] = data["Y"]
    data["VIEWSPACE_Z"] = -0.818
    data["VIEWSPACE_W"] = 866.025
    py5.println(data.T)
    return data


def setup():
    py5.size(500, 500, py5.JAVA2D)  # get unstyled points
    # py5.size(500, 500, py5.P3D)  # nothing drawn to screen
    py5.stroke_weight(20)  # leave this
    data = make_data()
    print(data.shape)
    py5.background(255)
    with py5.begin_shape(py5.POINTS):
        py5.vertices(data.values)
    # above line is the same as:
    # for i in range(data.shape[0]):
    #     py5.vertex(data.iloc[i].values)


py5.run_sketch()
You can see the 37 columns I was talking about. With the JAVA2D renderer it only looks at the X and Y values. For the OpenGL renderers, it puts the data in a vertices array that is in the abstract PGraphics class but doesn't actually get used by the P2D or P3D renderers. Basically the vertices data gets stored someplace that is ignored.
Since this is in the abstract PGraphics class it must have been used by the other renderers at some point, but the code in the subclasses evolved so that it is no longer used.
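As a quick sanity check that the header string in the test sketch really lines up with the undocumented signature, splitting it yields exactly 37 names:

```python
# Same column list as in the test sketch above.
COLUMN_NAMES = ("X Y Z FILL_R FILL_G FILL_B FILL_A TEXTURE_U TEXTURE_V "
                "NORMAL_X NORMAL_Y NORMAL_Z EDGE STROKE_R STROKE_G STROKE_B "
                "STROKE_A STROKE_WEIGHT TRANS_X TRANS_Y TRANS_Z VIEWSPACE_X "
                "VIEWSPACE_Y VIEWSPACE_Z VIEWSPACE_W AMBIENT_R AMBIENT_G "
                "AMBIENT_B SPECULAR_R SPECULAR_G SPECULAR_B SHINE EMISSIVE_R "
                "EMISSIVE_G EMISSIVE_B BEEN_LIT HAS_NORMAL").split()

# One float per column in the 37-float vertex() signature.
assert len(COLUMN_NAMES) == 37
```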
Although there is one renderer that this would work for: the py5shapely renderer that I have not yet open sourced. When building that renderer I came across this 37 column data table thing and used it in the py5shapely implementation. That's why I knew about this. I thought the other renderers supported it though. Oh well! It was an idea.
Anyhow, since this is misleading, I'll remove it from the documentation since nobody should use it and it isn't supported by Processing.
The vertex() method documentation contains this signature:
This signature refers to the undocumented Processing feature that allows you to pass 37 floats to this method to set the vertex's position, stroke, fill, stroke weight, etc. The variable name happens to be v, which is the same as the v typically used for uv texture mapping. This explains the incorrect parameter description above. To address this, the generator code needs to intercept the Java info and override this part of the data. Setting the parameter name to something unique will let me write the correct description and include it in the documentation.
Py5Graphics.vertex() also has this same problem. Are there other methods that have incorrect variable descriptions?