michaelnorman-au opened this issue 2 years ago
Increasing Access
I can see this as a way to drop the barrier to entry for making more organic looking 3D shapes that you can render with the same tools p5 gives you for its own 3D primitives. To do this right now, you'd have to do a lot of math yourself, or learn 3D modelling software and export a model, and that has a large time cost (and potentially a monetary cost.)
It could act in a similar fashion to UV assignment for custom shape vertices, adding an extra two parameters on the end (in WEBGL mode) for the UV coordinates.
This will also need https://github.com/processing/p5.js/issues/5631 to be fixed in order not to lose the texture coordinate data that the user supplied.
For `bezierVertex` and `quadraticVertex`, I think we'll actually need UV coordinates for all of the control points (so 3 sets of UVs in `bezierVertex` and 2 for `quadraticVertex`). Then, when we convert those to `vertex` calls, we'd also mix the UV values with the same weights we use for the position values. Maybe it's the same for `curveVertex`, but I'd need to read up a bit on Catmull-Rom splines first (are the curves also contained in the convex hull of the control points? If not, do we risk getting weird UVs in the interpolated regions?)
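To make the weight-mixing idea concrete, here's a small illustrative sketch in plain JavaScript (not p5 internals): a cubic Bézier evaluator that applies the same Bernstein weights to every attribute stored on a control point, so texture coordinates interpolate exactly like positions.

```javascript
// Evaluate a cubic Bézier at parameter t, interpolating every attribute
// (here x, y, u, v) of the four control points with the same Bernstein
// weights. Illustrative only, not p5.js internals.
function cubicBezierPoint(p0, p1, p2, p3, t) {
  const w0 = (1 - t) ** 3;
  const w1 = 3 * (1 - t) ** 2 * t;
  const w2 = 3 * (1 - t) * t ** 2;
  const w3 = t ** 3;
  const out = {};
  for (const key of Object.keys(p0)) {
    out[key] = w0 * p0[key] + w1 * p1[key] + w2 * p2[key] + w3 * p3[key];
  }
  return out;
}

// Control points with both positions and texture coordinates attached:
const a = { x: 0,   y: 0,   u: 0,    v: 0 };
const b = { x: 50,  y: 100, u: 0.25, v: 1 };
const c = { x: 150, y: 100, u: 0.75, v: 1 };
const d = { x: 200, y: 0,   u: 1,    v: 0 };

const mid = cubicBezierPoint(a, b, c, d, 0.5);
// With these points, mid.x === 100 and mid.u === 0.5: the UVs are
// blended with exactly the same weights as the positions.
```

The point of the sketch is that once UVs live on the control points, no new math is needed: the existing curve evaluation handles them for free.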
If we add a UV coordinate to the longest form of the `bezierVertex` function, its signature becomes this:

```js
bezierVertex(x2, y2, z2, u2, v2, x3, y3, z3, u3, v3, x4, y4, z4, u4, v4)
```

This is kind of a lot to grok. It's a bigger API departure, but maybe one could introduce a `bezierControlPoint` function, which lets you split the above into three calls:

```js
bezierControlPoint(x2, y2, z2, u2, v2)
bezierControlPoint(x3, y3, z3, u3, v3)
bezierControlPoint(x4, y4, z4, u4, v4)
```
Could we use the starting point and direction of a shape's perimeter to infer the texture map, instead of requiring the user to pass in texture coordinates? [^1] This may improve the user experience without sacrificing much flexibility, and it’d eliminate the need for major API changes.
Having some clarity on this issue will help me to organize work on #6560. I'll explain my thinking based on the example from issue #5699 (shown below). I'm in unfamiliar territory here, so please don't hesitate to correct me :) I'll address direction and distance separately.
If the user specifies the curved shape between `beginShape()`/`endShape()` by starting from the top left and proceeding clockwise, then we could map $a$ to the top left corner of the curved shape, and when we move clockwise in $uv$-space, we could also move clockwise in $xy$-space. If the user draws their shape counterclockwise, a clockwise movement in $uv$-space could correspond to a counterclockwise movement in $xy$-space.
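To detect which direction the user drew the shape, the usual trick is the shoelace formula for signed area. This is just an illustrative sketch of that idea, not an existing p5.js feature; a curved outline would first be flattened to a polyline before applying it.

```javascript
// Signed area of a closed 2D polygon via the shoelace formula.
// points is an array of [x, y] pairs in drawing order.
function signedArea(points) {
  let sum = 0;
  for (let i = 0; i < points.length; i++) {
    const [x1, y1] = points[i];
    const [x2, y2] = points[(i + 1) % points.length];
    sum += x1 * y2 - x2 * y1;
  }
  return sum / 2;
}

// In p5's screen coordinates (y increases downward), a positive signed
// area means the points were listed clockwise as seen on screen.
function isClockwiseOnScreen(points) {
  return signedArea(points) > 0;
}

// Right, then down, then left, then up on screen: clockwise.
isClockwiseOnScreen([[0, 0], [1, 0], [1, 1], [0, 1]]); // → true
```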
We might have two modes: `SEGMENT` and `PERIMETER`. [^2] Maybe we could set this with a `textureMap()` function that would accompany the existing `textureMode()` and `textureWrap()` functions.
In `SEGMENT` mode, we could map each edge of the texture image to a rendered path segment (the top edge could map to the first segment, the right edge could map to the second segment, and so on). [^3]
In `PERIMETER` mode, we could ignore edges and map the entire texture perimeter to the entire shape perimeter (i.e. moving along the texture perimeter could correspond to moving along the shape perimeter by a corresponding amount).
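As a sketch of the hypothetical `PERIMETER` idea (nothing like this exists in p5.js today), the texture's square perimeter can be parameterized by a single fraction t, so a point some fraction of the way around the shape's outline gets the UV the same fraction of the way around the texture:

```javascript
// Map a fraction t in [0, 1) of the distance traveled along the shape's
// perimeter to a point the same fraction of the way around the texture's
// perimeter, starting at the top-left corner (u=0, v=0) and moving
// clockwise: top edge, right edge, bottom edge, left edge.
// Illustrative sketch of a hypothetical PERIMETER mode.
function perimeterUV(t) {
  const s = (t % 1) * 4; // treat the square as four unit-length edges
  if (s < 1) return { u: s, v: 0 };     // top edge, left -> right
  if (s < 2) return { u: 1, v: s - 1 }; // right edge, top -> bottom
  if (s < 3) return { u: 3 - s, v: 1 }; // bottom edge, right -> left
  return { u: 0, v: 4 - s };            // left edge, bottom -> top
}
```

A real implementation would compute t for each rendered point from cumulative arc length along the flattened shape outline.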
[^1]: This is at least somewhat similar to a feature of Vectorworks, in which "The starting point and direction the wall is drawn affect how a texture is applied."
[^2]: In the future, we may want to support new kinds of surface primitives (e.g. we could allow the user to create Bézier triangles by using `bezierVertex()` together with the `TRIANGLES` shape kind). I haven't yet considered whether the mode names might need to be revised to accommodate those cases.
[^3]: If there are fewer than four path segments, multiple texture edges could map to the last segment. If there are more than four segments, we could maybe fall back to `PERIMETER` mode.
> Could we use the starting point and direction of a shape's perimeter to infer the texture map, instead of requiring the user to pass in texture coordinates?
I think the answer is yes, we can, but we can't eliminate manually specified texture coordinates from the API. Some explanation:
A common technique is to "pin" texture coordinates to vertices, but then move the vertices in 3D space without changing their texture coordinates, making the texture stretch to fit the shape as the shape changes. If the texture coordinates are derived from the positions and directions in 3D space, then it becomes hard to change one without changing the other.
The nice thing about many curve formulations used in graphics is that control points are effectively the same as vertices in terms of the data they store, and how you work with them. In most 3D programs, when doing UV mapping, you can see your curves in 3D and also where their texture coordinates lie on a 2D plane. Each vertex corresponds to a point in both views. Each control point on a curve also corresponds to a point on both. This is nice because it is predictable: the same algorithms for evaluating the curve in 3D space also apply to the 2D UV coordinates. Here's a screenshot of some older software using a curved model and showing its curves in 2D space as well:
Lastly, since evaluating the UV along a curve is done by interpolating the control point UVs the same way one interpolates "regular" points, I think if we want to infer UVs, we'd do so by generating UV values per control point. That would mean that under the hood, we would still end up with a representation like this regardless, so I think for that reason alone, it makes sense to start with this representation.
I think it's still potentially useful to give users a way of getting UVs without having to define them themselves for cases when they aren't trying to pin a specific texture map image to a specific curve. I think my worry is that this is a pretty general tool, drawing curves, and that there are a lot of edge cases that come up. Something like manual mapping eliminates all the edge cases at the cost of more manual work for the programmer; derived UVs can maybe make an easier API but it puts a lot more pressure on us to answer questions like these:
For example: in `SEGMENT` mode, what would you do if you're drawing a shape that has more than four segments?

I think something like this would definitely be useful, but I'm not sure I'd want to rely on that as the only API. Another thought: could we design the API in a way that allows p5 libraries to define mapping modes? E.g., if they had a manual UV mapping API available, could a library override the shape drawing behaviour for when you don't specify UVs and derive them with whatever strategy they like?
One of the reasons I think deriving UVs is something that can be done separately from our curve implementation is because I think it's a useful feature for more than just curves.
One example: if you want to build a 3D shape out of spheres, you run into a similar problem to what I was saying about not mapping to the whole texture at once. Each p5 sphere has UVs that map to the whole texture, so if you combine the spheres into one big shape and then apply a texture, you'll see that texture repeated on every sphere. An idea that could help deal with that is to provide some APIs to reassign all the UVs in the whole model, and provide a few strategies for doing so:
One such strategy might be a spherical projection, e.g. `uv = normalToEquirectangular(normalize(pt - center))`.

Anyway, I'm not advocating that we build the above as part of this curve drawing API; hopefully it just paints the picture that we're tapping into a fairly complex problem that has a lot of directions it can go in, and why my inclination is to try to build something that others can build their own methods on top of, in addition to some simpler solutions.
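The `normalToEquirectangular` name above is hypothetical, not a real p5.js function. As a sketch of what such a helper might look like, assuming a standard equirectangular (longitude/latitude) convention and an already-normalized input:

```javascript
// Hypothetical helper: convert a unit surface normal to equirectangular
// texture coordinates, so e.g. every sphere in a merged model could be
// remapped into one shared texture. Assumes n = {x, y, z} is normalized.
function normalToEquirectangular(n) {
  const u = 0.5 + Math.atan2(n.x, n.z) / (2 * Math.PI); // longitude
  const v = 0.5 - Math.asin(n.y) / Math.PI;             // latitude
  return { u, v };
}

// A normal pointing straight "up" (y = 1) maps to the top of the texture
// (v = 0); a normal along +z maps to the center of the texture.
```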
Thank you so much for your thoughtful reply @davepagurek! There's definitely a lot to consider. I'll start by sharing some initial thoughts about the API, under the assumption that users will manually specify texture coordinates in all cases. I still want to think about this more, but I'm pretty excited about it, since I think the API change alone could be a big improvement.
Here are three options, exemplified by the case of Bézier curves:
1. `bezierVertex(x2, y2, z2, u2, v2, x3, y3, z3, u3, v3, x4, y4, z4, u4, v4)`
2. `bezierControlPoint(x, y, z, u, v)` (called multiple times)
3. `bezierVertex(x, y, z, u, v)` (called multiple times)

The third option, which I added, is the same as Option 2 but uses "Vertex" instead of "ControlPoint."
The new API could improve readability, consistency, and extensibility.
Advantages:

- `bezierVertex()` takes multiple points and really specifies a curve, not just a vertex. In fact, the corresponding commands in the native canvas API and the SVG specification work basically the same way as p5's commands, and they're "curve" commands, not "vertex" commands. This discrepancy was observed by @zenozeng back in 2015.
- `curveVertex()` already uses "Vertex" in its name to refer to points that may or may not be on the curve itself.
- `curveVertex()` specifies both points that only guide the curve and points that are actually on the curve, so it forces us to interpret control points and vertices as being the same. However, `bezierVertex()` forces us to interpret control points and vertices as distinct concepts: this function has a singular name but takes coordinates for three points, which only makes sense if we distinguish control points (the first two points) from vertices (the third point). This is a problem I've been wishing we could fix, and the new API manages to fix it!
- The same naming could extend to a command like `arcVertex()`: the parameter list would only contain coordinates for one point, so it would be less confusing than a command like the current `bezierVertex()`.
- `bezierVertex()` currently specifies two signatures: `bezierVertex(x2, y2, x3, y3, x4, y4)` and `bezierVertex(x2, y2, z2, x3, y3, z3, x4, y4, z4)`. In the new API, there would be only one simple signature: `bezierVertex(x, y, [z], [u], [v])`. This is more beginner friendly.
- Since the new `bezierVertex()` function would work more like `curveVertex()`, it eliminates the need for a separate `quadraticVertex()` command. This has multiple benefits:
  - Eliminating `quadraticVertex()` improves API consistency, since it's already the case that there is no `quadraticBezier()` curve command; there's just a `bezier()` command.
  - Eliminating `quadraticVertex()` also eliminates confusion arising from a conflicting use of the term "quadratic": math students learn that the vertex of a quadratic equation's graph corresponds to its maximum or minimum value, which is not true of a quadratic Bézier vertex. If we want, we could deprecate `quadraticVertex()` before removing it entirely in the next major version of p5.js.
- The new API is consistent with the `curveVertex()` API, which people are already used to.
- The new API makes it possible to start a shape with `bezierVertex()`. The current API allows the user to call `vertex()` to start and continue a polyline, and to call `curveVertex()` to start and continue a Catmull-Rom spline; however, to start a shape with a quadratic or cubic Bézier curve, the user needs to mix commands, specifying only the first vertex with `vertex()`. (Since the vertices are all specified in a common `(x, y, [z], [u], [v])` format, the implementation will just set the first point aside as the first vertex regardless of the vertex type. So, supporting the old syntax shouldn't require us to complicate the codebase.)
- With the new `bezierVertex()`, we open up the possibility of creating higher-order Bézier curves without increasing the size of the p5.js API, as noted above. With the current API, if we specify the first vertex with `vertex()` and specify the next four with `quadraticVertex()`, we only have five vertices. (We could potentially use `vertex()` to specify the sixth vertex at the end, but this is inconsistent with how `vertex()` is used everywhere else, and it requires us to mix command types to create a single primitive.)

Disadvantages:

- The current `bezierVertex()` corresponds to the canvas's `bezierCurveTo()` and SVG's `C` command. They all take two control points and one vertex. Changing the API would mean a bigger departure from these commonly used APIs (as well as the Processing API). However, beginners probably won't tend to know those other APIs anyway, and more experienced users may have less trouble adapting, so this doesn't seem like a major concern.

If others want to help extend these lists of advantages and disadvantages, I'd be happy to incorporate their comments into the lists above (with links to the original comments). That way we can compile everyone's thoughts in one place.
If we reach a consensus, it seems like we could solve the original requirements of this issue now (as part of #6560), rather than waiting for the next major version of p5.js. If we use "Vertex" in all the function names instead of "ControlPoint," we wouldn't need to maintain separate reference pages for deprecated features. Later, we could eliminate deprecated usage altogether in p5.js 2.0, which would eliminate any performance hit caused by having to process two types of parameter lists.
Just want to save my progress on this issue: this sketch has working examples of passing texture coordinates to `bezierVertex`, `quadraticVertex`, and `curveVertex` by calling them multiple times.
Is there any solution to this? I am also trying to implement it, but it gives an error that `bezierVertex` expects a maximum of 9 parameters.
Not currently, but this is something we aim to enable in our 2.0 release!
Thank you for the reply, @davepagurek. Is there any date for the release? We are currently working on a big project in which we have to place textures on each shape, and most of our shapes include Bézier and quadratic vertices.
Not yet, so for now your best bet will be to manually convert your Béziers to polylines that you can use with `vertex()`. To do this, you can use `bezierPoint()` to calculate positions along your curve. Normally you'd use three calls to this for x, y, and z, but for texture coordinates, you can add two additional calls for u and v.
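Here's a sketch of that workaround in plain JavaScript. The local `bezierPoint` below uses the same cubic Bernstein formula as p5's built-in `bezierPoint(a, b, c, d, t)`; the loop samples each attribute (x, y, u, v) with its own call, producing points you could feed to `vertex()`.

```javascript
// Same cubic Bernstein evaluation as p5's bezierPoint(a, b, c, d, t).
function bezierPoint(a, b, c, d, t) {
  const s = 1 - t;
  return s * s * s * a + 3 * s * s * t * b + 3 * s * t * t * c + t * t * t * d;
}

// Approximate one cubic Bézier segment as a polyline whose entries can be
// fed to vertex(x, y, u, v) inside beginShape()/endShape(). The four
// anchor/control points carry x, y, u, and v, and each attribute is
// sampled along the curve with its own bezierPoint call.
function bezierToVertices(p0, p1, p2, p3, steps = 20) {
  const verts = [];
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    verts.push({
      x: bezierPoint(p0.x, p1.x, p2.x, p3.x, t),
      y: bezierPoint(p0.y, p1.y, p2.y, p3.y, t),
      u: bezierPoint(p0.u, p1.u, p2.u, p3.u, t),
      v: bezierPoint(p0.v, p1.v, p2.v, p3.v, t),
    });
  }
  return verts;
}

// Inside a p5 sketch you would then draw the segment like this:
// beginShape();
// for (const p of bezierToVertices(a, b, c, d)) vertex(p.x, p.y, p.u, p.v);
// endShape();
```

Increase `steps` for smoother curves at the cost of more vertices.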