processing / p5.js

p5.js is a client-side JS platform that empowers artists, designers, students, and anyone to learn to code and express themselves creatively on the web. It is based on the core principles of Processing.
http://p5js.org/

Refactor vertex functions to enable composite paths #6560

Open GregStanton opened 1 year ago

GregStanton commented 1 year ago

Increasing Access

This enhancement will provide greater access to individuals in multiple circumstances.

Code contributors: For volunteers who have less experience with large codebases or less free time to navigate them, it will become feasible to contribute to vertex-related features. For example, there is community support for a new arcVertex() function, but the complexity of the current codebase has impeded progress.

Users: For users who want to make composite shapes, the existing p5 API will work as they expect it to, so they will face less confusion. A knowledge barrier will also be removed, since they won't need to learn the native canvas API or SVG in order to make composite shapes.

Most appropriate sub-area of p5.js?

Feature enhancement details

Problem

Mixing path primitives

Users expect to be able to create composite paths with a mix of path primitives (line segments, quadratic Bézier curves, cubic Bézier curves, and Catmull-Rom splines). The p5 reference gives no indication that such mixing is unsupported, and it is already possible with native canvas paths and SVG paths.

However, the current p5 implementation assumes shapes include at most one type of nonlinear path. For example, see the 2D renderer's endShape() method. This limitation leads to unexpected behavior in a range of cases. A couple of these are illustrated below.

[Table comparing expected shapes with actual p5 output (images not reproduced here): one row mixes quadratic and cubic Béziers, another mixes lines and Catmull-Rom splines.]

Complexity of the codebase

With the current implementation, it will be difficult to support mixing vertex types (e.g. #906), to track down bugs (e.g. #6555), and to implement new features (e.g. #6459). The difficulty arises from specific functions as well as broader code organization:

Solution

Separation of concerns

We can solve many of these problems at once by separating concerns. The idea is to create classes for the different vertex types, each with their own data and their own methods that add them to the canvas. Then vertex(), quadraticVertex(), etc. just push vertex instances of the appropriate type into a path array. When endShape() is called, it simply loops over the vertex objects, and they add themselves to the canvas; endShape() doesn't need to know that curveVertex() requires special logic, for example. By refactoring in this way, mixing path primitives is essentially automatic, with no change to the public p5 API.
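
To make that concrete, here's a minimal sketch of the idea (the class and function names are only illustrative, not the actual p5 internals):

// Each vertex type owns its data and knows how to add itself to a canvas path.
class LineVertex {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  addToCanvasPath(ctx) {
    ctx.lineTo(this.x, this.y);
  }
}

class QuadraticVertex {
  constructor(cx, cy, x, y) {
    Object.assign(this, { cx, cy, x, y });
  }
  addToCanvasPath(ctx) {
    ctx.quadraticCurveTo(this.cx, this.cy, this.x, this.y);
  }
}

// vertex() and quadraticVertex() just push instances into one path array...
const path = [];
const vertex = (x, y) => path.push(new LineVertex(x, y));
const quadraticVertex = (cx, cy, x, y) => path.push(new QuadraticVertex(cx, cy, x, y));

// ...and endShape() just loops; it doesn't need to know which vertex types are in the path.
// (Handling of the initial moveTo() is omitted for brevity.)
function endShape(ctx) {
  ctx.beginPath();
  path.forEach((v) => v.addToCanvasPath(ctx));
  ctx.stroke();
}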

Proof of concept

I put together a proof of concept in the p5 Web Editor; it includes an implementation of arcVertex() as proposed in issue #6459. A separate sketch shows how it fixes unexpected mixing behavior. With these two enhancements together (arcVertex() and composite paths), p5's beginShape()/endShape() will be able to produce composite paths with all the basic primitives of the native canvas API and SVG, plus Catmull-Rom splines. Based on some discussion, it looks like it will also help with the bug observed in issue #6555. Although the proof of concept focuses on the 2D case, the goal is to accommodate the 3D case as well, with some adjustments.

Update: Task list

Calling all volunteers!

@davepagurek, @limzykenneth, @xanderjl, @Gaurav-1306, @Jaivignesh-afk, @sudhanshuv1, @capGoblin, @lindapaiste

Based on discussions here, in the online meeting, and on Discord, we're going to use this comment as an overall implementation guide, and we've decided to organize the work according to the following tasks:

*: A separate GitHub Issue does not need to be created. **: A separate GitHub Issue should be created, but not yet.

For all the unmarked tasks, I'll set up separate issues with more details as soon as I get a chance, and the task list will link to them. Any code should be committed to the new vertex-refactor branch @davepagurek set up, and we'll merge it into the main branch once the time is right. For right now, we can get started by discussing the tasks marked with a single asterisk!

Gaurav-1306 commented 1 year ago

I would like to work on this issue.

Just a little hiccup: I can start in the next 2 weeks, so if someone wants to do it before that they can, but after that I would like to work on this issue. @GregStanton thanks again for the update 😃

GregStanton commented 1 year ago

Hi @Gaurav-1306! It's great that you want to work on this! We may want to break this up into smaller tasks that multiple people can work on. Maybe the first thing is to make a list of people who are interested. Maybe one person can take the lead by dividing up tasks. When I get a chance, I can think more about that, but I'd love to hear from others on this.

Jaivignesh-afk commented 1 year ago

I would like to work on this as well!

limzykenneth commented 1 year ago

Great write-up, @GregStanton! I like the idea of having internal classes for each type of vertex, and it also makes it possible for add-on libraries to implement their own unique vertices that work with beginShape() and endShape().

As a start, can we work out a common interface that each of the classes should adhere to? i.e. what properties and methods should they all have? What are the signatures of the methods?

After that or at the same time we can figure out how the internals call the methods/interface. For anyone interested in working on this, please work out a proposal in this issue first. You can test out code implementation in the web editor as @GregStanton has done with the proof of concept. Thanks!

GregStanton commented 1 year ago

Thanks for the kind words @limzykenneth, and for helping to direct the effort! I'm really glad you like the idea.

I'm not sure if you noticed, but the code that makes my proof of concept work is all located in shape.js, inside the project's file directory. I think it may provide initial answers to your questions. For example, the vertex classes all have the following interface:
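
(The snippet itself didn't come through in this copy of the thread; a hypothetical reconstruction, based only on the addToCanvasPath() method referenced in the replies below, would be something like the following, though the actual shape.js may differ:)

interface Vertex {
  addToCanvasPath(ctx: CanvasRenderingContext2D): void
}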

Possible next steps/questions

  1. Does anyone want to extend my proof of concept to the 3D case? This might mean adding a method or getter analogous to addToCanvasPath(), but it would output vertices; @davepagurek mentioned something along these lines on Discord.
  2. Does anyone want to extend my proof of concept so that it supports contours? I think Dave also suggested a nested array that starts with _path as in shape.js and holds additional arrays for the contours after that.
  3. I'm glad you mentioned extensibility; that's a good point to keep in mind. I guess we'd need to expose something like shape.js's _path array in order to allow add-on libraries to create new vertex types? Maybe there's another way.

Edit: Maybe @Gaurav-1306 or @Jaivignesh-afk can divide up points 1. and 2. above, regarding the 3D case and contours?

limzykenneth commented 1 year ago

I'm glad you mentioned extensibility; that's a good point to keep in mind. I guess we'd need to expose something like shape.js's _path array in order to allow add-on libraries to create new vertex types? Maybe there's another way.

We don't necessarily have to figure this out at this point. Once we have the implementation working with the flexibility we want, we can revisit it later. That is to say, it's a nice-to-have but not a must-have.

davepagurek commented 1 year ago

For 3D support: in WebGL, each path/contour ends up just as an array of vertices, so each segment would need to be able to convert itself into a polyline. This will also depend on the level of detail, so its interface for WebGL would have to look something like:

interface Segment {
  first: boolean
  addToCanvasPath(ctx: CanvasRenderingContext2D): void
  addToPolyline(polyline: Array<p5.Vector>, curveDetail: number): void
}

Here I've also made this a void function that mutates the array to be consistent with the way the 2D one works, but outputting an array of vertices which we then concatenate to a big list would also work.

I've also added a first property here because I assume for 2D mode, each vertex may need to know if it needs to moveTo or if it needs to lineTo based on whether it's the start of a path/contour.

To support contours, if we have:

type CompositePath = Array<Segment>

...then a full shape would be:

type Shape = Array<CompositePath>

where beginShape() starts off the shape as [[]] (just one composite path, initially empty), and each beginContour call pushes a new empty array to the shape to add a new contour. (The main shape and contours can be treated the same.)
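
A rough sketch of that bookkeeping (the helper names here are placeholders, not proposed API):

// Shape = Array<CompositePath>; CompositePath = Array<Segment>
let shape;

function beginShape() {
  shape = [[]]; // one composite path, initially empty
}

function beginContour() {
  shape.push([]); // a contour is just another (initially empty) path
}

function addSegment(segment) {
  shape[shape.length - 1].push(segment); // always append to the most recent path
}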

GregStanton commented 1 year ago

Replying to https://github.com/processing/p5.js/issues/6560#issuecomment-1821679406 by @davepagurek:

Thanks! This is very helpful. There's some tension between the idea of vertex classes and segment classes; I think you were right to switch over to segment classes. The deciding factor for me is that it seems natural to put all the Catmull-Rom vertices in a single object, and it doesn't make sense to think of that as a singular "vertex" object.

Below, I propose a few tweaks that combine (and hopefully refine) the interfaces we each suggested. I'm using a TypeScript-like syntax as an informal shorthand (e.g. get doesn't actually work in TypeScript interfaces, as far as I know).

Proposed interfaces

If these sound good, we can test them by incorporating them into a new version of the proof of concept. (That's good because I don't have the extra time right now to think this through completely! I think it makes sense, though.)

interface PathSegment {
  data: Array<number | string>;
  type: string;
  get endVertex(): p5.Vector;
  addToCanvasPath(ctx: CanvasRenderingContext2D): void;
  addToVertexPath(shape: Shape, curveDetail: number): void;
}

interface Path {
  segments: Array<PathSegment>;
  currentVertex: p5.Vector;
}

interface Shape {
  boundary: Path;
  vertexBoundary: Array<p5.Vector>;
  contours: Array<Path>;
  vertexContours: Array<Array<p5.Vector>>;
  currentPath: Path;
}

Notes

sudhanshuv1 commented 1 year ago

I would also like to work on this issue!

davepagurek commented 1 year ago

Thanks @GregStanton! I took some time to think through the implementation some more and have some more refinement to propose (or at least spelling out some of the separation of concerns more explicitly for implementers.) Ideally p5.Renderer just acts as an adapter between the p5 public API and Shape. Then, when the shape is complete, p5.RendererGL and p5.Renderer2D only need to extract the data from Shape in the format they need.

For easier testing and to keep the code clean, I'd ideally like to keep Shape/Path/PathSegment as decoupled as possible. There's a bit of mixing right now between what the data of these classes contain, and the algorithm of how they get constructed. Both are important, but I'd ideally like to have some invariants so that there's less confusing state management. How do you feel about these as some things to try to keep true during construction/drawing:

With that in mind, here are some more thoughts for public interfaces (omitting internal state for now as an implementation detail).

interface Shape {
  // To be called by `p5.Renderer` from public drawing API methods:
  addVertex(v: VertexInfo)
  addCurveVertex(v: VertexInfo)
  addArcVertex(v: VertexInfo, options: ArcVertexOptions)
  addQuadraticVertex(v1: VertexInfo, v2: VertexInfo)
  addBezierVertex(v1: VertexInfo, v2: VertexInfo, v3: VertexInfo)
  startPath()
  endPath()
  closePath()

  // To be called from `p5.RendererGL` and `p5.Renderer2D` to draw to the screen
  toVertices(curveDetail: number): Array<Array<VertexInfo>> // Array of arrays for each contour
  drawToCanvas(ctx: CanvasRenderingContext2D)
}

interface VertexInfo {
  position: p5.Vector

  // WebGL-specific data per vertex or control point:
  texCoord?: [number, number]
  fill?: [number, number, number, number]
}

Not necessary for a proof-of-concept, but just as a heads up, WebGL mode has more info than just position data for vertices, so I've made an interface for everything it might include. I think you can generally substitute VertexInfo everywhere we previously only had p5.Vector, so it's not too hard of a switch when implementing things for real.

I've also only opted to include startPath and endPath here, still treating paths/contours uniformly, because while it's potentially easier for people to understand contours as being separate things from the boundary, there actually is no underlying distinction in the data: if you want to draw a Venn diagram shape, you need to draw two circles, and it doesn't matter which circle is the "contour" or the "boundary": both paths contribute to both at different times, so there's not really a reason for users to add a boundary later. For that reason I feel like it makes more sense in the renderer implementation to match how the rendering works rather than how the public API is set up. (The public API's insistence on there being one boundary if you have contours can get in the way sometimes, especially when using non-p5 data. I'd love the ability to draw shapes where you only have contours so that if you want to draw a font glyph from its source path information, you don't have to special-case the first contour in the glyph, and can instead treat all contours in the glyph uniformly.)

So then Shape has to make calls to construct the paths, and to "export" them to the canvas or to a vertex array. I imagine that means Shape makes new paths in the start/endPath methods, and otherwise forwards the vertex methods on to the paths themselves.

interface Path {
  addVertex(v: VertexInfo)
  addCurveVertex(v: VertexInfo)
  addArcVertex(v: VertexInfo, options: ArcVertexOptions)
  addQuadraticVertex(v1: VertexInfo, v2: VertexInfo)
  addBezierVertex(v1: VertexInfo, v2: VertexInfo, v3: VertexInfo)
  closePath()

  toVertices(curveDetail: number): Array<VertexInfo> // Just one array now at this level
  drawToCanvas(ctx: CanvasRenderingContext2D)

  firstVertex(): VertexInfo // For segments, if needed
}

The vertex adder methods on Path will do a lot of the heavy lifting of figuring out when to construct new segments and of what type. The idea is that that's Path's responsibility, and then Segment is just responsible for translating that data into actual drawing commands or vertices.

Since I'm just documenting the public API, I've omitted things like the current vertex. The idea is that the path should know its own current vertex (or can be derived by getting the last vertex of its last segment), but nothing externally needs to know that.

Following @GregStanton's idea, if segments don't know their index, paths would create an array containing the first vertex of the first segment, and then calling addToVertexPath on each segment adds everything except the first vertex of the segment to the path since that will be covered by the array already. I've added a startVertex getter to facilitate that.

interface PathSegment {
  type: string
  index: number
  startVertex(): VertexInfo
  endVertex(): VertexInfo
  drawToCanvas(ctx: CanvasRenderingContext2D)
  addToVertexPath(shape: Shape, curveDetail: number)
}
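
Putting the last two pieces together, Path.toVertices() could look something like this (a sketch that treats addToVertexPath() as appending to a plain vertex array, which isn't settled yet):

class Path {
  constructor() {
    this.segments = [];
  }

  toVertices(curveDetail) {
    if (this.segments.length === 0) return [];
    // Seed the polyline with the first segment's start vertex...
    const vertices = [this.segments[0].startVertex()];
    // ...then each segment appends everything except its own start vertex,
    // which is already covered by the previous segment (or by the seed above).
    for (const segment of this.segments) {
      segment.addToVertexPath(vertices, curveDetail);
    }
    return vertices;
  }
}
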
GregStanton commented 1 year ago

Replying to https://github.com/processing/p5.js/issues/6560#issuecomment-1824900276 by @davepagurek:

Thank you for all this work! One of my favorite things about software development is iterating like this. My previous proposal needed to be adjusted to improve encapsulation, and it looks like you've taken the next steps. Instead of state changes coming from anywhere, we now have objects being responsible for modifying themselves. I see how removing internal state from the interfaces (like the data property in my last proposal) expresses this intent more clearly.

Eventually, we may want to specify the interfaces, the internal state for classes that implement them, and as @limzykenneth suggested, the expected call sites for the various methods. That could be a guide for implementers.

But first, I think I'll be able to provide better feedback on the interface details after I have a few questions answered.

Questions

2D vs. 3D

I'm now thinking we really have two shape models:

If we follow the single-responsibility principle, it seems like these should be implemented in separate classes. People interested in the 2D functionality may think the vertex-related code is relevant to them, when in fact it's not.

Maybe we could specify a common Shape interface that abstracts out anything common to the two cases, and then we could extend it with Shape2D and Shape3D interfaces? I'd need to spend more time to figure out what this would look like, so I figure it's good to float the general idea first.

Did I break it up too much?

Conceptually, our shapes are built from paths, and our paths are built from segments or vertices. It seems nice to organize our code to reflect this. On the other hand, breaking it up means a single function call turns into three. If we want to draw a shape, we now call a drawing method on the shape, and that calls a drawing method on a path, and that calls a drawing method on a segment.

It's not currently clear to me if this is a step backward or a step forward. Maybe we should focus on breaking up the code horizontally (between 2D and 3D cases) rather than vertically (shapes, paths, and path segments)? Is it possible that doing both would mean breaking things up too much? Maybe we could just use a Path type in our specification, rather than a full interface, like you had originally?

Boundary vs. contours

I'd love the ability to draw shapes where you only have contours...

I think you're right about combining "boundary" and "contours." The original idea is to make beginShape()/endShape() as flexible as the native canvas API by supporting composite paths; since p5 already supports moveTo() operations via beginContour()/endContour(), it seems reasonable to take advantage of the full generality there as well.

Do you see any reason why we couldn't treat beginContour()/endContour() as a general way to create the paths that make up a shape, without requiring a separate boundary to be created first? It seems like this could be done without breaking existing sketches.

davepagurek commented 1 year ago

If we follow the single-responsibility principle, it seems like these should be implemented in separate classes. People interested in the 2D functionality may think the vertex-related code is relevant to them, when in fact it's not.

Maybe we could specify a common Shape interface that abstracts out anything common to the two cases, and then we could extend it with Shape2D and Shape3D interfaces? I'd need to spend more time to figure out what this would look like, so I figure it's good to float the general idea first.

I wonder if the ability to get vertices out of paths in general might be something we want once we have a format that makes it easy, so we can get functionality like getPointAtLength() on SVG path objects and do things like interpolate between paths. That might be overcomplicating things for now, but to keep that option open while also keeping the vertex and canvas implementations separate, maybe instead of my original suggestion of having methods for both on each PathSegment, we do something like a visitor pattern that can walk a Path and convert its segments into whatever format the visitor is designed for? That would mean path data would be public, more like your original idea, and all the segment-type-to-vertex conversions would live in one place, and all the segment-type-to-canvas conversions in another place.

It's not currently clear to me if this is a step backward or a step forward. Maybe we should focus on breaking up the code horizontally (between 2D and 3D cases) rather than vertically (shapes, paths, and path segments)? Is it possible that doing both would mean breaking things up too much? Maybe we could just use a Path type in our specification, rather than a full interface, like you had originally?

That also works as long as we can still break out the common functionality between 2D and 3D mode. I think part of the problem right now is that because the data is handled differently across both modes, we end up with different behaviours between both. I think if the functionality to build up the path is common to both, then that fully addresses that concern for me.

Continuing to think about the visitor pattern sort of idea: if we make these as their own separate objects rather than tying it directly to Renderer2D and RendererGL (the renderers can just initialize the one they want to use), then we can still pretty easily add other operations on paths in the future.

Do you see any reason why we couldn't treat beginContour()/endContour() as a general way to create the paths that make up a shape, without requiring a separate boundary to be created first? It seems like this could be done without breaking existing sketches.

I think that should be ok! It's currently only hard because contours are special cased in the implementation, but if we decide to treat them uniformly internally, then I think we can expose that new functionality without breaking any backwards compatibility.

GregStanton commented 1 year ago

Replying to https://github.com/processing/p5.js/issues/6560#issuecomment-1826355046 by @davepagurek:

Next iteration

I put together a new iteration of the interface to help us move forward. The main changes are a separation of the 2D and 3D cases, the addition of a visitor interface for adding new functionality, and a return to the original idea of public shape data.

To add new vertex types (e.g. smooth quadratic and Bézier vertices), it should only be necessary to (a) create a new user-facing vertex function, (b) create a new segment class in 2D and 3D, and (c) add a new method to any existing visitor classes.

To add a new feature, it should only be necessary to add a new visitor class and decide how the client code will make use of it; a great trial feature might be something akin to SVG's getPointAtLength().

Note: This comment contains significant edits. I'm hoping this latest iteration is better, but it may still need work. If anyone wants to help improve it or else submit an alternative proposal, that would be great!

//--------BASE INTERFACES--------
type Shape = Array<Contour>

interface Contour {
  firstVertex: Vertex
  segments: Array<ContourSegment>
}

interface ContourSegment {
  //has internal access to full contour
  index: number
  getStartVertex(): Vertex
  getEndVertex(): Vertex
  //Catmull-Rom can have an additional extend(vertex: Vertex) method
  accept(visitor: Visitor)
}

type Vertex = Vertex2D | Vertex3D

//--------2D--------
interface ContourSegment2D extends ContourSegment {
  drawToCanvas(context: CanvasRenderingContext2D)
}

type Vertex2D = p5.Vector //alias for consistent naming

//--------3D--------
interface ContourSegment3D extends ContourSegment {
  addToVertexContour(curveDetail: number)
}

interface Vertex3D {
  position: p5.Vector

  // WebGL-specific data per vertex or control point:
  textureCoordinates?: [number, number]
  fill?: [number, number, number, number]
}

//--------VISITOR--------
interface Visitor {
  //2D
  visitLineSegment2D(line: LineSegment2D)
  visitQuadraticSegment2D(quadratic: QuadraticSegment2D)
  visitBezierSegment2D(bezier: BezierSegment2D)
  visitCurveSegment2D(curve: CurveSegment2D)
  visitArcSegment2D(arc: ArcSegment2D)

  //3D
  visitLineSegment3D(line: LineSegment3D)
  visitQuadraticSegment3D(quadratic: QuadraticSegment3D)
  visitBezierSegment3D(bezier: BezierSegment3D)
  visitCurveSegment3D(curve: CurveSegment3D)
  visitArcSegment3D(arc: ArcSegment3D)
}

Notes

  1. I switched the naming from "path" to "contour" (a) to avoid confusion with the native Path2D, and (b) to be consistent with the p5 naming, since we're now thinking of p5 contours as the general paths that make up a shape.
  2. I'm not sure about all the trade-offs involved with making shape data public. This does reduce the need to forward method calls and also makes it easier to do things like extending curve segments.
  3. Each contour segment typically starts at the endpoint of the previous segment, but as @davepagurek noted, the very first segment has no predecessor. To address this, the first vertex of each contour can be stored separately in firstVertex. We could let vertex() set firstVertex if the current contour is empty and push a line segment otherwise (see the sketch after these notes).
  4. Regarding the build process, I think Renderer2D and RendererGL will each have their own versions of the user-facing vertex functions; they'll push 2D and 3D segments into the shape, respectively. Then, the user-facing functions can just call their counterparts on the current renderer.
  5. At some point, I guess we'll want to consider the kind parameter of beginShape(), or other existing features. Once we settle on the interface for the current features, maybe that can be a next step.
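
Here's a sketch of the vertex() behavior described in note 3 (currentContour() and LineSegment2D are placeholders for whatever names the implementation ends up using):

function vertex(x, y) {
  const contour = currentContour(); // the contour currently being built
  const v = new p5.Vector(x, y);
  if (contour.firstVertex === null && contour.segments.length === 0) {
    contour.firstVertex = v; // empty contour: just record the starting point
  } else {
    contour.segments.push(new LineSegment2D(v)); // otherwise add a line segment ending at v
  }
}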

Reference

For anyone following along, this seems to be a good introduction to the visitor pattern: https://refactoring.guru/design-patterns/visitor

Update

To be clear, I'm proposing that we make no distinction between vertex functions called directly inside beginShape()/endShape() and those called between beginContour()/endContour(). So, beginShape() starts off the shape with [/* empty contour */], where an empty contour is just a contour whose firstVertex is null and whose segments is []. Similarly, beginContour() either does nothing (if the current contour is empty) or it pushes in a new empty contour.

But... I didn't realize contours in p5 are closed by default. If we require CLOSE to be passed to endContour(), that would bring existing contours in line with the more general definition we're adopting here, and it would also make endContour() and endShape() work similarly. Unfortunately, that would be a breaking change. Oof. Would we be able to merge it?

Another update

I removed the type field from the ContourSegment interface. It was originally used to check if the current contour segment is a curve (a Catmull-Rom spline), so that curveVertex() would know whether to extend it or start a new segment. However, this was a kludge. It's probably better to use instanceof.

davepagurek commented 1 year ago

This is looking really promising! Replies to your notes:

4. Regarding the build process, I think Renderer2D and RendererGL will each have their own versions of the user-facing vertex functions; they'll push 2D and 3D segments into the shape, respectively. Then, the user-facing functions can just call their counterparts on the current renderer.

I was thinking about this some more, and it seems like there will always be some step where we have to manually parse the arguments into either 2D or 3D data, because even in WebGL mode, you can still call quadraticVertex(x1, y1, x2, y2) with just the 2D arguments. If we have to do that anyway, how do you feel about each segment being implemented by just one class instead of having 2D/3D variants, with the 3D data just being left undefined in the 2D case? This would save 2D/GL renderers from both implementing the logic to add to the current contour: I'm thinking, if each line segment is always a LineSegment class, then something (the p5.Renderer base class, or maybe a Shape class) can be responsible for taking the user's functions and constructing contour data out of it. Then maybe an abstract method on p5.Renderer called drawShape(completedShape) can be implemented by p5.Renderer2D/GL separately to apply whichever visitor it needs, and that's the only part that's different between the two?

5. At some point, I guess we'll want to consider the kind parameter of beginShape(), or other existing features. Once we settle on the interface for the current features, maybe that can be a next step.

This is a really good point! Currently, primitives like QUADS work kind of like contours: both add a moveTo() internally, either between quads, or between contours themselves. So maybe we need a translation layer between user functions and the segments we create, which in the QUADS case, waits for four vertices to come in, and then emits a new contour into the shape?

If so, then maybe we need something like a command queue, and a strategy for interpreting the queue. Maybe the default strategy always creates new segments, but we swap out the strategy when you change shape mode, so the QUADS strategy does nothing until there are four vertex commands in the queue, and then converts all of them into a new contour. Something like QUAD_STRIP also has a bit of a memory of the previous quad since successive quads share an edge, maybe that still could work if we let the strategy have state? Something like this:

class QuadStripMode {
  constructor() {
    // The edge shared with the previous quad in the strip
    this.lastEdge = null;
  }

  handleCommands(queue, shape) {
    if (this.lastEdge) {
      // Later quads only need two new vertices; the other two come from the shared edge
      if (queue.length !== 2) return;
      shape.contours.push([
        this.lastEdge[0],
        this.lastEdge[1],
        queue[1],
        queue[0],
      ]);
      this.lastEdge = [queue.shift(), queue.shift()];
    } else {
      // The first quad in the strip needs a full set of four vertices
      if (queue.length !== 4) return;
      shape.contours.push([queue[0], queue[1], queue[3], queue[2]]);
      queue.shift();
      queue.shift();
      // Remember the trailing edge so the next quad can share it
      this.lastEdge = [queue.shift(), queue.shift()];
    }
  }
}

Also, curves probably don't make sense when combined with a mode like QUADS, right? We could deal with that by having the strategy for that shape mode throw an error if it gets anything but a vertex command.

One more thought: right now the shape mode is only specified on the whole shape. In theory, this could be different for just one part of a shape (one begin/endContour(), from the user's perspective.) Maybe in the future we could let you add a shape kind as a parameter to beginContour() to temporarily override the whole shape one?

GregStanton commented 12 months ago

Replying to https://github.com/processing/p5.js/issues/6560#issuecomment-1832924215 by @davepagurek:

Thanks! You make a good point about creating a single construction process, and the queueing idea opens up a new possibility. Overall, it sounds like our next job is to figure out how to handle two cases: dimensions (2D or 3D) and kinds (POINTS, LINES, TRIANGLES, QUADS, etc.). I think we may be able to address both at once with a variation of the abstract factory pattern. I'll try to make my thinking clear enough for anyone following along.

Motivation

Let's break it down and consider the dimension problem first. Constructing a shape in 2D and 3D involves nearly the same process, and we'd like to avoid duplicate code. Since the only apparent difference is that different objects are created, we might try a creational design pattern: rather than merging the objects themselves—which leads to inapplicable object properties and extra complexity for 2D developers—we can merge the interfaces through which they're created. As it turns out, this also provides us with a way to handle the kind problem.

Handling dimension

We start by defining 2D and 3D factories. In our case, these are classes whose methods create objects, so the methods have names like createQuadraticSegment(). When beginShape() initializes the shape, it also sets the factory based on whether we're using a 2D or 3D renderer. Then vertex functions like quadraticVertex() call methods like factory.createQuadraticSegment(); segments created in this way are then added to the shape (I'll describe that in more detail soon). Note that the same interface factory.createQuadraticSegment() is used to create objects in 2D and in 3D, as we wanted.
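
A sketch of what that might look like (all names are illustrative, and the segment classes are assumed to exist along the lines of the interfaces discussed earlier):

class SegmentFactory2D {
  createQuadraticSegment(...vertices) {
    return new QuadraticSegment2D(...vertices);
  }
  // createLineSegment(), createBezierSegment(), createCurveSegment(), ...
}

class SegmentFactory3D {
  createQuadraticSegment(...vertices) {
    return new QuadraticSegment3D(...vertices);
  }
}

let factory;

function beginShape() {
  // however the implementation detects the active renderer; usingWebGL is a placeholder
  factory = usingWebGL ? new SegmentFactory3D() : new SegmentFactory2D();
}

// quadraticVertex() stays dimension-agnostic: it only talks to the factory.
function quadraticVertex(...args) {
  shape.addSegment(factory.createQuadraticSegment(...args));
}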

Handling kind

Now let's consider if this will still work when we add POINTS, LINES, TRIANGLES, TRIANGLE_STRIP, etc. To build on the approach we have so far, we can continue thinking of these as different kinds (classes) rather than different modes (algorithms). Then, just as we'll have classes such as QuadraticSegment2D, we can have classes like TriangleInStrip2D. The question is, how can we define the vertex functions so that they can create these new kinds of shape primitives, without giving each vertex function too many responsibilities?

First let’s consider how many different kinds of shape primitives each vertex function may need to create. The existing non-default kinds only work with vertex(), so right now, that’s the only vertex function that needs to deal with more than one kind. However, it's possible to imagine other combinations of shape kinds and vertex kinds that could be useful in the future.[^1] So, it'd be nice if our design could accommodate arbitrary combinations without requiring us to change the code inside existing vertex functions.

Now let’s consider how each vertex function might manage to create an unforeseen variety of shape primitives. That will work if the vertex functions can retrieve the right creator method based on the vertex kind and shape kind. We could try to extend our factory classes, but a possibly simpler option might be to replace the factories with Map objects. This way, a key of the form [vertexKind, shapeKind] could be directly paired with a particular creator function; the user-facing vertex functions could then use the map to create the right kind of object, without any hard-coded shapes.
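
For instance, since JavaScript Map keys are compared by reference, one way to key on the [vertexKind, shapeKind] pair is a composite string (everything below is a placeholder sketch, with plain objects standing in for the real primitive classes):

const primitiveCreators = new Map([
  ['vertex|PATH',       (v)          => ({ type: 'LineSegment',    vertices: [v] })],
  ['vertex|TRIANGLES',  (v)          => ({ type: 'TriangleVertex', vertices: [v] })],
  ['bezierVertex|PATH', (v1, v2, v3) => ({ type: 'BezierSegment',  vertices: [v1, v2, v3] })],
  // New [vertexKind, shapeKind] combinations can be registered here
  // without changing the vertex functions themselves.
]);

function createPrimitive(vertexKind, shapeKind, ...vertices) {
  const create = primitiveCreators.get(`${vertexKind}|${shapeKind}`);
  if (!create) {
    throw new Error(`${vertexKind}() is not supported when the shape kind is ${shapeKind}`);
  }
  return create(...vertices);
}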

Organizing construction and rendering

Once the vertex functions create the primitives, they need to add them to the shape, and once the shape is complete, it needs to be rendered. For these purposes, we could give each primitive object addToShape() and render() methods. The vertex functions would call addToShape() on the appropriate shape primitive after creating it. After that, if each primitive shape object stores a reference to the shape and rendering context to which it belongs, the user-facing endShape() can call the render() methods directly. The render methods should be able to handle any operations that are specific to the type of primitive (e.g. a moveTo() for Quad2D); this should be true even if the operations rely on state (the shape will be fully constructed by the time an operation like moveTo() is called, and the render methods will have access to the shape). There are some aspects I’m not sure about (see Question 1 below), but I wonder if this could be a really nice way to organize things.

This way, if a user is reading the p5 website’s reference pages and clicks to see the source code, it should be much easier for them to navigate it and to understand the parts they’re interested in. When they look up the source for the user-facing shape and vertex functions, they'll see that these direct the construction and rendering of 2D and 3D shapes from one place, without if…else statements and without forwarding requests to different renderers. Beyond that, if they need to understand the details for a particular shape primitive, they'll find all the code encapsulated within that shape’s own class.

Implementing construction and rendering

I'm assuming for now that we're making the shape accessible to the primitive shape classes. In that case, as soon as the user input is encountered, we can push it through the appropriate addToShape() method and into the shape. In some cases, it will be necessary to use the shape state to determine what to do with the input, but that’s the job of addToShape(). (See Question 2 for a possible alternative based on a command queue.)

For example, a new CurveSegment2D object will have just one vertex in it. Its addToShape() method can check to see if the current contour segment is a curve segment, and if so, it can add the new vertex to that segment.[^2] If not, it can push the newly created (and still incomplete) curve segment directly into the shape. In a similar way, the render() method will know whether it’s attached to a shape created in 2D or 3D. So, it will know whether it needs to give commands to the 2D rendering context or whether it should produce vertices for the 3D case.
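
A sketch of that addToShape() logic for the Catmull-Rom case (simplified, with illustrative accessor names):

class CurveSegment2D {
  constructor(vertex) {
    this.vertices = [vertex]; // a new curve segment starts out with a single vertex
  }

  addToShape(shape) {
    const contour = shape.currentContour(); // hypothetical accessor for the contour being built
    const last = contour.primitives[contour.primitives.length - 1];
    if (last instanceof CurveSegment2D) {
      last.vertices.push(this.vertices[0]); // extend the existing spline segment
    } else {
      contour.primitives.push(this); // otherwise start a new (still incomplete) curve segment
    }
  }
}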

Summary

  1. Separate 2D and 3D vertex objects, in keeping with the single responsibility principle.
  2. Support different kinds of shapes by extending the list of primitive shape classes.
  3. Combine construction and rendering for 2D, 3D, and all kinds in the following way:
     a. Use factories or maps to create the right type of shape primitive at runtime.
     b. Use classes to encapsulate primitive shape data with the construction and rendering details specific to that data.

Questions

  1. I still haven’t learned enough about 3D rendering in p5 with WebGL, WebGL2, and potentially WebGPU. So I’m not exactly sure if a render() method on a common interface for 2D and 3D makes sense. After the necessary vertices are generated for the 3D case, would render() actually be able to render them, or would it need to pass the vertices somewhere else?
  2. Based on the general direction of @davepagurek’s last post, we might replace addToShape() with addToQueue(). Would this be better? This should eliminate the need to keep checking the shape state each time a function like curveVertex() is executed. Since a curve segment can be built from any number of vertices greater than three, we may need to store all the commands for the whole shape somewhere (following the command pattern). Then endShape() could invoke functions that construct and render the shape. This might be a premature optimization. I’d need to think more to understand if this would even work, and what the precise trade-offs might be, but I thought it’d be better to post what I have so far.
  3. Right now, contours are closed by default. This may cause problems with our new approach. If we require CLOSE to be passed to endContour(), that would bring existing contours in line with the more general definition we're adopting, and it would make endContour() and endShape() work similarly. Unfortunately, that would be a breaking change. Would we be able to merge it? I asked this question in an update to an earlier comment, but I’m not sure if everyone noticed the update.
  4. What do you all think about this overall approach? Does it have any significant flaws that I’m overlooking?

Future considerations

Maybe in the future we could let you add a shape kind as a parameter to beginContour() to temporarily override the whole shape one?

Good point! Do you have any use cases in mind? If greater generality can be achieved without increasing complexity of one form or another, then we almost don’t even need to come up with use cases. But if it does, it’d be nice to know some concrete benefits to point to. I have some initial ideas but don’t want this comment to get too long, so I’ll defer this for now. We may also want to consider if certain combinations wouldn’t make sense, and if so, how to handle that.

Once we settle on an overall approach, I guess the next step is to make the necessary adjustments to the previous iterations of the interface that we discussed.

[^1]: Bézier triangles and Bézier quadrilaterals may top the list. Catmull-Rom splines might be extended similarly. If we allow for new types of shapes or vertices, NURBS curves and surfaces might be worthy of consideration. Or, if we're feeling really creative, maybe even spherical triangles or elliptical triangles with arcVertex()? Hmmm... not sure about that one.

[^2]: Since the new curve segment object is not added to the shape in this case, we could avoid creating a whole object around the vertex, and just push it directly into the existing segment. That's how the original proof of concept works, but that was only possible because the construction logic was included directly inside of curveVertex(), not in a separate class method like addToShape(). However, any memory allocated for the temporary object should be reclaimed after curveVertex() executes, so this doesn't seem like a problem. Also, putting the logic inside an addToShape() method has a major benefit: vertex() isn’t responsible for constructing a long list of primitives (POINTS, LINES, TRIANGLES, etc.). In particular, vertex() won’t need to be changed if a new shape primitive is added.

Update: Changed TriangleStrip2D to TriangleInStrip2D and revised some wording in the section "Organizing construction and rendering," for clarity.

davepagurek commented 12 months ago

2D and 3D segment interfaces

I’m not exactly sure if a render() method on a common interface for 2D and 3D makes sense. After the necessary vertices are generated for the 3D case, would render() actually be able to render them, or would it need to pass the vertices somewhere else?

Right, I'm not confident that we'd be able to have a common render() interface (especially because "rendering" in 3D might mean drawing to the screen, or adding to a p5.Geometry if done within a builder class.) Because WebGL has to implement clipping itself, after all the shape outlines have been created, we have to postprocess them to convert them to triangles that can then be rendered. I think changing that process should be out of scope for this in order to limit the size of the change.

Thinking through the implications of this, I feel like it's conceptually simpler to be able to have methods that convert to different output formats: toVertices(): Array<Vertex> and toContext2D(ctx: CanvasRenderingContext2D): void.

If we had a common render() interface, 2D and 3D things could have separate implementations of the same method. The trouble with having separate methods is that we'd either need all 2D and 3D segments to implement all methods, which would result in duplication between 2D and 3D implementations, or we'd have to introduce a lot of abstraction all the way up our chain of classes to be able to handle the fact that in some cases we have one method available and in other cases we have another. While separation of concerns is a good goal, in this case I feel like it maybe wouldn't be worth the extra complexity?

I personally am not opposed to having just one dimensional type per segment type, which implements both output formats in one spot. That would look something like:

// Probably will extend a base class that handles some construction stuff later
class LineSegment {
  shape: Shape;
  index: number;
  vertex: Vertex;

  constructor(shape: Shape, index: number, vertex: Vertex) {
    this.shape = shape;
    this.index = index;
    this.vertex = vertex;
  }

  toVertices() {
    return [this.vertex];
  }

  toContext2D(ctx: CanvasRenderingContext2D) {
    if (this.index === 0) {
      ctx.moveTo(this.vertex.position.x, this.vertex.position.y);
    } else {
      ctx.lineTo(this.vertex.position.x, this.vertex.position.y);
    }
  }
}

Downsides:

The benefits of this are:

Handling kind

I think a map is a good idea. Maybe rather than putting both the vertex and kind in the key, which means having to enumerate a lot of possibilities, we can just have two maps? The kind function could then look up the vertex type in the map if it wants the default behaviour, or override it with custom behaviour (e.g. a POINTS kind could add a special PointSegment given an incoming vertex(), and throw an error for anything else.)

addToShape vs addToQueue

I think defaulting to just using the shape can make sense, and we can keep the queue idea in our back pocket if we find the implementations are getting pretty complex.

Closing contours

I think as long as our internal contour structure doesn't care if it's closed or not, then we can continue making the user-facing endContour() auto-close itself. That way the main path can continue to be implemented internally as a contour, and we don't change the behaviour of existing sketches that use begin/endContour. In the future, we could add an endContour(OPEN) flag if we want, so we haven't permanently closed the door on non-closed contours.

Points

POINTS feels like an outlier currently: in 3D mode they are not processed at all, each is drawn as a particle without any geometry processing. In 2D mode, we implement these as circle paths (so they might actually end up being able to clip things? But this seems like an unintended side effect if they do, and probably not a behaviour we need to preserve.)

Maybe we should implement these as contours with just one PointSegment. Then in our toContext2D() implementation, that can draw a small circle, and in toVertices we can choose to either ignore them (and add a toPoints method to extract just the points), or output a length-1 contour that we can postprocess however we like to treat them separately.
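
A sketch of that PointSegment idea (the radius handling is an assumption here, e.g. half the stroke weight):

class PointSegment {
  constructor(vertex, radius) {
    this.vertex = vertex;
    this.radius = radius;
  }

  toContext2D(ctx) {
    // 2D mode: draw the point as a tiny filled circle, as p5 does today
    const { x, y } = this.vertex.position;
    ctx.moveTo(x + this.radius, y);
    ctx.arc(x, y, this.radius, 0, 2 * Math.PI);
  }

  toVertices() {
    // WebGL mode: emit a length-1 "contour" that can be post-processed
    // (e.g. drawn as a particle) rather than tessellated
    return [this.vertex];
  }
}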

GregStanton commented 12 months ago

Hi all!

We're going to have an online video call to hash out an approach that will allow us to accommodate composite paths, contours, the 2D and 3D cases, shape kinds, and visitors; @davepagurek and I will be there, and anyone who is interested is welcome to join!

Date: Thursday, December 7, 2023
Time: 5:00 PM Eastern Time (UTC-5)
Place: BitPaper

The BitPaper platform we're using includes a shared online whiteboard, and it runs in the browser, so no installation is required. You can use the link to visit the meeting room and take a look at the agenda, but the pages are locked until the meeting goes live, so no edits will be possible until then. In case anyone is interested and can't make it, we'll post meeting notes here.

If you're interested, we'd love it if you could comment here, so that we know how many people to expect. But, if you can't let us know in advance, you're still welcome to attend.

Thanks!

Update: When the meeting opens, you may need to sign in to BitPaper. You should be able to join with a Google or Meta account, or with an email and a password. It's free for you to use.

xanderjl commented 12 months ago

I'd love to be a fly on the wall if you'd have me @GregStanton 👀

GregStanton commented 12 months ago

Of course @xanderjl! Looking forward to it. I just updated the meeting announcement with the actual link to the meeting room.

Gaurav-1306 commented 12 months ago

I would also like to join the meeting.

GregStanton commented 11 months ago

Hi all! I put together a new approach based on all the discussion so far, and I updated it with feedback based on the meeting.

Features

This plan supports the following features:

Goals

We can check any design changes against the following goals:

Design overview

After beginShape() initializes a shape object, each type of vertex function calls its counterpart on that object. For example, vertex() calls shape.vertex().

Each vertex method attached to shape will create a vertex object and build the appropriate primitive shape around it, depending on the shape kind; the appropriate primitive constructor will be selected from primitiveShapeCreators (a wrapper class for a Map object). Once a shape's vertex function creates the appropriate primitive, it has the primitive add itself to the shape:

  let vertex = createVertex(this.vertexStates); // vertex states include position, fill, etc.
  let primitiveShapeCreator = primitiveShapeCreators.get(vertexKind, shapeKind);
  let primitiveShape = primitiveShapeCreator(vertex);
  primitiveShape.addToShape();

Then endShape() closes the shape if CLOSE is passed, and it calls drawShape() on the current renderer (a 2D or 3D renderer). The drawShape() method loops over the shape’s primitives, letting a visitor produce the output for each one, and then it renders this output. Specifically, here's how drawShape() works in each case:
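
As a rough illustration of the 2D case (using the visitor classes named later in this comment; treating visitor.path as a Path2D is an assumption, per Adjustment 2 in the meeting notes further down the thread):

// Sketch of the method body on p5.Renderer2D
function drawShape(shape) {
  const visitor = new PrimitiveToPath2DConverter();
  for (const contour of shape.contours) {
    for (const primitive of contour.primitives) {
      primitive.accept(visitor); // each primitive adds its drawing commands to visitor.path
    }
  }
  this.drawingContext.fill(visitor.path);
  this.drawingContext.stroke(visitor.path);
}

p5.RendererGL would instead apply something like PrimitiveToVerticesConverter and feed the resulting contour vertex arrays into its existing geometry pipeline.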

Note: primitiveShapeCreator(vertex1, vertex2, ...) may also be supported, for more general usage

Shape construction: Shape -> Contours -> Primitives -> Vertices

A shape consists of contours (subshapes), which themselves consist of shape primitives. While the shape primitives are the most basic objects that can be rendered, they may be composed of one or more vertices. The outline below indicates how the design will look with the planned API for custom shapes in p5.js 2.0.

interface Shape {
  kind: string // constants.LINES, etc.
  contours: Array<Contour>

  // shape and contour methods
  beginShape(kind: string)
  endShape(mode: string, count: number)
  beginContour(kind: string)
  endContour(mode: string)

  // vertex constructors (these may support an x, y, u?, v? signature too)
  vertex(x: number, y: number, z?: number, u?: number, v?: number)
  bezierVertex(x: number, y: number, z?: number, u?: number, v?: number)
  splineVertex(x: number, y: number, z?: number, u?: number, v?: number)
  arcVertex(x: number, y: number, z?: number, u?: number, v?: number) // new in p5.js 2.0

  // vertex setters
  fill(color: p5.Color)
  stroke(color: p5.Color)
  normal(x: number, y: number, z?: number)
}

interface Contour {
  kind: string // set to shape kind unless a kind is provided to `beginContour()`
  primitives: Array<ShapePrimitive>
}

// abstract base class
interface ShapePrimitive {
  vertexCount: number // getter that returns current number of vertices in the primitive
  vertexCapacity: number // getter that returns max vertex count from a private class field
  accept(visitor: Visitor)
  addToShape(shape: Shape) // acts based on whether vertexCount < vertexCapacity for shape's current primitive
}

interface Vertex {
    position: p5.Vector //set via a vertex function
    textureCoordinates: p5.Vector //set via a vertex function
    normal: p5.Vector // set via normal()
    fill: p5.Color // set via fill()
    stroke: p5.Color // set via stroke()
    toArray(): Array<number> // returns vertex data in a 1D array
    // add lerp() or other interpolation methods, like p5.Vector has?
}

Notes:

  1. Vertex instances may generate properties dynamically, depending on the renderer.
  2. The Shape class is a slim version of Processing's PShape class. It'd work with only the kind and contours properties, but the extra methods make it possible to completely separate shape construction from shape rendering. The result is a more extensible design. (For example, it'd be easier to add SVG-export functionality. Without rendering a path, it could be stored in a Shape instance and converted to an SVG path.)

Path primitives

Paths are created when the contour kind is PATH, which is the default. Each path consists of a single anchor at the start, followed by one or more segments.

Anchors

An Anchor is the starting point for the segments in a path. It's created when a contour of the PATH kind is empty (the contour kind field will be implemented with a getter that will return EMPTY_PATH in this case).

In the typical case, a path segment will begin at the end of the preceding segment. However, the very first segment lacks a predecessor; the anchor fills this role. (For example, PrimitiveToContext2DConverter would convert the anchor to a moveTo() command to ensure that the path starts from the right place.)

Segments

Segments are equivalent to subpaths in the native canvas API and SVG.

interface Segment extends ShapePrimitive {
  index: number // used to get start vertex from preceding segment
  getStartVertex(): Vertex
  getEndVertex(): Vertex
}

Isolated primitives

Isolated primitives aren't connected to their neighbors. They're created when the contour kind is POINTS, LINES, TRIANGLES, or QUADS.

Since isolated primitives aren't connected to their neighbors, these primitive classes are self contained. For example, Line directly contains two vertices, instead of pulling its start vertex from its predecessor. This distinguishes Line from LineSegment.

Tessellation primitives

Tessellations consist of shapes that cover a region in 2D or a surface in 3D, with no overlaps and no gaps. They're created when the contour kind is TRIANGLE_FAN, TRIANGLE_STRIP, or QUAD_STRIP.

Note that the classes have names like TriangleStrip instead of TriangleInStrip; in other words, the primitive is the entire strip, rather than a single triangle within a strip.

Future primitives

There are other combinations of vertex kinds and shape kinds that lead to meaningful primitives. For example, in a contour with a kind set to TRIANGLES, the bezierVertex() function could produce a Bézier triangle, represented by a BezierTriangle primitive. These could potentially be implemented in the future.

Shape handling: Visitor classes

A primitive visitor can visit different primitives and use their data, for example to produce native Canvas API drawing commands. Putting such functionality inside a separate visitor class means that a 3D primitive doesn't require a method for the 2D Canvas API, for example. Visitor classes are also independent of the renderer, which makes it easier to add new features that could potentially work with multiple renderers. A few examples of classes that might implement the PrimitiveVisitor interface are below.

Class: PrimitiveToPath2DConverter

Class: PrimitiveToVerticesConverter

Class: PointAtLengthGetter

interface PrimitiveVisitor {
  // path primitives
  visitAnchor(anchor: Anchor)
  visitLineSegment(lineSegment: LineSegment)
  visitQuadraticSegment(quadraticSegment: QuadraticSegment)
  visitBezierSegment(bezierSegment: BezierSegment)
  visitCurveSegment(curveSegment: CurveSegment)

  // isolated primitives
  visitPoint(point: Point)
  visitLine(line: Line)
  visitTriangle(triangle: Triangle)
  visitQuad(quad: Quad)

  // tessellation primitives
  visitTriangleFan(triangleFan: TriangleFan)
  visitTriangleStrip(triangleStrip: TriangleStrip)
  visitQuadStrip(quadStrip: QuadStrip)
}

Note: If a particular visitor class doesn't apply to a certain kind of primitive, it could either omit the method corresponding to that class, or it could throw an error.

Updates: This comment is being kept up to date with any high-level design changes made during implementation.

davepagurek commented 11 months ago

That update looks great, @GregStanton! It looks like it will handle all the situations we've thought of so far!

On the call today we can go over this all again and answer any questions and clarifications other contributors might have.

GregStanton commented 11 months ago

Updates from the meeting on December 7, 2023:

Thanks so much to everyone for attending! During the meeting, we adjusted the latest approach and cleared up our main questions. I went ahead and incorporated the adjustments into my previous comment. It looks like we're ready to go ahead with the approach described there. I'll summarize the key points below. I'm filling in some details myself, so if anyone notices any errors, please let me know!

Summary of questions

Summary of decisions

It looks like we're going to go ahead with this interface with a few adjustments.

Adjustment 1: Based on the discussion, it was determined that the tessellation primitives can implement the ShapePrimitive interface directly, without the need for an extendByVertex() method. The CurveSegment primitive will not need this type of method either. The idea is that we'll allow primitives of the same type to have write access to each other's data. I went ahead and edited the proposal above to include these changes.

Adjustment 2: In the 2D case, we may want to output commands to the Path2D interface rather than the rendering context. This may allow us to issue the basic rendering commands and any clipping commands using the same kinds of paths, since the native canvas API's clip() method accepts a Path2D object.

Next steps

Will it be helpful to break up the refactoring into smaller tasks, and then possibly assign those to individual contributors? Or would it make sense for someone to take on the whole thing? Other than this project management aspect, it seems that we're ready to start on the implementation. After that, since it will be a big change to the codebase, we'd like to use the new visual testing system (once it's ready), to prevent regressions.

capGoblin commented 11 months ago

Hi everyone,

Apologies for the delay in updates. Here's the progress so far:

I've implemented the vertex functions for both 2D and 3D, along with visitors and contours, and have started work on shape kinds.

If possible, I'll aim to complete shape kinds and then create a PR for comprehensive review; otherwise, I'll create one soon with whatever is done so far.

GregStanton commented 11 months ago

Hi @capGoblin! No problem at all, and thanks for all your work! Did you see the update in the body of this issue about the new approach, the task list, and the new branch?

GregStanton commented 11 months ago

Pinging @Qianqianye: When I go to create new issues and link them to the task list in the body of this issue, I'd like to apply the "Good First Issue" label to at least some of them (I'll provide clear instructions to make the issues simpler). Would you be okay with giving me triage access so that I can apply labels?

I thought about asking someone else with access to apply the labels for me, but I was helping a new contributor on Discord, and I noticed that there currently aren't any issues in the p5.js repo with the "Good First Issue" label. That makes it harder for beginners to figure out where to start. If at least a few more people have triage access, there's a better chance that simpler issues will receive this label.

In general, I'm managing this refactoring issue and related issues, along with the discussions, and I've started reviewing PRs on some of the issues I started, so having triage access should be pretty helpful.

capGoblin commented 11 months ago

Hey @GregStanton, I overlooked the last part of the comment. That plan sounds great! In the meantime, I'll focus on addressing some other issues.

perminder-17 commented 11 months ago

> Pinging @Qianqianye: When I go to create new issues and link them to the task list in the body of this issue, I'd like to apply the "Good First Issue" label to at least some of them (I'll provide clear instructions to make the issues simpler). Would you be okay with giving me triage access so that I can apply labels?
>
> I thought about asking someone else with access to apply the labels for me, but I was helping a new contributor on Discord, and I noticed that there currently aren't any issues in the p5.js repo with the "Good First Issue" label. That makes it harder for beginners to figure out where to start. If at least a few more people have triage access, there's a better chance that simpler issues will receive this label.
>
> In general, I'm managing this refactoring issue and related issues, along with the discussions, and I've started reviewing PRs on some of the issues I started, so having triage access should be pretty helpful.

I apologize for the ping, @Qianqianye. I am an active contributor to p5.js, and I've been discussing strategies with @GregStanton on Discord to expand our organization by involving more contributors. We believe that labeling certain issues as "Good First Issue" would be greatly beneficial for new contributors. I've identified several suitable "good first issues" and am willing to label them all. I'm considering two options: I could reach out to @davepagurek on Discord and ask him to label the issues, but I don't want to bother him excessively as there are many. Alternatively, I could be granted triage access to label the issues independently, providing a smoother start for new contributors. I'm open to both options, and I leave the decision to you, respecting your judgment. Thank you :)

limzykenneth commented 11 months ago

@GregStanton I've given you triage access to this repo.

GregStanton commented 11 months ago

> @GregStanton I've given you triage access to this repo.

Thanks @limzykenneth!

limzykenneth commented 10 months ago

@GregStanton Not sure what point you are at with this feature, but I'm thinking of moving this to be a proposal in the 2.0 RFC to give it a bit more flexibility, in terms of not having to consider breaking changes as much. Let me know if that sounds good to you; you can summarize what you have here as the proposal, so there's no need to write something from scratch.

GregStanton commented 10 months ago

Thanks @limzykenneth! There's been some good progress so far. I've sorted out a number of the smaller tasks in the task list, and we've worked out the overall implementation. However, there are some major updates that I'd like to make, in light of the move to p5.js 2.0.

  1. I proposed a significant API change in a related issue, based on an idea from @davepagurek.
  2. I worked out initial ideas for a minimal p5.Shape class that would expose some of the essential functionality we've planned. It would also make the design more extensible for add-on libraries. Addressing this now would be good because there may be some consistency issues with p5.Geometry to resolve.

In short, I think this issue definitely satisfies our criteria for the 2.0 proposals, but I need to figure out how to incorporate the two updates above. Also, the comments on the current issue contain a long brainstorming discussion that led to the implementation design, but newcomers shouldn't need to read through all that.

So, it might make sense for me to submit a clean, updated issue using the new 2.0 template. (I could tag everyone who was originally interested in this issue and link back to it for context.) Do you mind if I spend some time this week to figure out a good approach?

GregStanton commented 10 months ago

Okay, I'm ready for this issue to be listed as an RFC proposal, @limzykenneth. I think this should be separate from the proposal for the new vertex API, since the refactoring could be done for either API.

GregStanton commented 1 month ago

Some thoughts about an explicit name for the default shape kind in 2D and 3D are below. Any feedback would be great!

Current situation: It looks like TESS is the explicit name for the default shape kind in 3D mode, but there is no explicit name for the default shape kind in 2D mode (it's set to null). The reference doesn't specify the default in either case, and the explanation for TESS seems unclear.

Solution? We'll want to address this as part of the refactoring, in accordance with proposal #6679. That proposal mentions SEGMENTS as a possible name for the default shape kind in 2D and 3D (we would eliminate TESS). However, I'm now thinking this name doesn't quite work. How about SEGMENT_CHAIN? Although CHAIN would be more concise, I'd probably go with the more precise term, especially since it fits well with existing names like TRIANGLE_FAN, etc.

Here's why I'm proposing SEGMENT_CHAIN in place of SEGMENTS:

davepagurek commented 1 month ago

@jules-kris, you're looking into explicit names for default values right now, let me know if you have any feedback on the name for this one!

I think SEGMENT_CHAIN is a good working title while we think, and we can switch the name if we think of something else. I guess in the default mode, you're defining a shape by creating the contours or tracing the outlines of shapes? Both "contour" and "outline" are kind of overloaded terms, though, hmm. Silhouette might be a less overloaded kind-of-synonym? And some of the other modes also do that, e.g. TRIANGLES or QUADS -- maybe that's okay, though, and those can be thought of as special cases for when you want to draw the silhouettes of a bunch of the same kind of shape in a row?

GregStanton commented 1 month ago

@davepagurek: Thanks Dave! I appreciate the brainstorm!

My initial feeling about "outline" and "silhouette" is that these both seem to refer to marking the boundary of a 2D shape, which is typically a closed curve. To be fair, beginShape() does use "shape" to include non-closed curves... But I'm not sure, especially if we want the same default for both 2D and 3D. For example, we might draw a helix in 3D mode; it's hard for me to see that as an outline or a silhouette. Also, silhouette is hard to spell.

But it's great to come up with lots of ideas. The more the better. I'll let your ideas percolate in my brain and see if they inspire something new!

GregStanton commented 1 month ago

Oh wait. How about PATH? Our usage would be similar to usage in related contexts. This term generally refers to any curve (open or closed), potentially made of multiple pieces (linear or nonlinear). It also doesn't seem to conflict with anything else in p5.

Paths in related contexts:

Does anyone see any downsides to this? Perhaps we'd want to use "path" to mean something else in the future? I doubt that's a concern, though. The default shape kind is the closest thing in p5 to the concept of a path. In fact, I just noticed that I referred to these default shapes as "composite paths" in the title of this issue haha. I also previously described our concept of segments as "equivalent to subpaths in the native canvas API and SVG." Instead of classes for LineSegment, etc. we could have LinePath etc. (This is distinct from the Line class.)

davepagurek commented 1 month ago

Path sounds pretty good to me, and it's definitely easier to spell! I suppose it's a little overloaded in the graphics space because, e.g., SVG and the native canvas context have objects called paths rather than modes, but p5 doesn't have any objects or other functions using that name, and this shape mode works pretty similarly to the SVG/native canvas path-drawing APIs.

GregStanton commented 1 month ago

Thanks @davepagurek! Good point about objects vs. modes. Hopefully that doesn't cause too much confusion. For what it's worth, we will have path objects under the hood (LinePath, BezierPath, SplinePath), although I don't expect we'll expose those to the user, at least not for now.

GregStanton commented 1 month ago

One more idea for the name of the default shape kind:

If a future version of p5 were to allow all shape kinds to work with all vertices, then we'd be forced to interpret LINES more generously. In some contexts, "line" is indeed used to mean "curve" (we have more precise terms "rectilinear" and "curvilinear" for this reason). Then LINE might work even better than PATH as the name of the default shape kind. I thought I should add this observation here, for the record, but I'm not sure yet which name is better.

GregStanton commented 4 weeks ago

Update: After a quick real-time discussion with @davepagurek, it looks like PATH and LINE are our top candidates for the name of the default shape kind. We put a little thought into this, since users may sometimes need to explicitly name the default shape kind, due to the new way contours will work.

@davepagurek: After writing down our observations (see below), I'm now leaning toward PATH rather than LINE. Do you have a preference?

@jules-kris: If you have any feedback, that would be great! If you want to skip the foregoing discussion and just focus on this comment, it should summarize the important things.

General criteria: It'd be nice to fulfill as many of the criteria below as we can (I also adapted these into a general list of criteria for #6679).

  1. Descriptiveness
    • Implies straight, curved, or composite
    • Implies open or closed
    • Implies planar (fits in a 2D sketch) or non-planar (requires a 3D sketch)
  2. Consistency
    • Is free of conflicts with names of other shape kinds
    • Is free of conflicts with p5 API in general
    • Is free of conflicts with other APIs (mainly the Canvas and SVG APIs)
    • Is free of conflicts in related contexts (art, math, ...)
  3. Simplicity
    • Familiar to beginners
    • Easy to spell
    • Concise
  4. Durability (in the future, its scores on the other criteria will not be reduced)

Assessment of PATH:

Notes:

  1. The consistency would be excellent, except that Canvas and SVG paths are user-facing objects, rather than modes. For now at least, p5's path objects will just be internal. (Specifically, a path would be implemented as a contour object with the PATH kind.)
  2. The durability would be excellent, except that the name LINE would be more consistent with LINES in the future, if p5 were to allow all shape kinds to work with all vertex types.

Assessment of LINE:

Notes:

  1. Descriptiveness isn't "Excellent" since "line" is often used to specifically mean a straight line, even though "line" is used to refer to straight or curved lines in some contexts, including in math and in art. As @davepagurek noted in conversation, it also may imply that it makes only stroked shapes, as in line art.
  2. Consistency is "Average" since LINE conflicts with the meaning of line(), and since it conflicts with internal class names (at least currently, and resolving these conflicts might require longer class names).
  3. Durability is "Excellent" rather than "Good" because it would fit well with the existing LINES name if all shape kinds eventually work with all vertex types.

Plan for now: I'll go with PATH. We can potentially change this up until the 2.0 release.
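To make that concrete, here's a rough usage sketch assuming PATH is adopted as the explicit name of the default shape kind (the constant name is a working title, and the vertex calls follow the current 1.x signatures purely for illustration):

  // Explicitly naming the default shape kind; this would behave the same
  // as calling beginShape() with no argument.
  beginShape(PATH);
  vertex(20, 20);
  bezierVertex(40, 20, 50, 50, 65, 50); // primitives can be mixed in one path
  vertex(80, 80);
  endShape();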

golanlevin commented 3 weeks ago

I was about to file a new issue, then found this one. As you are aware, mixed vertex types are not playing well together. I'm currently working on an SVG-export library specifically for pen-plotting, to which this issue is directly relevant. It seems like y'all are far ahead on thinking about this, so I'm just parking this here as an illustration of the current state of things (v.1.11.0) for mixed vertex types. Thanks for all of your effort on this.

[Image: problems-with-mixed-vertices]

A live version of this demo is here: https://editor.p5js.org/golan/sketches/gGBFMTvaI

let correctImage;

function setup() {
  createCanvas(800, 500);
  correctImage = loadImage("correct.png"); 
}

function draw() {
  background(220);

  text("Mixed vertex types don't play well together:", 75, 70); 

  push();
  translate(0,0); 
  text("Bezier only", 75, 90); 
  beginShape();
  vertex( 75, 100); 
  vertex( 80, 100); 
  bezierVertex(100,100, 110,130, 125,130);
  bezierVertex(140,130, 150,100, 170,100);
  vertex(175, 100);
  vertex(175, 125);
  vertex(200, 150);
  vertex(175, 175);
  vertex(175, 200);
  vertex(150, 205);
  // quadraticVertex(125,140, 100,195); // commented out
  vertex(100, 195);
  vertex( 75, 200);
  vertex( 75, 175); 
  vertex( 75, 125); 
  endShape(CLOSE); 
  pop(); 

  push();
  translate(150,0); 
  text("Quadratic only", 75, 90); 
  beginShape();
  vertex( 75, 100); 
  vertex( 80, 100); 
  //bezierVertex(100,100, 110,130, 125,130); // commented out
  //bezierVertex(140,130, 150,100, 170,100); // commented out
  vertex(175, 100);
  vertex(175, 125);
  vertex(200, 150);
  vertex(175, 175);
  vertex(175, 200);
  vertex(150, 205);
  quadraticVertex(125,140, 100,195); 
  vertex(100, 195);
  vertex( 75, 200);
  vertex( 75, 175); 
  vertex( 75, 125); 
  endShape(CLOSE); 
  pop(); 

  push();
  translate(300,0); 
  text("Bezier + Quadratic (Wrong)", 75, 90); 
  beginShape();
  vertex( 75, 100); 
  vertex( 80, 100); 
  bezierVertex(100,100, 110,130, 125,130);
  bezierVertex(140,130, 150,100, 170,100);
  vertex(175, 100);
  vertex(175, 125);
  vertex(200, 150);
  vertex(175, 175);
  vertex(175, 200);
  vertex(150, 205);
  quadraticVertex(125,140, 100,195); 
  vertex(100, 195);
  vertex( 75, 200);
  vertex( 75, 175); 
  vertex( 75, 125); 
  endShape(CLOSE); 
  pop(); 

  push();
  translate(500,0); 
  text("Bezier + Quadratic (Correct)", 75, 90);
  image(correctImage, 75,100); 
  pop(); 

  text("Adding a curveVertex alters the entire shape, but shouldn't:", 75, 270); 

  push();
  translate(0,200); 
  text("2 vertexes on left", 75, 90); 
  beginShape();
  vertex( 75, 100); 
  vertex( 80, 100); 
  vertex(175, 100);
  vertex(175, 125);
  vertex(200, 150);
  vertex(175, 175);
  vertex(175, 200);
  vertex(150, 205);
  vertex(100, 195);
  vertex( 75, 200);
  vertex( 75, 175); 
  vertex( 60, 160); // regular vertex added
  vertex( 90, 140); // regular vertex added
  vertex( 75, 125); 
  endShape(CLOSE); 
  pop(); 

  push();
  translate(150,200); 
  text("2 curveVertexes on left instead (Wrong)", 75, 90); 
  beginShape();
  vertex( 75, 100); 
  vertex( 80, 100); 
  vertex(175, 100);
  vertex(175, 125);
  vertex(200, 150);
  vertex(175, 175);
  vertex(175, 200);
  vertex(150, 205);
  vertex(100, 195);
  vertex( 75, 200);
  vertex( 75, 175); 
  curveVertex( 60, 160); // curveVertex added instead
  curveVertex( 90, 140); // curveVertex added instead
  vertex( 75, 125); 
  endShape(CLOSE); 
  pop(); 
}

GregStanton commented 3 weeks ago

Thanks for sharing @golanlevin! It's great to hear that this project may help with the library you're working on! Would you be interested in doing some user testing with your library once everything is ready? If so, you may want to check out the new vertex-function API for p5.js 2.0, since it looks like we're going to be switching to that.

golanlevin commented 3 weeks ago

@GregStanton Thanks for pointing me to the new vertex API for p5 v.2! The SVG-export library I'm making is specifically intended for p5.js, so this is directly relevant to me. I appreciate that there will be some breaking changes, and I'll check the roadmap closely!

GregStanton commented 3 weeks ago

@golanlevin Sure thing! I'm glad that was helpful :)

golanlevin commented 1 week ago

@GregStanton My SVG-export library is now public: https://github.com/golanlevin/p5.plotSvg

I'm keenly aware that the new vertex-function API (which is great) for p5 v.2 will cause some breaking changes in my library. (Is testable code available somewhere, or is it still in the specification-design stage?)

GregStanton commented 1 week ago

@golanlevin Wow, looks cool! The README is quite helpful.

I'm glad to hear you like the new vertex-function API in #6766! Would you mind adding a comment with your feedback on that issue? It's nice to have as much community feedback as possible, and it helps to be able to see it in one place. A sentence would be fine :)

We're actively working toward the new API in the dev-2.0 branch, but we're not there yet. Some detailed aspects of the design (e.g. for splines) are still being discussed.

For a general sense of everything that's being worked on, there's a public project tracker for p5.js 2.0, as you may have noticed. I'm not sure if the timeline is public yet, but the plan has been to have something working by the end of 2024.

I think @davepagurek will know whether there's a more specific date we can comment on. He may also have a better idea about a compatibility module. (I think the idea is to continue supporting old features in a compatibility add-on, for some time after 2.0 is released.)

limzykenneth commented 1 week ago

@golanlevin I had a quick glance at the code of your library, and I think for the most part there probably won't be breaking changes; if there are, they should be easy to fix.

I plan to go through at least some of the add-on libraries at a later stage to help start their transition to the newer syntax. I can take a close look at yours at that stage as well, if you like.