Closed jtran closed 6 days ago
Example of a multi-segment path constrained to a distance.
```
// We'll use two segments, but there could be any number.
p1 = screen_point(x: 10, y: 12)
p2 = screen_point(x: 20, y: 13)
l1 = line(p1, p2)
p3 = screen_point(x: 21, y: 23)
l2 = line(p2, p3)

// The required span distance. Maybe this is a parameter.
d = 10

// Assuming they're not axis-aligned:
dist(p1, p3) = d

// Return the Euclidean distance between two 2D points.
fn dist(p1, p2) {
  return sqrt((p2.x - p1.x)^2 + (p2.y - p1.y)^2)
}
```
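As a sanity check, here is the same setup in executable Python (the function names mirror the pseudocode but are stand-ins, not real KCL):

```python
import math

# Stand-in for screen_point() from the pseudocode above.
def screen_point(x, y):
    return {"x": x, "y": y}

# 2D Euclidean distance; the points here have no z-coordinate.
def dist(a, b):
    return math.hypot(b["x"] - a["x"], b["y"] - a["y"])

p1 = screen_point(x=10, y=12)
p3 = screen_point(x=21, y=23)
d = 10

# With only the default UI coordinates, the span constraint doesn't hold
# yet; the solver would have to move points until dist(p1, p3) = d.
residual = dist(p1, p3) - d  # about 5.56
```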
The only thing is, I have no idea how to do curves. I think the implementation would need to be aware of how they’re represented with the formulas so that they can be symbolically manipulated.
I'm realizing also that I think `screen_point()` needs to have special treatment as some kind of soft constraint that only applies when the geometry is not otherwise fully constrained.
- (Inline) I'm a little concerned about `screen_point`, because to me it reads as "screen space", as in it is dependent on the camera position, which seems odd. I think I need a bit more explanation of what that means (the `print_point` both helped and harmed what I thought my understanding was).
Yes, this needs work.
- (Inline) I could use a little more explanation around what it means to be fully constrained in this paradigm's sense. I think it means, in the mathematical sense, that the equation we're solving has more than one solution, but that seems distinct from the CAD meaning of the term, in which one or more positions are "movable". I think they are the same underlying idea in both senses, but in CAD you typically display "not fully constrained" sketch geometry (traditionally, black geometry is constrained and blue geometry can be moved around).
I don't know what you mean. Maybe I need to read up on how CAD software uses the term. I intended "fully constrained" to mean that a variable, like the X-coordinate of a point, has a unique and computable value. If this isn't the case, we cannot know where to display it on the screen.
- (Inline) Am I understanding correctly that this would actually use Datalog under the hood, or just conceptually, and continue to be in Rust?
I imagine that we'd either write our own Datalog engine in Rust or use a Rust library, like this one. The Rust compiler uses chalk, which is another guide.
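Not Crepe or chalk themselves, but a toy Python illustration of the Datalog-style fixpoint evaluation such an engine performs, using the classic transitive-closure rule:

```python
# Rule: reachable(X, Z) :- edge(X, Y), reachable(Y, Z).
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def reachable(edges):
    # Base case: reachable(X, Y) :- edge(X, Y).
    facts = set(edges)
    while True:
        # Apply the rule to all facts derived so far.
        derived = {(x, z) for (x, y) in edges for (y2, z) in facts if y == y2}
        if derived <= facts:
            return facts  # fixpoint: no new facts
        facts |= derived

closure = reachable(edges)
```

A constraint engine does more than this (unification, arithmetic), but the iterate-to-fixpoint core is the same idea.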
- (Inline) This seems oriented (correctly) toward robustly describing 2D geometry, but I'm interested in how this maps into 3D geometry as well? This would help me understand what an "extrude" and "revolve" map to in this realm of thinking.
I don't know. We'll have to work through more examples. Part of me thinks that it's just more geometry. I think Bézier curves are pretty straightforward math. But this is beyond my expertise. Until we try it, I don't know what challenges we'll face. Even curves in 2D are kind of over my head. 3D is probably even more challenging.
- (Inline) How is construction geometry represented?
If I understand correctly that construction geometry is geometry that is used only for building the model, but not intended to be exported or manufactured as part of the model, this language design currently doesn't treat it differently from other geometry. All geometry that has known coordinates will be displayed.
Perhaps we could add something similar to `hide`, but instead of not displaying, it omits the geometry from export.
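A tiny Python sketch of that split (the `construction` flag is an assumption, not existing KCL):

```python
# Hypothetical model: construction geometry is displayed like everything
# else, but omitted from export.
geometry = [
    {"name": "l1", "construction": False},
    {"name": "centerline", "construction": True},  # helper geometry only
]

# All geometry with known coordinates is displayed...
displayed = [g["name"] for g in geometry]
# ...but only non-construction geometry is exported.
exported = [g["name"] for g in geometry if not g["construction"]]
```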
- (Inline) During KittyCAMP 2024 we discussed the benefits of "having every segment be relative unless a 'fixed' constraint is applied". How are relative lines represented in this approach at all?
Yes, you're absolutely right. I completely screwed this up. I want everything to be relative. I will need to change a few things. Maybe this will inform how to fix `screen_point()` et al.
- How does this proposal interface with tags? Does it preclude the need for them, since every entity is represented explicitly? I don't think that is the case.
I believe that by giving everything a variable by default, like `p1` and `l1` in the examples, we're essentially tagging everything. In fact, we're able to tag points, which I don't think KCL can currently do.
I've brought this up before: tag declarators in KCL today are essentially output parameters. We could rearrange return values of stdlib functions so that we could use standard variable bindings in place of local tags, but it would be more verbose. The whole thing where they modify `.tags` of the sketch is a separate thing (one of them).
If we can think of ways to display partially-constrained geometry, perhaps with default values or hints for the unconstrained properties, would that support proper components in the same manner? More of a UX challenge for us than a language one.
I mentioned in a comment that I think it would be nice if, when you drew in the UI, it essentially created fallback coordinates/offsets that are only used if something else in the program doesn't constrain it to be somewhere else. But this is very under-baked. I'm not sure how we could implement this.
> I believe that by giving everything a variable by default, like `p1` and `l1` in the examples, we're essentially tagging everything. In fact, we're able to tag points, which I don't think KCL can currently do.
>
> I've brought this up before that tag declarators in KCL today are essentially output parameters. We could rearrange return values of stdlib functions such that we could use standard variable bindings in place of local tags. But it would be more verbose. The whole thing where they modify `.tags` of the sketch is a separate thing.
Yes this makes sense; it aligns exactly with what you and @lf94 have been saying about tags as output parameters, another way of declaring.
Some high-level comments (I have thoughts on syntax and stuff, but the details don't matter too much at this stage). I've addressed the proposal on its own terms (i.e., my comments are purely technical), but also this would be a big change at a late stage, and even once we implemented it, it would require a lot of iteration to get right. So I think that adopting the proposal is more of a business decision than a technical one.
I like the idea of concretely specifying constraints, but making the whole language a constraint solving problem would have some big downsides:
All 4 of these sub-points are great. @nrc if you don't mind making your list numbered, it would make referring to points a bit easier in conjunction with quote replies.
/r/ProgrammingLanguages thread from yesterday: programming languages where algebra is a central feature of the language
One link that has a neat interactive demo: Constrain - a JS library for animated, interactive web figures, based on declarative constraint solving
First of all, I really appreciate you walking through how the code is generated for each of the user's UI interactions; syntax proposals without these walk-throughs are missing important context.
`||l2|| = 1.2 * ||l4||` and `parallel(l1, l2)` from two of your examples seem incongruent to me. I'm mostly talking about syntax: one uses special syntax, and the other uses a function call.
But another perspective is that one defines a length in reference to another, while the other just defines a higher-level constraint. I can imagine that either

```
equalLength(l2, l4)
parallel(l1, l2)
```

or

```
||l2|| = 1.2 * ||l4||
<<l1<< = <<l2<< // <<line<< is an arbitrary syntax for angles
```

would achieve similar results (NOT equivalent, but similar in terms of user intention). Maybe we should choose one (both is also fine), but I kinda like the expression syntax; it at least defines the length/angle of one of the segments as the reference length/angle.
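For what it's worth, both spellings could desugar to the same thing under the hood. A toy Python sketch (hypothetical names, not a real design) where either surface syntax lowers to one residual the solver drives to zero:

```python
import math

def length(seg):
    # A segment as a pair of 2D points.
    (x1, y1), (x2, y2) = seg
    return math.hypot(x2 - x1, y2 - y1)

# `equalLength(l2, l4)` and `||l2|| = ||l4||` could both lower to this
# residual; only the surface syntax differs.
def equal_length_residual(l2, l4):
    return length(l2) - length(l4)

l2 = ((0, 0), (3, 4))   # length 5
l4 = ((1, 1), (4, 5))   # length 5
```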
Question: what happens if something is over-constrained? I imagine in the UI we will have a way to stop users from adding extra constraints to already fully constrained segments, but we can't stop users from typing extra constraints. Assuming the constraint solving is done all within KCL (no engine needed), are we able to show diagnostics in the editor pointing out the problem?
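One way such diagnostics could work, sketched in Python (the representation is my assumption): record the source line of every constraint, then after solving, report every constraint whose residual is still nonzero.

```python
# Each constraint carries its source line and a residual function that
# is zero when the constraint is satisfied.
constraints = [
    {"line": 4, "residual": lambda v: v["x"] - 10},  # x = 10
    {"line": 9, "residual": lambda v: v["x"] - 12},  # x = 12 (conflicts)
]

def diagnose(constraints, solution, tol=1e-9):
    # Report source lines of constraints the solution fails to satisfy.
    return [c["line"] for c in constraints
            if abs(c["residual"](solution)) > tol]

# Suppose the solver settled on x = 10; the second constraint conflicts.
conflicts = diagnose(constraints, {"x": 10})
```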
With `<expression> <relational operator> <expression>`, do you think there's confusion with using the same symbols like `=` and `<`, since they have other uses?
The resolution strategies section makes me a bit nervous. Am I right in saying that this comes as a result of allowing the `<` or `>` relational operators? (Am I understanding correctly?) If so, maybe we just don't include this initially?
As a side note:

```
mySegmentAngle < 370
maximize(mySegmentAngle)
```

Maybe not important, but maximizing an angle to 370 is really maximizing it to 10, ya know. Feels a little uncomfortable.
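Right: angles wrap modulo 360, so one mitigation (my assumption, not something in the proposal) is to normalize before comparing:

```python
# Normalize an angle in degrees to the range [0, 360).
def normalize_angle(deg):
    return deg % 360

# "Maximize toward 370" is really "toward 10" once normalized.
assert normalize_angle(370) == 10
```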
You lost me completely in the Displaying Geometry section. Why can't unconstrained geometry be displayed? I thought that the use of `screen_point(x: 10, y: 12)` gives all of the segments their initial values, which would then be overridden by the constraints? I guess I'm misunderstanding something.
Typing in `||` and us then going into a UI mode where they select a segment seems very jarring to me. I think bog-standard autocomplete makes more sense, and we have a separate UI for adding common constraints (click the equal-length button, then select two segments).
I have a question around tags. Obviously they are not needed for defining 2D constraints anymore with this paradigm, but tag use goes beyond sketches, for selecting faces etc. Will this work with tags, or are tags not needed because

```
p1 = screen_point(x: 10, y: 12)
p2 = screen_point(x: 20, y: 13)
l1 = line(p1, p2)
```

`l1` becomes the tag?
Lastly, I'll link the reasons why I came up with the current chaining syntax and why I didn't want to go down a solver path: https://github.com/KittyCAD/modeling-app/issues/111. The alternative I'm comparing against is very different from this proposal.
I feel like I didn't see any examples of dimensions being constrained here, like how would a user specify that lineA is 5 units away from lineB?
@Irev-Dev,
Obviously, this whole doc is very under-baked. So take everything with a grain of salt. It's just ideas, not fully thought out.
> `<<l1<< = <<l2<< // <<line<< is an arbitrary syntax for angles`
Interesting. I like weird language ideas 🙂
> Question: what happens if something is over-constrained?
It would show an error displaying the source lines of all constraints that conflict.
Yes, this is a problem in certain real programming languages. If everything is a constraint on equal footing, who's to say which one is wrong? I would love it if we could find a sweet spot, the way programming languages have mostly settled on specifying types on function signatures. That's authoritative: if a call site differs, the call site is wrong, not the function signature. Contrast this with call sites constraining what types function parameters are; that's what would happen if you treated all constraints as the same.
> Do you think there's confusion with using the same symbols like `=` and `<` since they have other uses?
Yes, I think it can be confusing to distinguish between pure computation and constraints, the main difference being which way data flows: into parameters or out. This is why I invented the split between `fn` and `constraint`. But I agree that it can be confusing which context you're in. Since I wrote this, I've been reading about logic programming and functional logic programming. The latter tries to unify logic programming and functional programming to get the best of both, and that's exactly what I'd like to do.
> Am I right in saying that this comes as a result of allowing the `<` or `>` relational operators? (Am I understanding correctly?) If so, maybe we just don't include this initially?
You're exactly right. It's because of inequalities. They complicate things. Dropping them would definitely simplify things.
(See update below.)
> You lost me completely in the Displaying Geometry section. Why can't unconstrained geometry be displayed?
Yes, I screwed this up. I conflated known/unknown with constrained/unconstrained. They're two separate things that need to be included in the design and implementation. When the user draws a piece of geometry in the UI, they give it a default/fallback known coordinate, meaning we can always display it. But constraints may move it somewhere else; solved constraints override the default.
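A minimal Python sketch of that split (the dict representation is an assumption): defaults make every coordinate known and displayable, and solver output overrides them where it exists.

```python
# Fallback coordinates from UI drawing: always known, always displayable.
defaults = {"p1.x": 10, "p1.y": 12, "p3.x": 21, "p3.y": 23}

# Values the constraint solver pinned down (e.g., rounded output from
# the distance constraint); unsolved coordinates are simply absent.
solved = {"p3.x": 17.07, "p3.y": 19.07}

# Solved constraints override the defaults for display.
display = {**defaults, **solved}
```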
> ```
> p1 = screen_point(x: 10, y: 12)
> p2 = screen_point(x: 20, y: 13)
> l1 = line(p1, p2)
> ```
>
> `l1` becomes the tag?
Yes. That's how I imagine it.
> why I didn't want to go down a solver path KittyCAD/modeling-app#111
Interesting. I will give this a read. I really missed you last week!
> I feel like I didn't see any examples of dimensions being constrained here, like how would a user specify that lineA is 5 units away from lineB?
Is this what you mean? My first comment on the PR above: https://github.com/KittyCAD/kcl-experiments/pull/13/#issuecomment-2427883926
But if you think about it, that is only partially constrained.
I tried to work through the actual derivation by hand. It's long and reminds me of the painful parts of high school math classes. The exercise was helpful, though, because after you do all the symbolic substitution, at a certain point, you end up with something like this:
```
// Default values from UI drawing.
default(p1.x) = 10
default(p1.y) = 12
default(p2.x) = 20
default(p2.y) = 13
default(p3.x) = 21
default(p3.y) = 23

// From line equations.
p2.y - p1.y = ((p2.y - p1.y) / (p2.x - p1.x)) * (p2.x - p1.x)
p3.y - p2.y = ((p3.y - p2.y) / (p3.x - p2.x)) * (p3.x - p2.x)

// From the Euclidean distance.
(p3.x - p1.x)^2 + (p3.y - p1.y)^2 = 100
```
What do we do now? Maybe I'm just not that good at math, and this is a system of equations that's trivially solvable that I just can't see. But at a certain point, it seems like you have to start plugging in numbers using the default values. After all, we know they're only partially constrained. But how do we do that? An obvious approach is to start with `p1`, but that's so arbitrary.
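One resolution strategy (my assumption, not part of the proposal): treat the defaults as soft constraints, keep `p1` pinned to its default, and move `p3` the minimum distance needed to satisfy the hard constraint, i.e., project it onto the circle of radius 10 around `p1`. In Python:

```python
import math

# Defaults from the UI drawing.
p1 = (10.0, 12.0)
p3 = (21.0, 23.0)

# Hard constraint: (p3.x - p1.x)^2 + (p3.y - p1.y)^2 = 100.
# Keep p1 at its default and project p3 onto the circle of radius 10
# around p1, moving it the minimum distance along the p1->p3 direction.
dx, dy = p3[0] - p1[0], p3[1] - p1[1]
r = math.hypot(dx, dy)
p3 = (p1[0] + 10 * dx / r, p1[1] + 10 * dy / r)
```

Which point to pin is exactly the arbitrary choice the text worries about; "minimize deviation from all defaults" would be a less arbitrary objective but needs a real optimizer.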
Update: I think I made a mistake with the square root. I think we need to derive the fact that the thing inside the square root is greater than zero; otherwise, we'd have complex numbers. So I don't think inequalities can go away completely. But maybe we just error out instead of allowing for `maximize` and `minimize`.
I just read https://github.com/KittyCAD/modeling-app/issues/111.
The claim is that this CadQuery example is hard to read or difficult to understand.
```python
import cadquery as cq

result = (
    cq.Sketch()
    .segment((0, 0), (0, 3.0), "s1")
    .arc((0.0, 3.0), (1.5, 1.5), (0.0, 0.0), "a1")
    .constrain("s1", "Fixed", None)
    .constrain("s1", "a1", "Coincident", None)
    .constrain("a1", "s1", "Coincident", None)
    .constrain("s1", "a1", "Angle", 45)
    .solve()
    .assemble()
)
```
I tried to reproduce this in KCL, but it wasn't straightforward at all. I couldn't figure it out with our current arc(). I'm imagining what it might look like once we implement 3-point arc: https://github.com/KittyCAD/modeling-app/issues/1659. After about 15 minutes of trial and error, I came up with this. (The angle between the line and the arc is wrong, but it's the general shape.)
```
sketch001 = startSketchOn('XZ')
  |> startProfileAt([0, 0], %)
  |> arc({
       angleStart: 90,
       angleEnd: -120,
       radius: 100,
     }, %)
  |> close(%)
```
... which was actually completely unintuitive. I stumbled on this solution of using `close()` as the straight segment. I first tried to start the path with the straight segment and arc to the end. I was trying to figure it out with arc(), couldn't, then imagined how it might work with a hypothetical 3-point arc, which seemed easier. But I feel like the solution I ended up with is a hack that won't work for anything more complicated. `close()` is completely opaque. nrc pointed out that it might be nice to close a sketch using an arc or some other kind of segment, other than a straight line.
So we again have lots of flavors of all our functions. They're opaque in that you as a user of KCL couldn't create such a thing yourself.
I've found it to be common that I need to rotate my paths so that the starting point can change.
The claim was that direct computation using the style of KCL of today was more concise and easier to follow. Going through this exercise convinces me even more that KCL has a predefined way of doing things, and if a user tries to stray outside that way at all, they're SOL.
I would even argue that the CadQuery example is indeed easier to read because our target audience thinks this way. They think in terms of high level things like lines being parallel or points being coincident. On the other hand, they do not think in terms of directly computing points. I think the example is actually pretty intuitive.
Direct computation of points in a path is easier to read or consume if you're trying to construct the points as the implementer. They correspond one-to-one with the output. But as a user, there's a big difference in mental model. @jgomez720 expressed frustration with how all line segments in KCL/the Modeling App have a direction. I understand why this is, and after talking with Alyn about constraints, I think leaning into deltas (AKA relative path segments) is generally what we want under the hood. But could the UI shield users from this somehow?
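A small Python sketch of delta-based segments (illustrative only, not KCL's actual representation): each segment stores an offset from the previous point, and absolute points are recovered by accumulation, so moving an early point shifts everything after it.

```python
# Start point plus per-segment offsets, cf. p1, p2, p3 from the first example.
start = (10, 12)
deltas = [(10, 1), (1, 10)]  # p1->p2 and p2->p3

def to_absolute(start, deltas):
    # Accumulate offsets into absolute points.
    points = [start]
    for dx, dy in deltas:
        x, y = points[-1]
        points.append((x + dx, y + dy))
    return points

pts = to_absolute(start, deltas)
```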
@nrc brought up the idea that the UI could withhold writing KCL until the user exits sketch mode and "commits" to a sketch. The nice thing about this is that the user could do all kinds of complicated edits on lines and points in any order, and then once they've built up an entire thing, only then does the tool need to write the corresponding KCL. This has some nice effects. For example, because the tool knows the entire path, it can group things cleanly in the source.
The downside is that the user loses the live one-to-one connection between UI operations and KCL changes, which was always intended to aid in users learning KCL. And presumably users would want to edit existing sketches, so this doesn't actually save us much work in KCL refactoring because we'd still need to be able to understand and refactor previously created sketches to properly edit them.
I think this is an interesting, under-explored approach. Something might have to give.
To be clear, I'm not trying to say that the current way is so bad and a solver approach is so much better. I think that we need to address the big difference in mental model.
One thing I'd like to add is how in the initial draft of this PR, I conflated the idea of a point being known with it being constrained. When people pointed this out, I immediately thought it was an oversight that needed to be fixed.
But the more I think about this, I'm actually not so sure. I think that part of why other CAD tools are so unintuitive in the solver department is because of these "initial guesses" for locations of points. I still haven't worked through the details yet, but it's something I'm thinking about. The whole idea of a "default" that somehow informs how things get solved is actually pretty weird.
Rendered
This is a very rough sketch of an idea that resulted from conversations during KittyCAMP 2024.