KittyCAD / modeling-app

The KittyCAD modeling app.
https://kittycad.io/modeling-app/download

Discussion on Prioritization of 3D Features over 2D->3D Sketching and Implementation of CAD Workflow and Communication Layer #24

Closed: Irev-Dev closed this issue 1 year ago

Irev-Dev commented 1 year ago

Edit: I've updated the description to match what the thread ended up being about. And the original intention of this ticket has been fleshed out more over here: https://github.com/KittyCAD/untitled-lang/issues/29


hanbollar commented 1 year ago

sketch (ie 2d->3d) is a lower priority for now than any of the 3d aspects, just as a heads up, so this task should be moved to a later stage in the MVP

we can go over the order of those tasks in our next 1:1 to get a better handle on what's implementable now with the current team and what gets pushed back

Irev-Dev commented 1 year ago

ok, what are the 3d features that are being prioritised? is it like primitives? or something else?

I feel like maybe there's some context I'm missing? Even in the last "API design for modeling" meeting we had last week, we were talking about extruding polygons. I know that discussion is still ongoing, but has something changed since then?

There's an argument to continue with this work even if it can't be matched on the engine side in the near future. There are parts of the code-UI interop that I'll be able to test with a robust use case I'm familiar with. An example I've got in mind: selecting two lines and specifying that they should be the same length is probably going to involve creating a new variable that holds the length of both lines, and then using that variable in both function calls, possibly swapping out the line function call for something else that's better suited to taking a length instead of a coordinate. Details aside, selecting multiple lines and modifying the AST from there is not something I've been able to test so far.
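
To make that concrete, here's a minimal sketch of the kind of AST edit being described; the node shapes, the `segLen` name, and the `line` call are all hypothetical stand-ins, not the app's actual grammar:

```typescript
// Hypothetical, simplified AST shapes, purely for illustration.
type Expr =
  | { kind: "number"; value: number }
  | { kind: "identifier"; name: string };

interface CallNode {
  fn: string;   // e.g. "line"
  length: Expr; // the argument the constraint targets
}

interface Program {
  variables: Record<string, number>;
  calls: CallNode[];
}

// "Equal length" constraint: hoist one line's length into a shared
// variable and rewrite both selected calls to reference it.
function applyEqualLength(prog: Program, a: CallNode, b: CallNode): Program {
  const len =
    a.length.kind === "number" ? a.length.value : prog.variables[a.length.name];
  const varName = "segLen"; // real code would generate a fresh, unused name
  const shared: Expr = { kind: "identifier", name: varName };
  a.length = shared;
  b.length = shared;
  return { ...prog, variables: { ...prog.variables, [varName]: len } };
}

// Usage: after the rewrite, both calls read the same `segLen` variable,
// so editing that one value keeps the two lines the same length.
const prog: Program = {
  variables: {},
  calls: [
    { fn: "line", length: { kind: "number", value: 4 } },
    { fn: "line", length: { kind: "number", value: 7 } },
  ],
};
applyEqualLength(prog, prog.calls[0], prog.calls[1]);
```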

Irev-Dev commented 1 year ago

Maybe you meant that issues like #16 should take priority?

Which makes a lot of sense, I'm just blocked on that atm since I thought those endpoints weren't exposed yet.

hanbollar commented 1 year ago

> Maybe you meant that issues like #16 should take priority?
>
> Which makes a lot of sense, I'm just blocked on that atm since I thought those endpoints weren't exposed yet.

yea for now #16 should take priority over any 2d->3d planning atm, since 2d->3d sketching isn't a focus atm

and @iterion was going to expose them - @Irev-Dev let us know priority-wise when they're needed (if they're not already exposed) for your side, since they'll be the first features you'll need to work with in your app, and adam can push them thru as higher priority then

Irev-Dev commented 1 year ago

I can start using them as soon as they're ready.

hanbollar commented 1 year ago

sweet, @iterion could the exposure of the mesh ones be made a priority for this week (if not already)?

jessfraz commented 1 year ago

Hey just want to chime in here:

I think we should prioritize the sketch-and-extrude workflow:

I think there is value in doing it both ways, but for a lot of the products that exist in the world today it is going to be way more intuitive to users to work in a sketch and extrude workflow. BUT we can also help those users in the future in different ways where it is valuable. I think the freeform "clay modeling" is useful for characters or 3D printing but if you think about an iPhone or a laptop/TV/monitor it is much more constraint based and works well with the original CAD workflow. Plus people will get it out of the gate.

Hopefully that makes sense.

I want to make sure, and correct me if I'm wrong: if we are going down the road of implementing "dexahedron create" purely for testing coms, it should be more than just 5% useful. I think it should be > 50% useful, or else we are going the wrong direction for the wrong reason and should be spending that time implementing lines instead.

jessfraz commented 1 year ago

Thinking about this more, I really don't see the value in passing around meshes to start. Mostly because we won't be passing around meshes in the limit, we won't even be using REST in the limit, and we won't be using dexahedrons in the limit, so that's super not useful on multiple axes.

Instead I'd suggest we try, and not even connect it to the engine yet: let's just set up in api-deux the absolute most simple hello-world example of WebRTC and go from there. After we can successfully connect over WebRTC from the app, we can get a bit more clever and try passing a render from the engine, but let's start basic and only implement in the api repo first. Just whatever sample stream we could try.
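
To make the hello-world step concrete, here's a hedged loopback sketch using the standard browser RTCPeerConnection API; both peers live in the same page, so no signaling server or api-deux specifics are involved yet:

```typescript
// Minimal "hello world" over a WebRTC data channel, looped back locally.
// In the real setup one side would live in api-deux; here both peers are
// in-process, so ICE candidates are handed across directly.
async function helloWebRTC(): Promise<void> {
  const a = new RTCPeerConnection();
  const b = new RTCPeerConnection();

  a.onicecandidate = (e) => { if (e.candidate) b.addIceCandidate(e.candidate); };
  b.onicecandidate = (e) => { if (e.candidate) a.addIceCandidate(e.candidate); };

  const channel = a.createDataChannel("hello");
  channel.onopen = () => channel.send("hello world");

  b.ondatachannel = (e) => {
    e.channel.onmessage = (msg) => console.log("received:", msg.data);
  };

  // The standard offer/answer dance, done in-process instead of over a wire.
  const offer = await a.createOffer();
  await a.setLocalDescription(offer);
  await b.setRemoteDescription(offer);
  const answer = await b.createAnswer();
  await b.setLocalDescription(answer);
  await a.setRemoteDescription(answer);
}
```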

Then after that we could get more clever: perhaps try WebRTC vs. WebSockets for messages (with WebRTC for the constant stream), or pure WebRTC back and forth from the app. We basically need to decide the direction we want to go.

WebRTC makes the most sense for the server-to-client stream, so I think we are sure there. The lossiness of UDP is more a feature than a bug in our case: if we drop a few frames it's okay, we don't need this to be a 60fps gaming platform. And trying to implement dropping of frames in WebSockets (which are mostly meant for text) would be quite an overhead.

For the messages from the client to the server on how to modify the stream, it might make sense to use WebSockets since TCP is more reliable, but we might find UDP (WebRTC) works just fine. So we need to do some experimenting there, but it can be low overhead and doesn't need to touch the engine until we decide on the approach.
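
One shape that hybrid could take, sketched with made-up endpoint names (the `wss://api.example.com/...` URL and the message format are illustrative only):

```typescript
// Hypothetical hybrid: WebSockets (TCP, reliable, ordered) for modeling
// commands, WebRTC for the lossy video stream where dropped frames are fine.
const commands = new WebSocket("wss://api.example.com/modeling/commands");
commands.onopen = () => {
  commands.send(JSON.stringify({ cmd: "extrude", sketchId: "abc", distance: 10 }));
};

const pc = new RTCPeerConnection();
pc.ontrack = (e) => {
  // Rendering a view, not replaying state, so a dropped frame costs nothing.
  const video = document.querySelector<HTMLVideoElement>("#engine-stream");
  if (video) video.srcObject = e.streams[0];
};
```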

Also the nice thing about this is it gives the engine time to implement lines while we experiment outside the engine, since arguably using WebRTC versus WebSockets for the messages from the client to the server wouldn't affect that. So we can work in parallel while we figure out the communication layer, then once we have a better picture we connect it all together.

Irev-Dev commented 1 year ago

This all sounds really good to me, I'm definitely on board with matching existing CAD workflows.

And yeah, it feels like we were skirting around what we need to do to some extent. We can definitely figure out the WebRTC/WebSocket connections before we know exactly what the API shape is going to look like.

Irev-Dev commented 1 year ago

Following Jess's approach, we could also investigate how to get the connection to play nice with OpenAPI before we're ready to connect up the engine, if we wanted to.
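
One hedged way that could look: keep the WebRTC signaling (the SDP offer/answer exchange) on a plain HTTP endpoint that an OpenAPI spec can describe, so only the resulting stream lives outside REST. The `/webrtc/offer` path here is hypothetical:

```typescript
// Sketch: negotiate the WebRTC session over an OpenAPI-describable POST,
// then let the peer connection carry the actual stream.
async function connectViaRest(pc: RTCPeerConnection): Promise<void> {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  const res = await fetch("https://api.example.com/webrtc/offer", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ type: offer.type, sdp: offer.sdp }),
  });
  const answer: RTCSessionDescriptionInit = await res.json();
  await pc.setRemoteDescription(answer);
}
```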

hanbollar commented 1 year ago

~quick initial thoughts/reasoning below (disclaimer: see ‼️ section)~

~the reason im stating mesh for now is that we 'have' those basic endpoints to test already, so testing the back-and-forth interop and setting up other aspects with it is an easy thing to do~

~the sketch extrude is a second priority because it's not implemented yet, tho it will be available internally before we actually release this to the public~

~so we will have it, the public will know we have it, and they won't be 'misguided' by missing it~

hanbollar commented 1 year ago

~im going to stand firm on mesh first for this, just because it's a usability feature we can actually test internally for now while we're still hiring another engine person, since testing endpoints with the interop is a partial blocker for some of the stuff kurt is doing~

hanbollar commented 1 year ago

~kurt can still work on figuring out sketch extrude on his side, but there will not be anything engine-wise for him to interop with for a bit, so that's partially a waste for this initial step~

~which is why mesh interop as brute force ( 1) we have it 2) it's easy to set up engine-wise ) must come first~

hanbollar commented 1 year ago

~‼️~

~heads up - only partially read the commentary above - wanted to get my initial thoughts out before i drove this morning~

~‼️~

~will read fully after my apt (aka after 10.30a pacific) and probably edit 👍~

commentary added see: https://github.com/KittyCAD/untitled-lang/issues/24#issuecomment-1421089190

jessfraz commented 1 year ago

It really doesn't make sense to implement mesh fully up the stack if we're going to overhaul it away for a WebRTC stream eventually.

By testing a hello-world WebRTC sample we are testing the coms without needing the engine, so that once the engine is ready we can hook it up.

Mesh is driving off the road toward nowhere we actually need to be, whereas a sample WebRTC implementation gets us halfway there.

hanbollar commented 1 year ago

tl;dr: that was a good 2 cents @jessfraz - added some commentary for details, but as long as we're delaying attaching the engine and the app then this is fine


(as a note on the below - 3d doesn't mean triangles, 3d means 3d - the triangles interop was for testing purposes specifically - see the third section of this response, the one starting with 'yes it was...')

> I think there is value in doing it both ways, but for a lot of the products that exist in the world today it is going to be way more intuitive to users to work in a sketch and extrude workflow. BUT we can also help those users in the future in different ways where it is valuable.

from an engine perspective this is actually more complicated from the get-go (ie 2d then 3d), which is why most pure graphics starts with 3d

which basically means the engine will prioritize features as (3d, then 2d->3d sketch extrude), tho the ast/app can prioritize (2d->3d sketch extrude, then pure 3d) /as long as/ they're separated for the time being while those are being figured out

> I think the freeform "clay modeling" is useful for characters or 3D printing but if you think about an iPhone or a laptop/TV/monitor it is much more constraint based and works well with the original CAD workflow. Plus people will get it out of the gate.

I do have to add this caveat just as an fyi (in case this is a concern) - our 'clay modeling' will be 100% able to do the more constraint-based work, and do it intuitively - it just depends on what tools we give it

also tbh as a future note - once we have the ML in place for 2d->3d drawing (ie not sketch extrude, but the actual 'here's the diagram, here's the model') and/or some of the 3d->2d schematic visuals in place, we can lean people more toward that. that would let users do this code-gen modeling by editing the 2d schematic or editing the 3d output, whichever makes more sense for the edit, at the same time and in real time (like how we're currently discussing for codegen and visual interop)

> Hopefully that makes sense.

> I want to make sure, and correct me if I'm wrong: if we are going down the road of implementing "dexahedron create" purely for testing coms, it should be more than just 5% useful. I think it should be > 50% useful, or else we are going the wrong direction for the wrong reason and should be spending that time implementing lines instead.

yes, it was purely for testing coms, alongside the actual mesh deform functions - aka the original point of my adding them in the first place before vacation: for them to be able to test their communication layer for the larger data calculations and the whole pipeline (not just from api-deux but all the way from the engine, for that interop confirm), and not just text interactions

that is, so visual-wise kurt could test his end and the interop, and adam could test his end as well, and they could confirm they're getting/doing the same manipulations as the api calls go back and forth between them (ie it's a point of sanity with the data and the interaction - not 'the ast call from a point-and-click interop to code gen works', but 'calling an endpoint with this interop connection works' - aka the mvp of the mvp)

additionally @iterion had recommended we can even just keep the endpoints exposed in engine-api and not to customers (if that's a concern), since they're for internal testing specifically

the mesh deform ones will be useful to customers as individual endpoints eventually (separate from this app) /but/ we don't need to add more in that area for a while. if there's a concern about exposing those before other aspects of a modeling pipeline (since this keeps coming up - again, i'm stating the mesh deform ones, not the mesh create ones - we can 100% hold off on the create ones since they were filler to help with these coms), i'm fine with holding off on exposing the deform ones to customers until we have more of an actual 'non triangle modeling' basis in place

> Instead I'd suggest we try, and not even connect it to the engine yet: let's just set up in api-deux the absolute most simple hello-world example of WebRTC and go from there. After we can successfully connect over WebRTC from the app, we can get a bit more clever and try passing a render from the engine, but let's start basic and only implement in the api repo first. Just whatever sample stream we could try. ... Also the nice thing about this is it gives the engine time to implement lines while we experiment outside the engine, since arguably using WebRTC versus WebSockets for the messages from the client to the server wouldn't affect that. So we can work in parallel while we figure out the communication layer, then once we have a better picture we connect it all together.

if we don't connect to the engine for a bit - then i'm 100% ok with 2d sketching being the approach @Irev-Dev starts with. as a heads up @Irev-Dev, since we're adding more of a delay on connecting the app to the engine, we're going to need a more detailed meeting with you/me/josh to flesh out, from a ux perspective, the drawing && point-and-click <-> api call expectations for interactions, since they'll need to make sense from a future engine perspective. additionally we need to come up with the point where 'these are the features at which we /need/ to connect the engine to the app to confirm that functionality' (separate from confirming we can pass a graphics visual to the app in a simple get call)

additionally (fun fact) - once the 3d engine has a basis, the 2d extrusion will just come as 'extrude off of this shape'. from the customer's perspective this'll be 2d sketch to extrude, but tho it'll work for '2d', it'll also work for whatever shape side you'd want to extrude off of (flat or not), which'll be useful for text bumps, quick inlays, and more.

hanbollar commented 1 year ago

commentary added since i had some time before the apt - going to dash out my quick response that had the disclaimer

jessfraz commented 1 year ago

> from an engine perspective this is actually more complicated from the get-go (ie 2d then 3d), which is why most pure graphics starts with 3d

we aren't selling to graphics engineers tho, we have to meet people where they are, we cannot change their entire workflow out of the gate.

the concern isn't exposing the endpoints, the concern is that it is the wrong way on multiple axes: 1. we aren't using REST long term, 2. we aren't passing meshes long term, 3. creating a dexahedron isn't a typical cad workflow. I actually don't even see a point in implementing it at all in engine-api or api-deux, ie it does not test the coms at all since none of that is how we will communicate in the limit.

I do see the point in bringing up the smoothing functions, but that's not related to the gui.

hanbollar commented 1 year ago

> It really doesn't make sense to implement mesh fully up the stack if we're going to overhaul it away for a WebRTC stream eventually.
>
> By testing a hello-world WebRTC sample we are testing the coms without needing the engine, so that once the engine is ready we can hook it up.
>
> Mesh is driving off the road toward nowhere we actually need to be, whereas a sample WebRTC implementation gets us halfway there.

sry, was still in the middle of crafting the full brain dump when u added this response 😅

hanbollar commented 1 year ago

> > from an engine perspective this is actually more complicated from the get-go (ie 2d then 3d), which is why most pure graphics starts with 3d
>
> we aren't selling to graphics engineers tho, we have to meet people where they are, we cannot change their entire workflow out of the gate.
>
> the concern isn't exposing the endpoints, the concern is that it is the wrong way on multiple axes: 1. we aren't using REST long term, 2. we aren't passing meshes long term, 3. creating a dexahedron isn't a typical cad workflow. I actually don't even see a point in implementing it at all in engine-api or api-deux, ie it does not test the coms at all since none of that is how we will communicate in the limit.
>
> I do see the point in bringing up the smoothing functions, but that's not related to the gui.

sry - if you read the whole brain dump you'll see i agree with you (maybe i shoulda changed the order of my commentary)

most of the response after the '---' line was just me adding details, from an impl perspective, on what'll need to happen in what order before we can show this to the private beta as 'try this out'

the tl;dr i added is the tagline tho 👍

jessfraz commented 1 year ago

yeah I just don't want us hiding from hard things, WebRTC is going to be hard as well but we need to do it

hanbollar commented 1 year ago

> yeah I just don't want us hiding from hard things, WebRTC is going to be hard as well but we need to do it

agreed - again, the intent of the deform endpoints isn't hiding; this is 'this is available as a quick check, use it for testing while the engine does its build-out as well'

aight, at the dr apt now - @jessfraz we can discuss on a call later if there are more concerns about this, but from what i can tell we're aligned - lmk

jessfraz commented 1 year ago

sounds good!

Irev-Dev commented 1 year ago

Because the thread became somewhat detached from the original issue, I've created #29, and I'll close this as I think the discussion has finished. Feel free to open it again if there's more you want to add.