d4tocchini opened this issue 7 years ago
Differentiating timing for different properties isn't in the compiler, I don't think. All of this actually generates vertex shaders, so I may have skimped a bit on the fancy bits (since it generates shitpiles of shader code). However, I will have to rewrite this code entirely when I switch out my compiler, so I can reimagine it better then.
looks like the helper functions you have for easing within the shader calls are more than sufficient; the declarative state tree thingy would be nice-to-have but no biggie. I enjoy the frontier.
i have to say, makepad's state transitioning is a breath of fresh air amongst the react replicants.
while you're imagining... a couple of killer features come to mind in this department :)
`.` or `/` delimiting of parent/child states. you could go full automata with parallel state-paths & formal action edges in the state graph; would earn brownie points but idk
I've been playing around with a minimal lib (will open-source it eventually) similar to @davidkpiano's xstate slides. I've found simply declared statecharts measurably reduce bullshit UI code. the concept feels underutilized by front-enders, or grotesquely overcomplicated; such an API should have a surface area you can count on one hand, in a JSON- or protobuf-like form.
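for what it's worth, here's a sketch of the count-on-one-hand surface area i mean. the chart shape, event names, and `transition` function are all made up for illustration, not xstate's actual API:

```javascript
// Hypothetical minimal statechart: plain JSON-ish data plus one
// pure transition function. Nothing here is a real library API.
const buttonChart = {
  initial: 'idle',
  states: {
    idle:    { on: { POINTER_OVER: 'hover' } },
    hover:   { on: { POINTER_OUT: 'idle', POINTER_DOWN: 'pressed' } },
    pressed: { on: { POINTER_UP: 'hover' } },
  },
};

// The entire runtime surface: one pure function from (state, event) to state.
function transition(chart, state, event) {
  const next = chart.states[state]?.on?.[event];
  return next ?? state; // unknown events are ignored
}

let state = buttonChart.initial;
state = transition(buttonChart, state, 'POINTER_OVER');  // 'hover'
state = transition(buttonChart, state, 'POINTER_DOWN');  // 'pressed'
```

the whole point being that the declaration is inert data you could ship over the wire, while the interpreter stays a few lines.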
So the problem is that the code I have that executes the states/queues is a shader that operates on attribute data (otherwise I can't have 10k animations in flight without eating JS time). I have a max queue depth of 2 (current state and next state). As for trees in shaders, I currently don't see how I can do that, but maybe I misunderstand what you'd want. A bit of example code would help here.
But yeah, per-prop customized interpolation I can add to the next compiler.
yes, i'm getting close to having a solid demo to show you... in the meantime, maybe a little more context before clarity
my full focus is on adapting the whole UX enchilada to the unknown & vague whims of our users, from the paths the app can take to painting every pixel. we employ a lot of constraint solving & ML to do something like this: design & color (you can see a demo of real-time tweaks).

it's super hard not to hard-code every variation of UI behavior, or not to fall into the OOP inheritance trap. the kind of button that is ultimately collapsed for a section, or whether or not an ornamental flourish resonates with the font, which resonates with the vibe of the content, will likely impact the stateful possibilities of the UI. one section may be composed of buttons with additional animation states or scroll behavior. being able to declaratively compose the statefulness of UI components helps a ton in composing behavior in general.
a simple UI button can explode in possible states when you want to make it not just clickable or active, but morphable from shape to shape or icon to icon; whether the icon is SVG, font-embedded, or otherwise may influence the intermediate transitions, etc. (see that spiral logo button for example)
it would be cool to be able to create & compose views with potentially deep hierarchical states declaratively. I know what you mean by the queue; ultimately a single state should be fully collapsed and thus really just one state, but encoded in the name would be the means to compose the props of the aggregate state, like:

`selected` -> `selected/uber-highlighted/forgo-silly-parallax-fx`

`who-dat` -> `logging-in` -> `logged-in/checking-moneys` -> `logged-in/checking-moneys/asking-mom` -> `logged-in/owes-money/do-not-pass-go`

.... and encoded in the statechart declaration, the logic to traverse the graph of possibilities
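a sketch of what i mean by collapsing a `/` path into one aggregate state: walk the path root to leaf, merging the props declared at each segment, deeper segments winning. all state names and props here are hypothetical:

```javascript
// Hypothetical per-segment prop declarations; keys are state paths.
const stateProps = {
  'logged-in':                            { bgColor: '#222', spinner: false },
  'logged-in/checking-moneys':            { spinner: true },
  'logged-in/checking-moneys/asking-mom': { bgColor: '#800' },
};

// Collapse a '/'-delimited path into one flat aggregate state.
function collapse(path) {
  const segments = path.split('/');
  let merged = {};
  // walk root -> leaf so deeper segments override shallower ones
  for (let i = 1; i <= segments.length; i++) {
    const key = segments.slice(0, i).join('/');
    merged = { ...merged, ...(stateProps[key] ?? {}) };
  }
  return merged;
}

collapse('logged-in/checking-moneys/asking-mom');
// deeper overrides win: { bgColor: '#800', spinner: true }
```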
i've come to find FSM-driven UI logic offers a simple alternative to hardcore constraint reasoning in underconstrained situations, where ant-brain-like logic is suitable for stochastically choosing among large sets of equally weighted possible paths
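that ant-brain traversal is really just this, assuming a made-up edge table where equally weighted options are picked uniformly at random:

```javascript
// Hypothetical stochastic step: among equally weighted outgoing
// edges, pick one uniformly. rng is injectable for testing.
function stepRandom(edges, state, rng = Math.random) {
  const options = edges[state] ?? [];
  if (options.length === 0) return state; // terminal state, stay put
  return options[Math.floor(rng() * options.length)];
}

// Illustrative edge table; state names are made up.
const edges = {
  idle:     ['browsing', 'searching'],
  browsing: ['idle', 'buying'],
};
```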
anyhoot, i hope that makes sense; even better if you can show me a truer path. i'm traveling right now, forgive the long-windedness and lack of code!
Ok, thanks for your writeup, I think I understand what you mean. I've been noticing the problem of bifurcating states in the size of the shader: since I currently only have a 'single' state, not overlayable states, I would need to flatten the entire width of the state graph into the shader, which means every combination becomes its own state. To be honest I haven't solved that problem; I just decided not to have that many UI states ;) The way I'm solving it is to allow more programmable animation at the draw-call side instead of at the shader level. So the solution would be: if you need 10k buttons with hovers, use the shader states; if you need 3 trillion overlaid app states on a buy button, by all means use some kind of programmable drawing fabric (the compiler I'm building will fix that)
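To make the blow-up concrete: without overlayable (parallel) states, the shader has to enumerate the full cartesian product of every independent state axis. A sketch with made-up axes, not makepad code:

```javascript
// Three hypothetical orthogonal state axes for one widget.
const axes = {
  pointer: ['idle', 'hover', 'pressed'],
  auth:    ['logged-out', 'logging-in', 'logged-in'],
  theme:   ['light', 'dark'],
};

// Flattening: every combination becomes its own flat shader state.
function flatten(axes) {
  return Object.values(axes)
    .reduce(
      (acc, options) => acc.flatMap(prefix => options.map(o => [...prefix, o])),
      [[]]
    )
    .map(combo => combo.join('+'));
}

flatten(axes).length; // 3 * 3 * 2 = 18 flat states, and it only gets worse
```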
@makepaddev
precompiling the animation frames makes sense for time-fixed easing, but what about spring-based animations / smoothing, where duration is not fixed & the anim function depends on the current velocity toward the target value? Like those in rebound or react-motion. Of course, it's trivial to compute in JS and redraw, but I'm assuming there's enough benefit to keeping this logic on the GPU. All I need is a way to keep a prop's velocity (`value - prevValue`) between vertex/pixel calls. Is there a reasonable approach you can suggest? I tried playing around with propTypes like `output` and `uniform` to no avail. The only way I can imagine right now would be storing the velocity values in a texture, similar to how the heatmap renders in your shop example, but I'm not sure how I could then access that texture from another view/shader... I know webgl2 has some sugar that would help here...
FYI, I'm hoping to push a fork in the next week or so with a bunch of new UI components & examples :)
Thanks!
ok, it's not so bad if i focus more on raw shader code and less on the js-gl bridge. i just needed to go back to basics (vis-à-vis thebookofshaders & shadertoy-ing around) and give up my ol' JS OOP ways...
speaking of webgl2, is that on your list for the compiler rewrite?
Hey, ok, so I don't have write-out from a shader other than a color. One of the things I'm designing into the compiler right now is making it easier to write GPGPU code, which could, for instance, run those kinds of springy animations in a shader. The `output` propType is me preparing for MRT, which currently isn't in there because it's not really supported everywhere. So yes, webGL2 / MRT / GPGPU is what the new compiler will target.
Makepad right now is really aiming at baseline webGL1 without float render targets, and is thus limited that way.
So what I'm trying to solve in the next compiler is automatic input/output mapping of float structures and MRT. The compiler will codegen/polyfill solutions if your underlying webGL API does not support these features.
Right now I cannot do this with the existing compiler stack, since I generate plain-text shaders in the worker and hand them to the main thread. The new design will run part of the compiler in the worker and part in the main thread, where the 'platform' can figure out what shader needs to be generated for your needs.
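(For readers following along: the classic shape of that kind of GPGPU code is ping-pong rendering, with state kept in two float textures, reading from one and writing to the other each pass, then swapping. A CPU sketch of the same pattern, with illustrative names and constants, not makepad API:)

```javascript
// Two state buffers stand in for two float textures; the kernel is
// what the fragment shader would compute per texel.
const N = 1024;                    // one spring per "texel"
let src = new Float32Array(N * 2); // interleaved [value, velocity]
let dst = new Float32Array(N * 2);

function springKernel(value, velocity, target, dt) {
  const force = -170 * (value - target) - 26 * velocity;
  const v = velocity + force * dt;
  return [value + v * dt, v];
}

function pass(target, dt) {
  for (let i = 0; i < N; i++) {
    const [x, v] = springKernel(src[i * 2], src[i * 2 + 1], target, dt);
    dst[i * 2] = x;
    dst[i * 2 + 1] = v;
  }
  [src, dst] = [dst, src]; // the "ping-pong": swap read/write targets
}

for (let f = 0; f < 300; f++) pass(100, 1 / 60);
```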
Makes total sense, and I figured as much; I shelved the spring physics and went with a simpler animation approach. There's enough horsepower to ride now.
for example, when transitioning via `setState('selected', false, {dx: this.dx})` to the state below, how can I animate `Bg` props with different timing functions than `Text` props? Is there a way to configure this in the state tree or a lower-level API? I'm assuming nested states are not possible? I can imagine how to do it manually in the `pixel()` render call, but that's no fun! I'm considering falling back to rapidly setting multiple states in consecutive frame updates to coordinate the animation, but that feels kind of wrong too...
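something like this hypothetical shape is what i'm fishing for; none of it is real makepad API, just the kind of per-prop-group timing surface the question implies:

```javascript
// Made-up declarative state entry where each prop group carries its
// own timing, so Bg and Text can ease independently.
const selectedState = {
  Bg:   { props: { color: '#fff', dx: 0 }, time: { duration: 0.4,  ease: 'outExp' } },
  Text: { props: { opacity: 1 },           time: { duration: 0.15, ease: 'linear' } },
};
```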