solidjs / solid

A declarative, efficient, and flexible JavaScript library for building user interfaces.
https://solidjs.com
MIT License

React Parity: Components & Context #18

Closed ryansolid closed 5 years ago

ryansolid commented 5 years ago

For me this is the last question that is hanging on me, so I'm looking for feedback and suggestions. I've implemented JSX Fragments, control flow with fallbacks, Portals, and Suspense. I've had a lot of fun solving similar problems to React but in completely different ways that are Web Component friendly and work with Fine Grained performance considerations. I feel now that most typical cases are covered and you can do some pretty amazing things.

But ultimately, a library like Solid does not need Components to work, and so uppercase Function Components are essentially just function templates. There are no context boundaries, no internalized state, etc. This is definitely the biggest departure for those familiar with React.

Components vs Templates

If you want to pass state variables to these templates, you essentially need to pass the whole state object or wrap them in function accessors to get fine grained resolution.

const Child = ({ getName }) => (
  <div>{( getName() )}</div>
);

const App = () => {
  const [state, setState] = createState({ name: 'John' });
  return <Child getName={() => state.name} />
}

With Web Components this isn't a problem, since children will be bound as DOM elements:

const App = () => {
  const [state, setState] = createState({ name: 'John' });
  return <child-elem name={( state.name )} />
}

Where does that leave us? I can simplify the binding syntax to use a similar auto-wrapping approach, but the child template is still dealing with a function. I could always wrap all props in a state variable and create computations based on the binding to update that state behind the scenes. But the overhead of doing that for simple Function components seems awkward. I'm not sure I like creating these arbitrary Component boundaries, as it sidesteps the power of a library like Solid: that UI or code modularity considerations are independent of your ability to optimally manage data flow.

Dependency Injection

Solid Components has a Context API, but Solid itself does not. The context in Solid is the computation or Root context, which is not necessarily tied to a specific DOM node. While this could work like it does in a typical framework like React, it wouldn't play nice with Web Components. So it makes sense to tie context to elements, but Solid Template Functions are independent of those boundaries.

It goes further than this: I can use control flow to inject context providers fairly easily, but the reading (consumer) side gets much more complicated, as the currently executing code, which is independent of its place in the hierarchy, needs to know it has the right context. The only constant is its owning context, or the Elements being rendered at that scope of templating. What it ultimately comes down to is that Fine Grained rendering, regardless of how nested the tree, flattens the problem. This is something I used to my advantage in the Sierpinski Triangle demo to show really controlled, finite asynchronous rendering with ultimate control and performance, but it bites us here pretty hard.

There are other ways to solve DI, like importing global registries as a variation on the Singleton, where dependencies are constructed and registered. This method is just as controllable for testing and is in the majority of cases effectively the same as hierarchical lookup, but it is definitely not as nice. I could fake it behind a lovely API that makes it look just like React, and no one would be the wiser, at least until they tried to nest contexts of the same type at multiple levels of the tree.
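The global-registry variation can be sketched in a few lines (the names here are illustrative, not a proposed Solid API). The single flat map is exactly why same-typed contexts can't nest at different levels of the tree:

```javascript
// Minimal Service Locator sketch: one module-level registry shared by the
// whole app. Testable (swap entries before each test), but there is only
// one slot per key, so nesting providers of the same type is impossible.
const registry = new Map();

function provide(key, value) {
  registry.set(key, value);
}

function inject(key) {
  if (!registry.has(key)) throw new Error(`No provider for ${String(key)}`);
  return registry.get(key);
}

// Usage
const ThemeKey = Symbol('theme');
provide(ThemeKey, { color: 'blue' });
console.log(inject(ThemeKey).color); // 'blue'
```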

I think I'm just not as familiar with what Angular or other non-React libraries do here, so I'm hoping investigation leads to a good option.

Debugging

I'm fairly happy with debugging, since the compiled code hides the complications of the triggers but exposes the direct imperative DOM operations and node creation. By dropping breakpoints right in your implementation code you can very transparently see what is going on, much more so than with a Virtual DOM or earlier non-compiled Fine Grained libraries. That said, working around their restrictions, I feel VDOM libraries have come up with really clever ways to improve the debugging experience, even through browser tooling. I'd love to know what sort of debugging experience you really like, or anything you would like to see with Solid. I played with the idea of a Chrome Extension, but mapping the real DOM back to Components or Context didn't feel very useful. It's possible a visualization like Cycle.js/RxJS is the better option here. This is an area where earlier Fine Grained predecessors have been pretty poor in tooling.


In any case, any thoughts on these topics would be much appreciated. I feel these are the final pieces to really solidify (excuse the pun) what the 1.0.0 release will look like.

ryansolid commented 5 years ago

A solution like this could work for components and context. It would be easy for the compiler to split {( )} dynamic expressions off and wrap them in a function, and to change from calling Function components directly to creating them with createComponent. Skipping the extra proxy wrap when there aren't any dynamic properties would be trivial. But it wouldn't cleanly support spread operators for dynamic data. It would also be awkward for HyperScript, since it wouldn't be able to leverage the compiler.

In one sense this might all be unnecessary, especially for those using Web Components. In another, maybe this is the missing piece.

let currentContext = {};

// A context is just a unique id plus an optional initializer.
export function createContext(initFn) {
  return { id: Symbol('context'), initFn };
}

// Registers a value on the current context frame.
export function createProvider(context, value) {
  return currentContext[context.id] = context.initFn ? context.initFn(value) : value;
}

// Walks up parent frames until the key is found.
function lookupContext(current, key) {
  return current[key] || (current.parent && lookupContext(current.parent, key));
}
export function useContext(context) { return lookupContext(currentContext, context.id); }

// Pushes a child frame for the duration of fn, then restores the parent.
function wrapContext(fn) {
  currentContext = { parent: currentContext };
  const ret = fn();
  currentContext = currentContext.parent;
  return ret;
}

// Proxies property access so dynamic props are invoked as functions.
function wrapProps(props, dynamicProps) {
  return new Proxy(props, {
    get(target, property) {
      const value = dynamicProps[property];
      if (value) return value();
      return target[property];
    }
  });
}

function createComponent(fn, props, dynamicProps) {
  if (dynamicProps) props = wrapProps(props, dynamicProps);
  return wrapContext(() => fn(props));
}

In other words:

// picture a Component like this
const Comp = props => <div>{( props.name )}</div>

// consuming with dynamic prop
const view = <Comp name={( state.name )} />

// compiles to
const view = createComponent(Comp, { children: [] }, { name: () => state.name })

Context still puts a bit of a cost on Components, but I imagine for simple non-dynamic Components there would be minimal overhead. Any thoughts?
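To make the context half of the sketch concrete, here is a standalone usage example (the helpers are copied from the snippet above, minus the dynamic-props wrapping, so this runs on its own):

```javascript
// Helpers from the sketch above, inlined so this example is self-contained.
let currentContext = {};
function createContext(initFn) { return { id: Symbol('context'), initFn }; }
function createProvider(context, value) {
  return currentContext[context.id] = context.initFn ? context.initFn(value) : value;
}
function lookupContext(current, key) {
  return current[key] || (current.parent && lookupContext(current.parent, key));
}
function useContext(context) { return lookupContext(currentContext, context.id); }
function wrapContext(fn) {
  currentContext = { parent: currentContext };
  const ret = fn();
  currentContext = currentContext.parent;
  return ret;
}
function createComponent(fn, props) { return wrapContext(() => fn(props)); }

// A provider registered in a parent frame is visible to components
// created further down, via the parent-chain lookup.
const Theme = createContext();
const Child = () => useContext(Theme).color;
const App = () => {
  createProvider(Theme, { color: 'blue' });
  return createComponent(Child, {});
};
console.log(createComponent(App, {})); // 'blue'
```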

ryansolid commented 5 years ago

I suppose I could wrap every prop in a function. It would still require consideration for HyperScript, but it would be consistent and I could have the compiler handle the spread operator better. Although I question the value of the spread operator in this scenario, since it isn't really dynamic on Components (only the keys present on the original run will be dynamic).

function wrapProps(props) {
  return new Proxy(props, {
    get(target, property) {
      // Every prop is assumed to be wrapped in a function, so invoke it.
      const value = target[property];
      if (value) return value();
    }
  });
}

function createComponent(fn, props) {
  return wrapContext(() => fn(wrapProps(props)));
}

So,

const view = <Comp name={ state.name } handleClick={ clickHandler } />

// compiles to
const view = createComponent(Comp, { name: () => state.name, handleClick: () => clickHandler, children: [] })

//used like
const Comp = props => <div onClick={props.handleClick}>{( props.name )}</div>

I guess the caveat with both of these approaches is that destructuring props would be a no-go for anything dynamic, since the getter triggers on property access. So again I'm questioning whether the Proxy magic is truly transparent in this scenario. I think in many ways this approach is more in line with what my thinking would be on first seeing the library, but I'd also immediately fall into trying to destructure and having it not dynamically update. Event handlers, etc., would work fine, but name in the example would not.

ie

// name would never update
const Comp = ({handleClick, name})=> <div onClick={handleClick}>{( name )}</div>

Although this is true of the Web Component solution too, and is kind of the cost of entry, I suppose. Probably a non-issue. Ultimately you are either stuck wrapping the dynamic props yourself in functions and accessing them that way, or having it happen automatically for you and losing the ability to destructure. I think the latter is what I'd expect as a new user and is less of a hassle in most cases, but I can see how it could be completely unnecessary for someone who knew what they were doing.
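The destructuring caveat is easy to demonstrate with the all-props-wrapped Proxy (redefined here so the example runs standalone; `state` is a plain object standing in for reactive state):

```javascript
// Proxy from the snippet above: every prop is stored as a function and
// invoked on property access.
function wrapProps(props) {
  return new Proxy(props, {
    get(target, property) {
      const value = target[property];
      if (value) return value();
    }
  });
}

const state = { name: 'John' };
const props = wrapProps({ name: () => state.name });

// Destructuring reads the getter once, freezing the value at that moment.
const { name } = props;

state.name = 'Jane';
console.log(props.name); // 'Jane' — access through the proxy stays live
console.log(name);       // 'John' — the destructured copy never updates
```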

brodycj commented 5 years ago

What about React Native?

ryansolid commented 5 years ago

What about React Native?

That's a fair question. The state part of the library is React compatible (react-solid-state) and could potentially work with React Native (I'm currently investigating). The renderer as presented here is not.

While the state management solution benchmarks favourably against something like MobX, the majority of Solid's performance comes from a completely different approach to rendering (one that could also benefit MobX). While I've tentatively looked at things like NativeScript and see some potential there, I think it is going to be a much bigger undertaking, since the approach to rendering is significantly different from most other solutions that have JS/native bridge APIs. On the plus side, given the way I've approached this stuff, if I find a solution it would benefit more than this library: these techniques could potentially allow libraries like MobX to work without React, or give any sort of fine grained library (like Knockout) a mobile solution.

For now I think PWAs and Web Components are probably the tack this library will take for mobile in the near term, given my limited resources as largely a single contributor. While the Web rendering has really good performance, I am still going to focus on improving it over the next several months, and on investigating how I can continue to improve the Web Component development experience.

ryansolid commented 5 years ago

Back on task: the more I think about it, the more I like the idea I posted above in https://github.com/ryansolid/solid/issues/18#issuecomment-478050373. Basically, at any point you can always just call a function, so I think people coming from a React perspective will expect JSX tag Components to be contextual (have context).

What I mean is that the overhead is completely opt-in. If you didn't want to create these components, you could always still just use a function call:

const Child = ({ getName }) => (
  <div>{( getName() )}</div>
);

const App = () => {
  const [state, setState] = createState({ name: 'John' });
  return Child({getName: () => state.name});
}

I think the fact that functions are just templates can still be powerful without using JSX Component syntax. You can wrap these calls in expressions in other Elements or Fragments, so it's just as flexible, just not as clean. But this allows for having Components as well, which lets you:

const Child = props => (
  <div>{( props.name )}</div>
);

const App = () => {
  const [state, setState] = createState({ name: 'John' });
  return <Child name={state.name} />
}

This is a potential breaking change for all the libraries that depend on the Babel Plugin, so I need to spend some time with it. The other consideration is the role of HOCs. It's pretty trivial to compose plain functions. While the same is true for these Components, there is a real overhead (multi-depth Proxies). I might need to expose access to the setters in the Proxy so that, except for the outer call, we can use simple function composition and only take the Component overhead once for all layers of mixin. Sure, with a Hook-like interface it's less of a concern, but I need to spend some time working through these patterns as well before I can settle on this decision.


EDIT: This is still not a full solution for Dependency Injection. While the initial construction works fine, as soon as you hit dynamic content like loops/conditionals it falls apart. An asynchronously fired event will no longer carry its context, and there is no static tree to do lookup against. There might be potential in hooking into control flow, but that discounts the ability to do custom flow easily. I will probably have to look at completely different patterns here.

ryansolid commented 5 years ago

So I started implementing solutions and I noticed a couple of things, so I wanted to update this thread. Where I'm sitting right now, this is likely what I'm implementing at this time.

Components

I looked at where I was using spreads; mostly it was to pass down handler functions. The thing is, spreading over state is an anti-pattern. Observable objects trigger based on property access, so it is always a ton of work to wrap and unwrap; it makes more sense to pass the whole state object. Similarly, as soon as you pick out certain properties you aren't spreading. So largely I'm leaving spreads out of consideration. At that point it's easy to indicate which properties need to be wrapped. We can follow the same convention as native elements: {( )}. It's consistent and it is completely opt-in. By setting wrapped keys explicitly I can use a getter instead of a Proxy, which keeps backward compatibility for non-ES6 platforms. There is no weirdness around wrapping unnecessary fields. Largely these Components just remain a function call. There is no overhead or performance hit if you don't use the feature. Basically this becomes a non-breaking change.

const Child = props => (
  <div onClick={ props.handleClick } >{( props.name )}</div>
);

const App = () => {
  const [state, setState] = createState({ name: 'John' }),
    clickHandler=() => console.log('Clicked');
  return <Child name={( state.name )} handleClick={ clickHandler } />
}
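The getter idea can be sketched standalone (illustrative only, not Solid's actual compiled output; `state` here is a plain object standing in for reactive state):

```javascript
const state = { name: 'John' };
const clickHandler = () => console.log('Clicked');

// What a compiler could emit for
// <Child name={( state.name )} handleClick={ clickHandler } />:
// wrapped keys become ES5 getters, so each access re-reads the expression,
// and no Proxy (and hence no ES6 runtime support) is required.
const props = {
  handleClick: clickHandler,          // static prop: passed through as-is
  get name() { return state.name; }   // dynamic {( )} prop: live on each read
};

console.log(props.name); // 'John'
state.name = 'Jane';
console.log(props.name); // 'Jane' — the getter sees the update
```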

Mostly I think this extends the mentality of Solid and how it differs from Surplus. In Surplus everything is generally live, and each new context creates a layer in the reactive tree. Every piece automatically updates and falls back to a higher context. Even if you make a mistake, things will update. This is the common mentality in fine grained libraries: if you wire it, it will update.

Solid, on the other hand, is mostly inert. Changes are whitelisted: you need to indicate if you want things to update. This means things may not update when mistakes are made, but it is less likely things will update unexpectedly. Admittedly there are still contexts that can cause unexpected updates, but wherever the library has the means, Solid will attempt to keep updates contained.

EDIT Released in Babel Plugin JSX DOM Expressions 0.5.2

Dependency Injection

I finally figured out exactly why this is so hard to do. By that I mean Hierarchical Container DI as found in React and Angular. It is easy to use other patterns: single global DI or the Service Locator pattern is fairly straightforward, but not worth making part of this library.

There are 2 solutions to Hierarchical DI in Solid as far as I can tell. Approach 1 is using the reactive graph. With a simple modification to S.js I can backtrace up Owner contexts. On the surface this is probably the most reasonable approach, if it weren't for asynchronous side effects. And maybe I'm worrying too much, but I hit this issue immediately when trying this approach: appending a Web Component to the page in a polyfilled environment is scheduled on a microtask instead of synchronously. I also hit it with a router doing async imports. I like how easy it is to do async rendering in Solid; having a solution that doesn't work nicely with it is too restrictive. There may be library-supported features to manage these tasks in the future, but it seems like I'd have trouble accounting for every scenario.

Approach 2 is building up a node tree. Components do this as they construct the Virtual DOM, and in a sense we do the same with the real DOM. However, the lack of lifecycles bites us here. See, React always re-renders the tree, so it knows its parents as it goes, even on updates. Web Components do most of their work in the connectedCallback, meaning the root is already connected to the DOM. A Solid Template fragment has no idea what it is going to be inserted into, so it basically goes down the tree building nodes and only attaches them as it comes back up. That means on initial construction I can't easily pull data from DOM nodes going up the tree (fragment support doesn't make this any easier). Without messing with the timing of the templates or inserting more unnecessary elements, I don't see this working.

For now I think I need to leave this in the domain of 3rd party libraries or the Solid Components solution.

ryansolid commented 5 years ago

Spreads

While I haven't had many use cases myself, since I've been building for performance, spreads for transparent prop passing in HOCs are a thing. For that reason I need to support dynamic (wrapped) spreads. I realized that I can use the same wrapping-parenthesis trick to inform the compiler. For example:

const MyHOC = Child => props => {
    // Do some logic .....
    return <Child {...(props)} addedProp={(state.data)} />
}

To be consistent, I think all spreads should be static (first render only) unless they are wrapped in parens, for both Components and native elements. Essentially:

const view = <div {...firstRender} {...(dynamicUpdate)} />

forwardRef

This one is a little tricky, since all Components return is their rendered DOM elements. I'm thinking of piggybacking an element reference on the returned element under a special key. The only downside of this approach, besides some complication in compilation, is that it would only be applicable on first render and not under dynamic updates, since the parent context would only read the reference once. So if the ref inside the Component was on a node in a conditional, causing that node not to render initially, it would never forward the ref properly.

The solution could be to change how Refs work. Currently they are just simple, normal variables. If refs were identity functions or specialized objects like React's, we could dynamically update them. The thing is, outside of the forwardRef scenario there is no reason to do anything so fancy with refs. At that point you are essentially just passing props that keep their reference. Using either a function or an object to set a key on is the same deal. The only benefit of making this a thing is for consistency's sake.

I mean, I could hide these details with the compiler by:

  1. Always compiling refs on Components to functions with assignment
  2. Reserving a forwardRef prop to call said function with reference
const Child = props => <div forwardRef={props.ref} />

const Parent = () => {
  let ref;
  return <Child ref={ref} />
}

// Becomes
const Child = props => {
  const $el = $tmpl.content.firstChild.cloneNode(true);
  props.ref($el);
  return $el;
}

const Parent = () => {
  let ref;
  return Child({ ref: r$ => ref = r$ })
}

The benefit is that the end user never thinks of this as more than a simple assignment, and there is no specialized object to worry about. The HyperScript version already handles all refs with functions anyway, so this would keep things consistent-ish for those solutions (they wouldn't need forwardRef).

EDIT Both Features released in Babel Plugin JSX DOM Expressions 0.5.3

ryansolid commented 5 years ago

I think this thread has served its purpose, or as much as it can given my long monologue. In the future I will move ideas and discussions like this mostly into the community chat at https://spectrum.chat/solid-js/features