tmeasday / storybook-skeleton


feat - Add sideloading for stories #28

Open bebraw opened 3 years ago

bebraw commented 3 years ago

I added the following endpoints to wds/wps:

To test:

  1. Run yarn webpack serve --config ./skeleton/webpack/webpack.config.js in storybook-skeleton/examples/template
  2. Examples to try: http://localhost:5000/api/stories, http://localhost:5000/api/story?id=example-page--loggedout

TODO:

Open questions:

Closes #22.

bebraw commented 3 years ago

@tmeasday This is good for an early look. Sorry it took a while.

I'll add change tracking and the websocket bit next. There are some questions on the Storybook side that I've outlined above.

tmeasday commented 3 years ago

@bebraw I am away this week but will try to take a look soon.

@shilman -- you probably want to have a look.

bebraw commented 3 years ago

@tmeasday No probs. Take care. :)

bebraw commented 3 years ago

I added a simple change tracker to the PR.

Is there a way for us to manipulate the Storybook manager from the skeleton? I think it's the composing-storybook bit, and we'll want to adjust it to use the new endpoints and the websocket (to be added).

bebraw commented 3 years ago

I had a quick call with @shilman. Here are the key points:

tmeasday commented 3 years ago

production + development - Since the format is the same for both, there's an opportunity to write the information as a side effect during development to avoid processing during production. This isn't essential, though, and it may be fast enough simply to regenerate the data on a static build.

I think it is typical for prod + development to happen in different places (CI vs developer machine), so I am not sure the potential pitfalls with this approach would really be justified. My 2c.

bebraw commented 3 years ago

I think it is typical for prod + development to happen in different places (CI vs developer machine), so I am not sure the potential pitfalls with this approach would really be justified. My 2c.

Yeah. If there's some cost to generating the metadata, then the classic approach of pushing the metadata to S3 or similar after the build and pulling it for the next build would be better. In that case there's a cache invalidation issue to consider, and we don't have to touch that problem at all for now.

Do we think watchpack is the best tool for the job here? It probably makes sense to use the same tech (and perf profile and bugs etc) as webpack uses, but I wonder what other options exist.

Likely the choice won't matter as long as it's known to be stable and provides basic FS triggers. In some environments you don't necessarily even need an external package for this. I know webpack had some issues with chokidar, which is why they moved to something custom of their own. I would say let's start with watchpack, as it seems to work for webpack itself, and change if needed.

Are we just going to add the CSF files themselves / the dirs that contain them to the list of watched files?

i. For the "static" stories.json generation approach, this is fine.
ii. For more dynamic things that cross file boundaries (e.g. react-docgen, especially with typescript), we are going to need to also add dependencies to the list. For instance, if Button.stories.tsx imports Button.tsx, we will want to watch Button.tsx, right? Or if not, we should think about the implications of that.

I went with a naïve approach of watching the project root directory. The problem I can see is that if you refer to something outside of it, it will fail. Most likely we can pull this information from something like tsconfig, or wherever developers usually define it, and it makes sense to expose a couple of properties for adjusting the behavior.

I'm still worried that we are going to end up re-implementing some of the harder bits of webpack here ;)

It's unavoidable that there will be some overlap. At the same time, this work should decouple us enough from webpack to allow the use of other tools (snowpack etc.). Right now the related functionality is coupled to webpack plugins, and that's the dependency to break.

I was thinking of a related use case just a while ago. Consider type parsing. There we have the problem of different environments (React, Vue, others). To solve this at the metadata server, each target needs its own parser, but as long as each emits the same format, a single frontend (I think this is what you call a plugin) can render any of them.

With change tracking, we can handle data updates locally within the widget showing the types: when something changes, the widget gets notified with the new data and refreshes itself automatically.

Another thing that's likely possible on top of this is interrogating the types for more information (maybe via another endpoint) so you can make the user interface more interactive. It feels like something that would grow naturally out of the API approach.

tmeasday commented 3 years ago

@bebraw I had a short discussion with @shilman earlier and I think we should document it here.

So I am concerned about the approach of adding the entire workspace to the watcher; we've seen with webpack that this can cause significant rebuild delays -- however, perhaps that is more based on what WP does next when a change happens (rescanning the glob, rebuilding the require context) rather than the watcher itself. We should investigate further.

In either case, there is going to be significant complexity with file watching and/or caching when you consider dependencies. Consider the case of generating the argTypes for a TypeScript component (I think this is what you mean by "type parsing", but I am not quite sure). You might have a series of dependencies like:

Button.stories.tsx => Button.tsx => Button.props.ts => SomeEnum.ts

If any of those files change, we need to recalculate the argTypes for Button. We may or may not need to know about the dependency chain when adding file/dir watchers also. Even if we just watch the entire workspace, we'll need to understand the chain to know when to invalidate the cache for the argTypes for Button and emit (i.e. if X.ts changes, which component's argTypes need to be recalculated?)
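
To make the invalidation question concrete, here is a minimal sketch of the bookkeeping it implies (all names hypothetical), assuming some other step has already extracted each story file's transitive imports:

// Reverse map from a dependency file to the story files whose metadata it affects.
const dependents = new Map<string, Set<string>>();

// Record the (transitive) dependencies discovered for one story file.
function recordDependencies(storyFile: string, deps: string[]): void {
  for (const dep of deps) {
    let set = dependents.get(dep);
    if (!set) dependents.set(dep, (set = new Set()));
    set.add(storyFile);
  }
}

// On a watcher event: which argTypes caches are now stale?
function invalidate(changedFile: string): string[] {
  return [...(dependents.get(changedFile) ?? [])];
}

// Button's chain from above, flattened by whatever extracts the imports:
recordDependencies('Button.stories.tsx', ['Button.tsx', 'Button.props.ts', 'SomeEnum.ts']);
console.log(invalidate('SomeEnum.ts')); // ['Button.stories.tsx']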

I think this is a pretty tricky problem, especially when you consider all the different methods of importing/requiring etc -- this is one of my "are we reimplementing a key part of webpack?" moments.


Here's a proposal that we discussed that might be worth considering as a first pass:

For /api/stories.json (the full list of story names for every component), set up a file watcher over all CSF files that match the stories glob (e.g. ./src/components/**/*.stories.tsx). When any matching file changes or is added/removed, mark the cache dirty/recalculate stories.json and emit an event on some socket/channel. This will trigger the manager to rebuild the sidebar with the added/removed stories.

For /api/stories/<id>.json (detailed data about a single component[1]), do not set up any file watchers or a channel.

  • Part of the design is that the UI only cares about this data for the currently rendered component
  • When that component's CSF file or any of its dependencies changes, we assume there will be some HMR event in the preview triggered by the builder (e.g. webpack5, vite, et al). [2]
  • That HMR event should ultimately tell the manager that the component has changed (details TBD), which triggers the manager to refetch the <id>.json with some header/query parameter to bust the cache.

We still need to set up our own watchers/channel for /api/stories.json, because the preview doesn't load the CSF files for any stories apart from the current component's, and so won't be HMR-ed if another CSF file changes/is added/etc.
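
As a rough sketch of that first pass, assuming chokidar as the watcher (it comes up later in this thread) and a hypothetical generateStoriesJson():

import chokidar from 'chokidar';
import { EventEmitter } from 'events';

// Stand-ins: the real channel and generator are whatever we end up building.
declare function generateStoriesJson(): Promise<object>;
const channel = new EventEmitter();

let cache: object | null = null;

// Serve from cache; regenerate lazily after a change marked it dirty.
async function getStoriesJson(): Promise<object> {
  return (cache ??= await generateStoriesJson());
}

chokidar
  .watch('./src/components/**/*.stories.tsx', { ignoreInitial: true })
  .on('all', () => {
    cache = null; // mark dirty; recalculated on the next request
    channel.emit('STORIES_CHANGED'); // the manager rebuilds the sidebar on this
  });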

[1] Should it be /api/component/<id>.json, I wonder?
[2] If not, the user has bigger problems than the metadata being out of date, and probably needs to hard-refresh anyway.

bebraw commented 3 years ago

Comments below.

So I am concerned about the approach of adding the entire workspace to the watcher; we've seen with webpack that this can cause significant rebuild delays -- however, perhaps that is more based on what WP does next when a change happens (rescanning the glob, rebuilding the require context) rather than the watcher itself. We should investigate further.

I believe webpack is doing far more than we are. File watchers often rely on native system events (very fast), and one thing watchpack provides is batching, so we can receive, say, one callback every 100 ms with all the changes that occurred within that window.
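
For illustration, the batching looks roughly like this with watchpack 2's aggregated event (the directory choice is an assumption):

import Watchpack from 'watchpack';

// watchpack batches raw FS events and emits one aggregated callback
// after the configured quiet period.
const wp = new Watchpack({ aggregateTimeout: 100 });

wp.watch({ directories: ['src'], startTime: Date.now() - 10_000 });

wp.on('aggregated', (changes: Set<string>, removals: Set<string>) => {
  console.log('changed:', [...changes], 'removed:', [...removals]);
});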

In either case, there is going to be significant complexity with file watching and/or caching when you consider dependencies. Consider the case of generating the argTypes for a TypeScript component (I think this is what you mean by "type parsing", but I am not quite sure). You might have a series of dependencies like:

Button.stories.tsx => Button.tsx => Button.props.ts => SomeEnum.ts

If any of those files change, we need to recalculate the argTypes for Button. We may or may not need to know about the dependency chain when adding file/dir watchers also. Even if we just watch the entire workspace, we'll need to understand the chain to know when to invalidate the cache for the argTypes for Button and emit (i.e. if X.ts changes, which component's argTypes need to be recalculated?)

I think this is a pretty tricky problem, especially when you consider all the different methods of importing/requiring etc -- this is one of my "are we reimplementing a key part of webpack?" moments.

The way it works now is that webpack itself tracks module dependencies and then, for type parsing, defers the work to TypeScript through a custom plugin (which is a little painful in itself :) ). As you pointed out, tracking dependencies can be tricky in this case.

Parsing types for a single component is way faster than parsing all of them (which is what happens when you run webpack dev server now), and one option would be to ignore this problem.

What I mean is: say you are looking at a component page with the documentation tab and the types visible, and the user makes some change to the file system. To keep it simple, we could refetch the type information in this case. It's possible the types didn't change, but if the cost of parsing the information for a single component is low, it won't matter.

That said, if performance turns out to be an issue, I would look into understanding TypeScript's parsing process better, since it definitely has to traverse the module graph and track this information. If we find a way to emit that information from it, we can be smarter about invalidation, but that feels like a secondary problem until we know this is the bottleneck; let's avoid premature optimization.

It's important to note that going through endpoints avoids the upfront cost of processing everything, which is what the webpack-based process incurs right now, and we also gain control over what happens and when, with something we understand instead of a black box. We're solving a much smaller problem than webpack.
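
A minimal sketch of that on-demand shape, using the /api/story endpoint from the top of this PR and a hypothetical parseComponentMeta():

import express from 'express';

// Hypothetical: parses argTypes/docs for a single component when asked.
declare function parseComponentMeta(id: string): Promise<object>;

const app = express();

// Nothing is processed upfront; the cost is paid per request,
// only for the component the UI is currently showing.
app.get('/api/story', async (req, res) => {
  res.json(await parseComponentMeta(String(req.query.id)));
});

app.listen(5000);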

For /api/stories.json (the full list of story names for every component), set up a file watcher over all CSF files that match the stories glob (e.g. ./src/components/**/*.stories.tsx). When any matching file changes or is added/removed, mark the cache dirty/recalculate stories.json and emit an event on some socket/channel. This will trigger the manager to rebuild the sidebar with the added/removed stories.

Yup, this makes sense.

For /api/stories/<id>.json (detailed data about a single component[1]), do not set up any file watchers or a channel.

  • Part of the design is that the UI only cares about this data for the currently rendered component
  • When that component's CSF file or any of its dependencies changes, we assume there will be some HMR event in the preview triggered by the builder (e.g. webpack5, vite, et al). [2]
  • That HMR event should ultimately tell the manager that the component has changed (details TBD), which triggers the manager to refetch the <id>.json with some header/query parameter to bust the cache.

We still need to set up our own watchers/channel for /api/stories.json, because the preview doesn't load the CSF files for any stories apart from the current component's, and so won't be HMR-ed if another CSF file changes/is added/etc.

Relying on HMR for the rest is fine too, and likely more efficient than fetching component-specific information on every change. I see the latter as a naïve starting point since it's simple to implement; if we find the simple approach doesn't work well enough, the problem can be solved per builder with its HMR mechanism (more work).

[1] Should it be /api/component/<id>.json, I wonder?

It's possible, yup. My understanding is that we should expose just enough API endpoints to replicate the current UI, i.e. fetching the sidebar + fetching data related to it based on navigation (+ the initial page based on the URL).

[2] If not the user has bigger problems than the metadata being out of date, and probably needs to hard-refresh anyway.

Yup, HMR mechanisms can be notoriously brittle and usually there's some case where they fail. That's why my current thinking is to do something as simple as possible (fetch on change) before diving into this swamp. :)

tmeasday commented 3 years ago

I think we are on the same page here. Let's start with the simplest possible mechanism (generate on request, no dependency tracking, no caching, no events, and so no filewatching) for metadata, one component at a time, and see how that performs.

For stories.json we will need a (pretty simple) filewatcher and event channel to ping when it changes. I'm OK with starting with watching the whole workspace but I would like to check how that performs on my example from https://github.com/webpack/webpack/issues/13636.

We also need to figure out what the channel between the web UI and the metadata server looks like. We currently have this slightly complex post-message based channel between the manager and preview in the UI. We could consider adding a "third spoke" to that channel somehow. The alternative would be to do something super simple (say a websocket, or SSE) and have both the manager and preview monitor it[1]

[1] Or, as a small tweak, have only one connect to it and proxy the events over the existing channel.
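
The "super simple" option could be as small as the following sketch, using the ws package (the library choice and port are assumptions, not anything we've settled on):

import { WebSocket, WebSocketServer } from 'ws';

// One socket that both the manager and the preview connect to.
const wss = new WebSocketServer({ port: 5001 }); // port is an assumption

export function broadcast(type: string): void {
  const message = JSON.stringify({ type });
  for (const client of wss.clients) {
    if (client.readyState === WebSocket.OPEN) client.send(message);
  }
}

// e.g. called by the stories.json watcher:
broadcast('STORIES_CHANGED');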

bebraw commented 3 years ago

For stories.json we will need a (pretty simple) filewatcher and event channel to ping when it changes. I'm OK with starting with watching the whole workspace but I would like to check how that performs on my example from webpack/webpack#13636.

That's an interesting one. My hope is that this work lets us bypass the issue, since we bypass webpack for stories.json generation, assuming that's a fast operation.

We also need to figure out what the channel between the web UI and the metadata server looks like. We currently have this slightly complex post-message based channel between the manager and preview in the UI. We could consider adding a "third spoke" to that channel somehow. The alternative would be to do something super simple (say a websocket, or SSE) and have both the manager and preview monitor it[1]

What are you using for the current implementation? I can add socket.io/sockjs bits for the server so we get the most naïve thing done specifically for stories.json.

When it comes to the manager portion, what's a good way to add the client changes (fetching stories, fetching a specific story, handling changes coming from a channel) there? Is it possible for you to bring the relevant code to this repository or should we hack around with something like https://www.npmjs.com/package/patch-package ? That would give a diff against node_modules.

tmeasday commented 3 years ago

What are you using for the current implementation? I can add socket.io/sockjs bits for the server so we get the most naïve thing done specifically for stories.json.

Currently the manager+preview use postMessage to communicate.

We also have a websocket channel implementation that is used by React Native (where the preview runs on device) for the same thing. It's pretty simple and I am not sure how robust it is or if that code is still used.

When it comes to the manager portion, what's a good way to add the client changes (fetching stories, fetching a specific story, handling changes coming from a channel) there? Is it possible for you to bring the relevant code to this repository or should we hack around with something like https://www.npmjs.com/package/patch-package ? That would give a diff against node_modules.

Hmmm.. I think probably the easiest thing to do is to use the actual SB monorepo to work on the manager. We can continue to use the skeleton to build the "next gen" preview (for now, although I am going to start work on a refactor in SB core soon).

I'll have a go at figuring out how to do that and post the instructions in a PR here.

bebraw commented 3 years ago

We also have a websocket channel implementation that is used by React Native (where the preview runs on device) for the same thing. It's pretty simple and I am not sure how robust it is or if that code is still used.

Ok, maybe we can piggyback on that. In general websocket code tends to be quite simple. Solutions like socket.io/sockjs provide features like long polling on top of that, but given we're targeting modern browsers we don't have to worry about legacy support so much (or do we?).

Hmmm.. I think probably the easiest thing to do is to use the actual SB monorepo to work on the manager. We can continue to use the skeleton to build the "next gen" preview (for now, although I am going to start work on a refactor in SB core soon).

I'll have a go at figuring out how to do that and post the instructions in a PR here.

Ok, cool. Maybe we can npm link against it or so and then work on a branch on the SB side.

tmeasday commented 3 years ago

Solutions like socket.io/sockjs provide features like long polling on top of that, but given we're targeting modern browsers we don't have to worry about legacy support so much (or do we?).

I think we need to support IE11, but I guess that supports websockets so we are probably fine there.

tmeasday commented 3 years ago

@bebraw -- ok that was easier than I anticipated.

You can work on a manager that's connected to the skeleton as follows:

  1. Run the skeleton (preview) on port 5000 as normal:

cd examples/template
yarn webpack serve --config ./skeleton/webpack/webpack.config.js

  2. Check out the monorepo, and bootstrap/set up as normal.

  3. Run the official storybook example, with the --preview-url CLI flag:

cd examples/official-storybook
yarn storybook --preview-url=http://localhost:5000/iframe.html

If you want to make changes to the manager code, you likely also want to run yarn build in the monorepo, in watch mode, with the packages you are working on selected.

bebraw commented 3 years ago

@tmeasday Cool, thanks. I'll give it a go (likely early next week).

bebraw commented 3 years ago

Thanks, this looks like something I can work against. 👍

bebraw commented 3 years ago

I set up a branch on Storybook to exercise the endpoints: https://github.com/storybookjs/storybook/tree/feature/skeleton-ui . Now it's fetching the initial data through /stories, and I'll add more there (file watching, fetching single story info, fetching type data).

bebraw commented 3 years ago

I've been thinking about the file watching portion. I have an initial watcher in place in this PR but there are still some questions left to answer.

1. Do you imagine all of the meta server code will eventually live in Storybook itself?

I.e. do we merge the code from here into the server bit there, or will the codebases remain somehow separate? I can see pros/cons to either. This isn't as important as the second question, though, and can be deferred for now. Likely I would merge eventually and maybe use the meta endpoints as part of the current development server.

2. What should the communication between the meta server and Storybook look like?

It's clear the skeleton can provide a websocket server to connect to, but the question is where to connect on the Storybook side. Can we use the lib/api portion and relay messages through that to the frontend, or should I go directly through the frontend (i.e. core-client)? In the lib/api case, I imagine the flow would look like this: meta server (skeleton) -> lib/api -> core-client.

In either case, core-client should take care of updating the sidebar on change. I think there will be weird corner cases like story removal to consider. Let's say you are looking at a story that was just removed; what should happen (redirect elsewhere)? Likely it's not enough to update just the sidebar, since the story view itself can contain information related to the removed story.

Then on top of that we have cases like updates to types (the type parsing problem) but I imagine it's best to design that so that the meta type addon can deal with this independently (the widget rendering the types should receive a message and be able to update itself).

Regardless, I think we'll end up with a set of events that describe changes (i.e. STORIES_CHANGED, STORIES_REMOVED, STORIES_ADDED). In the beginning we can have a single STORIES_CHANGED event that gets triggered regardless of the type of change and carries the new stories data in its payload, as we don't have to be super efficient yet (a more granular system can be added once the basic setup is proven to work and things are in the right places).

tmeasday commented 3 years ago

  1. Do you imagine all of the meta server code will eventually live in Storybook itself?

I.e. do we merge the code from here into the server bit there, or will the codebases remain somehow separate? I can see pros/cons to either.

I would imagine the metadata server would be part of the SB monorepo. It would probably be its own package, although @shilman probably has more opinions on that.

It's clear the skeleton can provide a websocket server to connect to, but the question is where to connect on the Storybook side. Can we use the lib/api portion and relay messages through that to the frontend, or should I go directly through the frontend (i.e. core-client)? In the lib/api case, I imagine the flow would look like this: meta server (skeleton) -> lib/api -> core-client.

I'm not quite sure if you mean (a) where should the code live or (b) how should the data flow through the running SB.

So if you are asking (b), then I would say my current thinking is that both the manager and preview will independently connect to the metadata server to get the story list. So the metadata client code could live in core-client I suppose (actually in my current thinking the preview-side connection would happen in lib/client-api, which is a package used by core-client -- but that could be reworked).

I could be convinced otherwise if you think that's a bad idea. Keep in mind that it should ideally be possible to develop against the preview without a manager (i.e. if you open /iframe.html directly).

My understanding is that the core-client package is basically the root of the "SB web UI" both for manager and preview. I still find the package structure pretty confusing. Arguably the "connecting to metadata server" isn't a web-specific concern (i.e. for instance a React Native SB would still do it), so the metadata client code should live in a different package.

In either case, core-client should take care of updating the sidebar on change. I think there will be weird corner cases like story removal to consider. Let's say you are looking at a story that was just removed; what should happen (redirect elsewhere)? Likely it's not enough to update just the sidebar, since the story view itself can contain information related to the removed story.

I'll admit I am not totally sure what happens in SB6.3 in this scenario. Definitely a good one to think about. Will try and find out from product/designy folks what the ideal behaviour is in any case.

Then on top of that we have cases like updates to types (the type parsing problem) but I imagine it's best to design that so that the meta type addon can deal with this independently (the widget rendering the types should receive a message and be able to update itself).

We need to dig into this deeper, but like you say it's a fairly separate problem to the story list generation.

Regardless, I think we'll end up with a set of events that describe changes (i.e. STORIES_CHANGED, STORIES_REMOVED, STORIES_ADDED). In the beginning we can have a single STORIES_CHANGED event that gets triggered regardless of the type of change and carries the new stories data in its payload, as we don't have to be super efficient yet (a more granular system can be added once the basic setup is proven to work and things are in the right places).

Given this is all happening in local development, and the story list data is fairly small, I wonder if we even need to be that smart. I was thinking something as simple as a STORIES_CHANGED event (with no payload), and then the metadata client will just go and refetch the /stories.json URL to get all the data again.
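
A sketch of how dumb the metadata client could then be (rebuildSidebar and the socket endpoint are hypothetical):

// Browser-side metadata client: no payloads, no diffing; just refetch.
declare function rebuildSidebar(stories: unknown): void; // hypothetical manager hook

const socket = new WebSocket('ws://localhost:5001'); // endpoint is an assumption

socket.addEventListener('message', async (event) => {
  const { type } = JSON.parse(event.data);
  if (type === 'STORIES_CHANGED') {
    // The list is small, so fetching it all again is fine.
    const stories = await (await fetch('/stories.json')).json();
    rebuildSidebar(stories);
  }
});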

shilman commented 3 years ago

Yes the metadata server should live in the monorepo as @tmeasday suggested above. I don't think it makes a ton of sense on its own (at least not to start with -- maybe long term?), and having it in the monorepo makes it much easier to test & distribute with the rest of the changes on the manager/preview side.

I think stories.json generation is pretty core, but for the other stuff we'll need to come up with some kind of extensibility story (ultimately).

bebraw commented 3 years ago

So if you are asking (b), then I would say my current thinking is that both the manager and preview will independently connect to the metadata server to get the story list. So the metadata client code could live in core-client I suppose (actually in my current thinking the preview-side connection would happen in lib/client-api, which is a package used by core-client -- but that could be reworked).

I could be convinced otherwise if you think that's a bad idea. Keep in mind that it should ideally be possible to develop against the preview without a manager (i.e. if you open /iframe.html directly).

My understanding is that the core-client package is basically the root of the "SB web UI" both for manager and preview. I still find the package structure pretty confusing. Arguably the "connecting to metadata server" isn't a web-specific concern (i.e. for instance a React Native SB would still do it), so the metadata client code should live in a different package.

Ok, having it in its own package makes perfect sense, and we can expose it as middleware or similar that you can integrate into different environments, or even run standalone if preferred. The metadata bits feel orthogonal to the rest of the setup.

Having the client code in core-client sounds good to me, though now that I think of it, maybe it makes sense to keep it close to the server (same package) and pull it into core-client as a dependency.

In either case, core-client should take care of updating the sidebar on change. I think there will be weird corner cases like story removal to consider. Let's say you are looking at a story that was just removed; what should happen (redirect elsewhere)? Likely it's not enough to update just the sidebar, since the story view itself can contain information related to the removed story.

I'll admit I am not totally sure what happens in SB6.3 in this scenario. Definitely a good one to think about. Will try and find out from product/designy folks what the ideal behaviour is in any case.

Ok, maybe it's good to enumerate the different update scenarios (add, update, remove come to mind first) and specify what should happen for each. The remove case feels like the hardest; the rest seem less ambiguous.

Based on what I've seen so far when using Storybook, in the worst case I've had to restart the entire development server to pick up some stories but it's possible the behavior has been improved since.

Then on top of that we have cases like updates to types (the type parsing problem) but I imagine it's best to design that so that the meta type addon can deal with this independently (the widget rendering the types should receive a message and be able to update itself).

We need to dig into this deeper, but like you say it's a fairly separate problem to the story list generation.

Yup. 👍

Regardless, I think we'll end up with a set of events that describe changes (i.e. STORIES_CHANGED, STORIES_REMOVED, STORIES_ADDED). In the beginning we can have a single STORIES_CHANGED event that gets triggered regardless of the type of change and carries the new stories data in its payload, as we don't have to be super efficient yet (a more granular system can be added once the basic setup is proven to work and things are in the right places).

Given this is all happening in local development, and the story list data is fairly small, I wonder if we even need to be that smart. I was thinking something as simple as a STORIES_CHANGED event (with no payload), and then the metadata client will just go and refetch the /stories.json URL to get all the data again.

Yup, let's do STORIES_CHANGED for now as that's enough.

The more granular events come into play with the update behavior (point above) to avoid effort on the client (no need to work out what changed compared to the previous state, as the info is already provided).


Thanks for the comments. I'll continue from https://github.com/tmeasday/storybook-skeleton/issues/22#issuecomment-888283910 .

bebraw commented 3 years ago

I added a simple websocket server and client to this PR. They are separate from the rest of the work and can be run independently (I didn't worry about integration yet). There's also a small script to test the performance of stories.json generation.

bebraw commented 3 years ago

@tmeasday The additional work is in a good place to check. Change operations are now fast (8-30 ms), but it seems there's a delay on the file watcher side (seconds!).

We can also do a quick call next week where I can show how it works. There are initial instructions in the readme too.

For the file watcher, it would be good to understand what's making it slower than I would expect. We can also benchmark other options against watchpack, and I expect we'll see system-level differences here as well.

bebraw commented 3 years ago

I got a nice tip: FB's watchman (https://facebook.github.io/watchman/) would be worth a go if you are fine with it. It seems to defer to the system, and it's somewhat battle-tested by Jest and co. Let me know what you think.

bebraw commented 3 years ago

https://github.com/watchexec/watchexec is worth looking into too.

bebraw commented 3 years ago

Watchman comes with a big binary dep, so that's maybe a no-go here.

https://github.com/paulmillr/chokidar is the gold standard, so I would say let's benchmark against that. I wonder why webpack moved from chokidar to a custom solution of their own, though.

tmeasday commented 3 years ago

@bebraw have you had a chance to look any more closely at the state of the art for file watching?

It would be good to understand where the time is being spent.

bebraw commented 3 years ago

I added a simple benchmark for file watchers. All it does is touch a watched file and measure how long it takes for the change event to trigger (sketched below). It gave the following insights (both respond to your questions):
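
For reference, the touch-to-event measurement can be as small as this sketch, with chokidar standing in for whichever watcher is under test (the actual benchmark script may differ):

import chokidar from 'chokidar';
import { utimesSync } from 'fs';

const file = 'src/Button.stories.tsx'; // any watched file
const watcher = chokidar.watch(file, { ignoreInitial: true });

watcher.on('ready', () => {
  const start = Date.now();
  watcher.on('change', async () => {
    console.log(`change event fired after ${Date.now() - start} ms`);
    await watcher.close();
  });
  const now = new Date();
  utimesSync(file, now, now); // "touch": bump atime/mtime without editing
});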

tmeasday commented 3 years ago

Thanks for this @bebraw, very interesting!

The recommendation here is to offer polling as an option and not to have it enabled by default. It's only needed in some special cases (network file systems and the like).

Sounds great.

Therefore your recommendation of scoping watching at least to src seems smart and I suspect it makes sense to expose this as an option for power users that want more control.

My thought is that if the user has specified a stories glob like ../src/**/*.stories.js we can go ahead and scope it without requiring any special configuration from the user. We already have some glob handling code that we could generalize for this logic: https://github.com/storybookjs/storybook/blob/next/lib/core-common/src/utils/to-require-context.ts
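
For illustration, here is a sketch of deriving the watch scope from the glob with picomatch (an assumption for the sketch; the linked to-require-context code does its own regex conversion):

import picomatch from 'picomatch';

const storiesGlob = '../src/**/*.stories.js';

// scan() splits a glob into its static base and dynamic remainder,
// which is exactly what's needed to scope the watcher to ../src.
const { base } = picomatch.scan(storiesGlob);
const isStoryFile = picomatch(storiesGlob);

console.log(base); // '../src'
console.log(isStoryFile('../src/components/Button.stories.js')); // true
console.log(isStoryFile('../src/components/Button.tsx')); // false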

bebraw commented 3 years ago

My thought is that if the user has specified a stories glob like ../src/**/*.stories.js we can go ahead and scope it without requiring any special configuration from the user. We already have some glob handling code that we could generalize for this logic: https://github.com/storybookjs/storybook/blob/next/lib/core-common/src/utils/to-require-context.ts

Yeah, that sounds like a good way to go.

When it comes to watching, can you think of anything indirect beyond *.stories.js that the watcher should catch? Type parsing seems like an obvious one (maybe the type parser itself should tackle this?), but I wonder if there's something else like this, especially in the core.

tmeasday commented 3 years ago

When it comes to watching, can you think of anything indirect beyond *.stories.js that the watcher should catch?

Currently with our "static" story list generator (which is very simple, just reads the names of the exports and a couple of exported fields), it isn't possible for the output to be influenced by anything not within a CSF file.
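
A minimal sketch of that static technique, reading export names with @babel/parser without executing the file (not the actual generator, which also reads a couple of exported fields):

import { readFileSync } from 'fs';
import { parse } from '@babel/parser';
import traverse from '@babel/traverse';

// Statically list a CSF file's named exports (the story names).
// Function/class exports and re-exports are omitted for brevity.
function storyExports(file: string): string[] {
  const ast = parse(readFileSync(file, 'utf8'), {
    sourceType: 'module',
    plugins: ['typescript', 'jsx'],
  });
  const names: string[] = [];
  traverse(ast, {
    ExportNamedDeclaration({ node }) {
      if (node.declaration?.type === 'VariableDeclaration') {
        for (const decl of node.declaration.declarations) {
          if (decl.id.type === 'Identifier') names.push(decl.id.name);
        }
      }
    },
  });
  return names;
}

console.log(storyExports('src/Button.stories.tsx')); // e.g. ['Primary', 'LoggedOut']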

This precludes people doing dynamic things, as an example (of something you might want to do):

import { generateTitle } from './generateTitle';

export default {
  title: generateTitle(__filename),
}

(For this specific example we have added an automatic title generating mechanism in 6.3, which goes some way to making the above unnecessary)

For 6.4 we probably won't include any kind of "dynamic" stories list generation (at least this is our current thinking). The whole stories list mechanism will be opt-in, so people who are doing dynamic things will be unable to use it. We'll probably try to get an understanding of what folks will need it for before deciding whether we want to build it (dynamic stories list generation). Perhaps we can solve their problems in other ways.

Another common impetus is a programmatic list of stories, via our deprecated storiesOf() API:

import { storiesOf } from '@storybook/react';

const kind = storiesOf('Foo', module);
for (let i = 0; i < 100; i++) {
  kind.add(`Story ${i}`, () => { /* ... */ });
}

Currently it isn't possible to do anything dynamic like that in the CSF format. If we allow fully dynamic code like that in CSF somehow, it will very likely lead to all sorts of imports (and thus make the problem of properly watching for changes much harder) as people find complex ways to generate combinations of stories etc.

This sort of problem has tended to lead us in the direction of not supporting fully dynamic stories and again, looking to understand the use cases folks are hitting in writing their stories like that, and trying to support those use cases in declarative, static ways.

So I would say for now, we assume the only dependencies for the story list are the files matching the initial glob.


Type parsing seems like an obvious one (maybe the type parser itself should tackle this?), but I wonder if there's something else like this, especially in the core.

I wrote up an extended discussion about the complexities of tracking dependencies in type parsing, see: https://github.com/tmeasday/storybook-skeleton/issues/29

bebraw commented 3 years ago

So I would say for now, we assume the only dependencies for the story list are the files matching the initial glob.

Ok, that's good news.

It may still be good to consider the issue of story additions separately. In simple cases it would be enough to watch the directory containing the stories to tell that a story file may have been added, and then check the filename against the glob. In that case we can send a "file added" event with the related data (see the sketch below).
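
A sketch of that "file added" handling, reusing chokidar/picomatch and a stand-in channel (all assumptions, as in the earlier sketches):

import chokidar from 'chokidar';
import picomatch from 'picomatch';
import { EventEmitter } from 'events';

const channel = new EventEmitter(); // stand-in for the real socket/channel
const isStoryFile = picomatch('src/**/*.stories.tsx');

// Watch the containing directory; only additions matching the glob count.
chokidar.watch('src', { ignoreInitial: true }).on('add', (path) => {
  if (isStoryFile(path)) {
    channel.emit('STORIES_ADDED', { path });
  }
});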


I wrote up an extended discussion about the complexities of tracking dependencies in type parsing, see: #29

Cool. I commented there on a possible direction.