ghost opened this issue 4 years ago
What exactly do you mean by dynamically generated?
As this plugin is primarily meant to generate speech output at build time, it cannot handle HTML generated at runtime.
However, it should indeed be able to include custom React components (with their initial state as it is at build time). Currently, the plugin skips React components embedded in the MDX files. This is because the MDX AST contains those components only as single nodes without any text children. To support them, we would therefore need to transform the mdAst (as used in `getSsmlFromMdxAst.ts`) to JSX, as is done in gatsby-plugin-mdx (see `mdx.js`). Then we could parse it, including all children, and generate the TTS files from it.
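A minimal sketch of why the current walk misses embedded components. The AST shape and the `extractText` helper below are illustrative assumptions, not the plugin's actual code: in an MDX AST, a custom component shows up as a single `jsx` node holding only a raw `value` string, so a plain text walk never sees its rendered content.

```javascript
// Illustrative mdast-shaped object (hypothetical sample data, not
// the plugin's real AST): the custom component is a single "jsx"
// node with no text children.
const mdast = {
  type: "root",
  children: [
    {
      type: "paragraph",
      children: [{ type: "text", value: "Welcome to our site." }],
    },
    // The embedded component: opaque to a plain text walk.
    { type: "jsx", value: "<UpcomingEvents count={3} />" },
  ],
};

// Recursively collect plain text; "jsx" nodes contribute nothing
// because they have no children and no "text" type.
const extractText = (node) => {
  if (node.type === "text") return node.value;
  return (node.children || [])
    .map(extractText)
    .filter(Boolean)
    .join(" ");
};

console.log(extractText(mdast)); // → "Welcome to our site."
```

This is exactly the gap described above: the component's upcoming-events text never reaches the speech output, which is why compiling to JSX first (and rendering it) is needed.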
Dynamically generated is probably the wrong word, sorry. Yes, I meant that by including custom react components the speech output should be generated for those as well. Thank you for the MDX information.
Just found out that gatsby-plugin-mdx does not count embedded JSX components toward its `wordCount` and `timeToRead` values either. So this seems like it would be something entirely new.
Also checked how we could pre-render those components to then extract the text elements. Seems like we would have to basically do the same as Gatsby does: server-side render the JSX generated from the MDX files and then parse the resulting HTML file...
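A rough sketch of the second half of that idea, assuming the server-side-rendered JSX is already available as an HTML string (e.g. from `ReactDOMServer.renderToStaticMarkup`). The `htmlToSpeechText` helper and the sample markup are hypothetical; a real implementation should use a proper HTML parser rather than the regex simplification shown here:

```javascript
// Extract readable text from server-rendered HTML for TTS
// generation. The regex tag-stripping below is a deliberate
// simplification for illustration; use an HTML parser in practice.
const htmlToSpeechText = (html) =>
  html
    .replace(/<[^>]+>/g, " ") // drop tags, keep text content
    .replace(/\s+/g, " ") // collapse runs of whitespace
    .trim();

// Hypothetical output, as it might come from rendering an
// upcoming-events component on the server:
const rendered =
  "<ul><li>Hiking day, 12 May</li><li>Cooking class, 19 May</li></ul>";

console.log(htmlToSpeechText(rendered));
// → "Hiking day, 12 May Cooking class, 19 May"
```

The open question from the discussion remains: whether doing this extra render-then-parse pass at build time is acceptable, or whether a lighter-weight option exists.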
Not sure if this is best practice or if we should look for another option to add TTS to custom elements?
Could you describe what kind of custom elements you'd like to embed?
For example, there's a component on the landing page of rgz-blind.ch that displays the next 3 upcoming events. It would be nice if this plugin could read those activities out as well.
I see. Is this data coming from an external service? We could build a solution where third-party content is transformed to speech output in a similar way to the local MDX files. But if it really is loaded dynamically (at runtime), we would have to move the TTS generation to runtime as well.
No, the activities are markdown files committed to git and edited using the Netlify CMS. They are gathered at build time.
Perfect! In that case I believe we can solve this using this approach:
> Also checked how we could pre-render those components to then extract the text elements. Seems like we would have to basically do the same as Gatsby does: server-side render the JSX generated from the MDX files and then parse the resulting HTML file...
This is an issue based on the discussion in https://github.com/flogy/gatsby-mdx-tts/issues/2.
Currently the plugin only creates speech output for markdown files. But it doesn't work for dynamically generated pages; even a simple contact form isn't supported. Perhaps there's a way of adding support for this?