datakurre opened this issue 6 years ago
Hmm... this might be related to gatsby’s implicit graphql schema creation.
Would it help to have options for sample content?
I handled this in my private example project by having private “sample” content and filtering the final Gatsby site to include only public pages.
Possibly we could support reading such samples from the filesystem, but that would still require filtering them out later.
@datakurre So you mean dummy data that's loaded when there's no data available for a particular type or property?
@ajayns Yes. But it's better to keep that as simple as possible, because in any case the Gatsby site developer still needs to know to filter it out of the real content.
How about a feature to simply import a Plone JSON export from the filesystem?
We could do that after the recursive traversal import, because the solution would be similar: we could configure a single JSON file to import, and from that file (representing a Plone folder) it would know to recursively import all its children.
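To make the idea concrete, here is a rough sketch of such a recursive filesystem import. It assumes each exported JSON file represents a Plone folder whose `items` array either embeds child objects or references sibling JSON files; the helper and callback names are made up for illustration, not part of the plugin.

const fs = require('fs')
const path = require('path')

// Hypothetical helper: read one exported folder and recursively walk its children.
// `handleItem` stands in for whatever turns an exported object into a Gatsby node.
const importFromFile = (filePath, handleItem) => {
  const exported = JSON.parse(fs.readFileSync(filePath, { encoding: 'utf-8' }))
  handleItem(exported)
  for (const item of exported.items || []) {
    if (typeof item === 'string') {
      // Child referenced by a relative path to another JSON file.
      importFromFile(path.resolve(path.dirname(filePath), item), handleItem)
    } else {
      // Child embedded inline in the parent folder's export.
      handleItem(item)
    }
  }
}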
So I would recommend leaving this open for now. It's possible that there will be upstream solutions for specifying “mock” data for schemas so that it won't be visible to GraphQL queries.
If there's no solution, we could reuse the recursive import code to create an import from the filesystem.
Right now the workaround is to create private mock content in the site.
Update: private mock content for populating the Gatsby schemas did not work out, as it is too hard to remove all traces of private data from Gatsby.
Upstream issue https://github.com/gatsbyjs/gatsby/issues/3344 links to https://github.com/Undistraction/gatsby-plugin-node-fields
We might be able to automatically support gatsby-plugin-node-fields from type schemas.
But that seems to only fix normalizing the available fields on each node, not predefining the available node types.
So I'll continue by creating public sample content, but keep it out of navigation with Plone's “exclude from navigation” flag.
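For completeness, a hedged sketch of what filtering those samples out could look like in gatsby-node.js. The node type PloneDocument, the _path field and the exclude_from_nav filter are assumptions about how gatsby-source-plone exposes the Plone flag, and the template path is made up.

exports.createPages = async ({ graphql, actions }) => {
  // Only create pages for content that is not flagged "exclude from navigation".
  const result = await graphql(`
    {
      allPloneDocument(filter: { exclude_from_nav: { ne: true } }) {
        nodes {
          _path
        }
      }
    }
  `)
  result.data.allPloneDocument.nodes.forEach((node) => {
    actions.createPage({
      path: node._path,
      component: require.resolve('./src/templates/document.js'),
      context: { path: node._path },
    })
  })
}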
I've stumbled across a similar problem where mock data is needed when the remote database is not configured, which might be the case during local development. I've added a local lowdb database file to the project sources and pull its data using https://github.com/gutenye/gatsby-transformer-lowdb. Files generated from this DB are excluded from deployment to production.
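For reference, a gatsby-config.js sketch of that fallback: the checked-in lowdb file is only wired in when the remote database is not configured. The environment variable name, the paths and the exact gatsby-transformer-lowdb wiring are assumptions and would need to be checked against the plugin's README.

const useMockDb = !process.env.REMOTE_DB_URL // assumption: remote DB configured via env

module.exports = {
  plugins: [
    ...(useMockDb
      ? [
          {
            resolve: 'gatsby-source-filesystem',
            options: { name: 'mock-db', path: `${__dirname}/mock/db.json` },
          },
          'gatsby-transformer-lowdb',
        ]
      : [
          // the real remote database source plugin would be configured here
        ]),
  ],
}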
I'm reopening this, because GatsbyJS now has support for exporting and importing the schema (schema lockdown):
https://github.com/gatsbyjs/gatsby/pull/16291#issuecomment-518302621
With the latest GatsbyJS, these are the lines in gatsby-node.js that save a known good Plone type schema during development and load it during the build:
const fs = require('fs');
const PLONE_SCHEMA = 'plone-typedefs.graphql';
exports.createSchemaCustomization = ({ actions }) => {
  if (fs.existsSync(PLONE_SCHEMA)) {
    // Build: lock the schema to the previously saved type definitions.
    actions.createTypes(fs.readFileSync(PLONE_SCHEMA, { encoding: 'utf-8' }));
  } else {
    // Development: dump the inferred Plone type definitions to a file.
    actions.printTypeDefinitions({
      path: PLONE_SCHEMA,
      include: { plugins: ['gatsby-source-plone'] },
    });
  }
};
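With that in place, running gatsby develop against a site that still has all content types writes plone-typedefs.graphql once; keeping that file around (for example by committing it) makes subsequent builds use the locked schema, and deleting it regenerates the type definitions.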
I tried the plugin on an almost clean site, copied our test project as a base for my project, and hit a few issues:
If the Plone site did not yet (or no longer) have all the content types used in the GraphQL queries, I got GraphQL errors during the build, breaking it.
If e.g. a single Document on the Plone site was missing body text, I again got an error, because the GraphQL queries for documents expect them all to have text.
So, to make this useful and robust, we need to find a way to cope with incomplete content. It's not practical that, if e.g. all news items on a site are switched to private and none should be published any longer, the Gatsby build breaks until all references to news items have been removed from the code.
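One direction, building on the schema customization shown above, would be to declare the node types and fields the queries rely on explicitly, so the schema exists and the fields stay nullable even when the site currently has no such content. A minimal sketch; the type and field names here are assumptions rather than the plugin's actual schema:

exports.createSchemaCustomization = ({ actions }) => {
  // Declaring the types up front keeps queries for e.g. news items valid
  // even when no published news items exist at build time; undeclared
  // fields simply come back as null instead of breaking the build.
  actions.createTypes(`
    type PloneNewsItem implements Node {
      title: String
      text: PloneRichText
    }
    type PloneDocument implements Node {
      title: String
      text: PloneRichText
    }
    type PloneRichText {
      data: String
    }
  `)
}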