# ainc-gatsby-sanity

A portfolio using structured content and a static site builder. Deployed from sanity.io/create.
## What you have
## Quick start

- Clone this repository from your GitHub account
- Run `yarn install` in the project root folder on your local machine
- Run `yarn run dev` to start the Studio and frontend locally **
- Run `yarn run build` to build for production locally

** Note: You may have better success opening two separate terminals and running `yarn run dev` in both `/studio` and `/web`.
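The two-terminal approach from the note above can be sketched as follows (this assumes the repo's `/studio` and `/web` folders each have their own `package.json`; adjust paths if your checkout differs):

```shell
# Terminal 1: start the Sanity Studio
cd studio
yarn install   # first run only
yarn run dev

# Terminal 2: start the Gatsby frontend
cd web
yarn install   # first run only
yarn run dev
```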
## Notes

- Having trouble with `yarn install`? Verified Node versions: 14.xx, 17.xx
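If `yarn install` fails on an unsupported Node release, a version manager such as nvm (an assumption here, not a project requirement) can switch you to one of the verified versions:

```shell
# Switch to a verified Node version with nvm, then retry the install
nvm install 14
nvm use 14
node --version   # expect a v14.xx.x release
yarn install
```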
## Workflow to create new documents for production

### Steps

See the [Import/Export Documentation](https://www.sanity.io/docs/migrating-data).
- Use your local machine to create the document on your personal tagged dataset [(commit for reference on how to switch tags)](https://github.com/ainc/ainc-gatsby-sanity/commit/83a6e89290f1b83a4fd9d0a0223cc858c05bca8b#:~:text=%3A%20%27beta%27-,graphqlTag,-%3A%20%27beta%27)
- Export from your tagged dataset, then import into the `dev` dataset using either the `--missing` or `--replace` flag [(documentation)](https://www.sanity.io/docs/importing-data#:~:text=tar.gz%20production-,Protip,-The%20import%20will)
- In the `dev` dataset, add content to your new document in the Sanity Dashboard
- Then export from the `dev` dataset and import into the `production` dataset using the `--missing skip` flag (adds any missing data, skips any data with the same IDs)
- You may have to run `sanity graphql deploy` to update the GraphQL API (after adding code in `/documents`)
- Yay, you're done... hopefully (refer to the commands below)
  - Note: these commands only transfer the content of the documents; you will still need to add the document code to the `studio/documents` folder
### Confirmed command sequence once a schema is made in your `tagged` dataset

These two commands export from your `tagged` dataset and import into the `dev` dataset:

- `sanity dataset export dev --tag [tagName] ./tagged.tar.gz` (export from your `tagged` dataset)
- `sanity dataset import ./tagged.tar.gz dev --missing skip` (import into the `dev` dataset; adds all missing data and skips any data with the same IDs)

After this step, populate the content in Sanity.
Then, these commands move your data from `dev` to `production`.

Export from `dev`:

- `sanity dataset export dev ./dev.tar.gz` (export from the `dev` dataset)

Create a backup of the `production` dataset (a possible GitHub Action):

- `sanity dataset export production ./production.tar.gz` (export from the `production` dataset)

Import from `dev` into `production`:

- `sanity dataset import ./dev.tar.gz production --missing skip` (import into the `production` dataset)
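Assuming the Sanity CLI is installed and you are logged in to the project, the promotion sequence above can be collected into one sketch script (the timestamped backup filename is an illustrative convention, not something the repo defines):

```shell
#!/usr/bin/env bash
set -euo pipefail

# 1. Export the dev dataset
sanity dataset export dev ./dev.tar.gz

# 2. Back up production before touching it (timestamped name is an assumption)
sanity dataset export production "./production-$(date +%Y%m%d).tar.gz"

# 3. Import dev into production, skipping documents whose IDs already exist
sanity dataset import ./dev.tar.gz production --missing skip
```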
## Sanity Workflow

Sanity runs into issues with work being overwritten when schemas are updated simultaneously on different branches. The ideal workflow for updating schema is as follows:

1. Plan out all necessary schema for development.
2. Add the schema and push to the main branch on GitHub before any changes are made by other users.
3. Other users should pull your schema changes before adding any new schema.
4. Continue front-end development against the already committed schema.

The entire goal is to eliminate concurrent development of Sanity schema, since concurrent changes will overwrite each other.
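The pull-before-edit rule in the steps above reduces to a plain git sequence (the branch names here are assumptions for illustration):

```shell
# Before adding any new schema, sync with the schema already on main
git checkout main
git pull origin main

# Then branch off for front-end work that only reads the committed schema
git checkout -b feature/my-frontend-change
```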
Other potential solutions: