Whenever commits are pushed to the `main` branch, GitHub will build our Jekyll site, as before. However, this custom Action will first run the three Wax tasks before building the site.
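For concreteness, here is a minimal sketch of what such a workflow could look like. This is an illustration, not the actual file: the collection name `documents`, the file path, and the final build step are assumptions, though the `wax:pages`, `wax:derivatives:simple`, and `wax:search` task names come from the wax_tasks gem.

```yaml
# .github/workflows/build.yml (hypothetical sketch, not the actual workflow)
name: Build site

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true   # runs bundle install and caches gems
      # Run the three Wax tasks before the Jekyll build.
      # "documents" is a placeholder for our actual collection name.
      - run: bundle exec rake wax:derivatives:simple documents
      - run: bundle exec rake wax:pages documents
      - run: bundle exec rake wax:search main
      # Build the site with the freshly generated pages, images, and index.
      - run: bundle exec jekyll build
```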
Proof of concept: in my personal fork of this repo, I removed all generated pages, images, and the search index, then ran the proof-of-concept GitHub Workflow to generate everything needed, build the site, and deploy it.
Next steps
Once this is merged into `main` and we verify that it's running correctly (i.e., that search and document pages are working), we can clean up the following:
- `_keywords/*.md`: autogenerated pages
- `_keywords/excerpts/*`: all files here should be kept, as they hold custom data
- `search/`: this folder can be removed
- `img/`: this folder can be removed
With the current behavior, pages don't get updated when the data in the CSV changes, because Wax tasks skip pages that already exist, regardless of whether they are up to date. This approach ensures that our site will always reflect the most up-to-date data from the CSV(s).
We will need to adjust our approach a bit when building out the site. Since these pages will no longer be part of the git history, whenever you pull changes from GitHub onto your computer, you'll have to make sure to (see the sketch after this list):
- Remove any current derivative files in these folders
- Rerun the Wax tasks to generate fresh derivative files
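As a rough illustration, that local reset could be scripted along these lines (a hypothetical helper; the folder names follow the cleanup list above, and `documents` is a placeholder collection name):

```sh
# clean-and-regenerate.sh: hypothetical helper to run after pulling changes.
# Remove stale derivatives (note: _keywords/excerpts/ is left untouched).
rm -f _keywords/*.md
rm -rf search/ img/

# Regenerate pages, image derivatives, and the search index.
# "documents" is a placeholder for our actual collection name.
bundle exec rake wax:derivatives:simple documents
bundle exec rake wax:pages documents
bundle exec rake wax:search main
```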
When we add the People Set to this site, we will need to update this YAML file to run the Wax tasks for that data too.
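Assuming the People Set becomes its own Wax collection (the name `people` below is a guess), that update might just be a couple of extra steps in the workflow:

```yaml
# Hypothetical additional workflow steps; "people" is a placeholder name.
- run: bundle exec rake wax:derivatives:simple people
- run: bundle exec rake wax:pages people
```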