What type of PR is this? (Check all that apply)
[x] ✨feature: Introduces completely new code or new features.
[ ] 🐛fix: Implements changes that fix a bug. Ideally, reference an issue if present.
[x] ♻️refactor: Includes any code-related change that is neither a fix nor a feature.
[ ] ✅build: Encompasses all changes related to the build of the software, including changes to dependencies or the addition of new ones.
[ ] ⚡️test: Pertains to all changes regarding tests, whether adding new tests or modifying existing ones.
[ ] 🚰ci: Involves all changes related to the configuration of continuous integration, such as GitHub Actions or other CI systems.
[ ] 📚docs: Includes all changes to documentation, such as README files, or any other documentation present in the repository.
[ ] 🗑️chore: Captures all changes to the repository that do not fit into the above categories.
Description
What did you change? How did you change it?
I took Elliot's earlier script for generating large amounts of randomized seed data and adapted it to our current schema. The new script is called newGen.py and lives in packages/db/src/scripts/python; the .json files it generates are stored in packages/db/src/new-gendata.
I kept the old mock-data as well in case we want to use it for demonstrations. The updated script can generate a lot of data, but that data isn't particularly plausible yet. If I get the chance tomorrow or Tuesday, I will manually update much of it, especially the exercises, to make it more realistic. I could update the script itself instead, but that would take more time than we likely have.
Tests
How was this tested?
[ ] Unit tests
[ ] Integration tests
[ ] E2E tests
[x] Manual tests
[ ] Tests were NOT needed
[ ] Other (explain below)
[Optional] Screenshots
I ran pnpm db:seed at the repository root and confirmed that my database instance contained the new schema and data.
To test the script manually, run it locally (you may need to install the faker library) and inspect the generated JSON files. Seeding faker differently produces unique results.
Documentation
[ ] Added to README.md
[ ] Separate document
[x] NO documentation needed
Link to external documentation:
[Optional] Are there any post-deployment tasks we need to perform?
The major remaining to-do is making the data more plausible. As noted above, I will likely get to it after finishing my current frontend issue. It will most likely have to be done manually, since auto-generating natural language is difficult, to say the least. ChatGPT might help here by generating many unique workout descriptions, names, etc.