Closed nelsonic closed 1 year ago
This appears to be the issue:
MALFORMED_ENV_FILE ./lib/api/localhost.json
{"value":2}
Error: Process completed with exit code 1.
Looks like this was last updated 2 weeks ago: https://github.com/dwyl/mvp/blob/main/lib/api/localhost.json
Attempting to revert to the previous version: https://github.com/dwyl/mvp/blob/ed665c5b9ad56eba39ae7e8f57b3118d315e4917/lib/api/localhost.json
Don't understand why the localhost.json is "malformed" ... the JSON is valid: https://jsonlint.com/
Tried to change this on my branch: https://github.com/dwyl/mvp/blob/5bbe3ff8417c66b58db7247806b4d071e41eb229/lib/api/localhost.json But it still fails: https://github.com/dwyl/mvp/actions/runs/4133676611/jobs/7143874195#step:9:23
Not going to spend any more time on this right now. ⏳ @LuchoTurtle please take a look and create a PR to fix. 🙏
The values of each key in the JSON file should be strings. Created #299 to fix this.
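For illustration only (the real keys in localhost.json may differ), the fix amounts to quoting the value shown in the error output above, turning the number into a string:

```json
{"value": "2"}
```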
Good catch. Thanks for fixing it. 👌
New versions of the MVP are not being deployed because the API tests are failing:
https://github.com/dwyl/mvp/actions/runs/4213589397/jobs/7328272540#step:9:493
@LuchoTurtle as discussed on our standup this morning, we need to investigate if it's possible to make outbound HTTP/REST requests to an external URL/Endpoint from GitHub Actions: https://github.com/dwyl/mvp/issues/315 🔍
I'm proposing a refactor to the CI files.
Deploying the review app is done in the review_app.yml file, which is an entire separate workflow. You can pause a workflow to wait for another by using https://github.com/marketplace/actions/wait-on-check. However, the review app should not be deployed when pushes are made to the main branch. This would make the code more difficult to manage.
Another issue is that I need seeds.exs to run so the API definition tests actually work.
Therefore, I suggest having two different files:

- PR_ci.yml, which runs on every PR that tries to merge into the main branch and runs build and test, deploy review app and api definition tests, in that order.
- main_yml, which simply deploys the app to fly.io. We could also run build and tests here, but since the main branch is protected (see image below), it's redundant: this work is already being done on the PR. Even if we wanted to do so here, I'm not entirely sure how I'd add the api definition tests in this file, since they need an endpoint.

I think this would also improve readability.
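As a minimal sketch of the proposed ordering (all job names and steps here are assumptions, not the final implementation), the three stages could be chained with `needs` in a single workflow:

```yaml
# Hypothetical PR_ci.yml sketch: three jobs run in sequence via `needs`.
name: PR CI
on:
  pull_request:
    branches: [main]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # ... mix deps.get, mix test, etc.
  deploy-review-app:
    needs: build-and-test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # ... run ./.github/scripts/review-apps.sh
  api-definition-tests:
    needs: deploy-review-app
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # ... run the API definition tests against the review app URL
```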
@nelsonic what do you think?
@LuchoTurtle no objection to reworking the CI files if that's needed to make this work.
API Tests should check for the presence of a branch that is not main and infer the Review App ID from the branch name to derive the URL of the Review App. e.g: mvp-pr-314 -> https://mvp-pr-314.fly.dev

My reading of seeds.exs as it stands right now is that none of these dummy records should be required for anything to work. It's just creating an item with tags and then starting a timer for that item.
All of this can (should!) be done via the REST API if it's required.
No tests should depend on the existence of these records.
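Assuming the review app name always follows the mvp-pr-&lt;PR number&gt; convention from the example above, deriving the URL could be sketched in shell like this:

```shell
# Hypothetical sketch: derive the Review App URL from the PR number.
# The "mvp-pr-<number>" naming convention is taken from the example above.
PR_NUMBER=314
APP_NAME="mvp-pr-${PR_NUMBER}"
REVIEW_APP_URL="https://${APP_NAME}.fly.dev"
echo "${REVIEW_APP_URL}"
```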
I thought I had responded to this but apparently not.
I did not want to have to run seeds.exs on the project for the API tests to work. The reason for this is that the hopps CLI does not execute the test suite in order of GUI appearance. The folder execution order seems to be random, whether a request is added or not.
Each folder seems to be run from last to first but, inside each folder, the tests run in sequence. This behaviour is not expected, so I tried going through the seeds route to make the test data more predictable.
Hence why I opened:
https://github.com/hoppscotch/hoppscotch/issues/2928
@LuchoTurtle with regards to Hoppscotch not running the API tests sequentially (https://github.com/hoppscotch/hoppscotch/issues/2928), that still feels like a pretty major feature-gap in their test runner.
I'd be very happy for you to spend a few hours writing some TypeScript to add a flag to their CLI (https://github.com/hoppscotch/hoppscotch/tree/main/packages/hoppscotch-cli) so that everyone using it can run tests sequentially when needed.
There are many use cases where the sequence of requests is essential.
Buuutt ... given that they are unlikely to respond to your issue any time soon. ⏳
It might be faster to create a controller in our MVP and an endpoint to seed the data.
I can pair with you on this if you're free.
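From the test pipeline's side, that could look roughly like the sketch below; note the /api/seed path is entirely hypothetical, since the endpoint does not exist yet:

```shell
# Hypothetical: call a (not-yet-existing) seeding endpoint on the review app
# before running the API definition tests. /api/seed is an assumption.
APP_URL="https://mvp-pr-314.fly.dev"
SEED_ENDPOINT="${APP_URL}/api/seed"
# curl -fsS -X POST "${SEED_ENDPOINT}"   # the real call, once the endpoint exists
echo "seed endpoint would be: ${SEED_ENDPOINT}"
```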
Despite having many stars ⭐ the Hoppscotch CLI is not very popular on NPM:
https://www.npmjs.com/package/@hoppscotch/cli
From our discussion on Standup sounds like you're going to investigate using "Pre-request Scripts": https://docs.hoppscotch.io/quickstart/scripts
Keep us posted with your progress. 👌
From what I've gathered from docs.hoppscotch.io/quickstart/scripts, you can't really make requests in Pre-request scripts, only set up env variables.
Therefore, in each folder, I need to create the necessary resources first, which might lead to some duplicated requests between folders.
Someone requested being able to make requests in Pre-request scripts in https://github.com/hoppscotch/hoppscotch/issues/2205#issuecomment-1085021876 but it seems that there isn't any update on this 😞
The tests now run properly on a clean database. But resources created on previous runs make them fail: the tests are not idempotent, nor will they ever be. For example, we have to create a new tag with new values every time; we can try to do so by giving it a random value (https://docs.hoppscotch.io/quickstart/scripts#generating-random-values-to-test-api), but this doesn't guarantee the values will always be unique.
All review apps use the mvp-db. Would it make sense to create a new database for each instance? Or should I go for the random-values approach (even though it will add test records to the mvp-db on each run)?
What do you think? @nelsonic
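To make the collision risk concrete, here is a shell sketch of the random-value naming approach (shell's $RANDOM stands in for Hoppscotch's random helpers; both draw from a small space, so repeated runs against a shared database can still collide):

```shell
# Hypothetical sketch: naming a tag with a random suffix, as the
# pre-request-script approach would. $RANDOM is only 0..32767, so
# uniqueness across runs is NOT guaranteed.
TAG_NAME="test-tag-${RANDOM}"
echo "${TAG_NAME}"
```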
@LuchoTurtle I suggested the idea of having a new/separate DB per review app when @SimonLab was setting up our review apps script. In the end we went with his suggestion of reusing the MVP DB for the review apps. I still agree, that's a reasonable approach as it minimises resources. What you are describing is an attempt to get around the issue of having known item.id and timer.ids so that the API tests will pass. That is quite brittle and will require truncating the DB for each new commit.
Please bear in mind that both item.id and timer.id will be CIDs in the "real" version of our API and APP SDK, so using integers is only temporary.
@nelsonic The problem is that there is no way of making these tests idempotent, because we have requests creating items or tags and using the returned id to perform other tests. By running each test, we are adding more data to the database with each commit, which leads to us having to truncate the DB, as you mentioned.
So, if I can't make the tests idempotent and we're keeping the mvp-db shared with PRs, I admit I have no idea how to turn this around without doing what we were doing before (having a temporary server running on localhost in the GitHub Action pipeline) and scrapping what I've done.
If you want to meet for a Zoom call, I can. I've been trying to have random values assigned in each pre-request test but I'm getting errors like TEST_SCRIPT_ERROR Script evaluation failed: [object Object]
...
We could go with the route of having an endpoint that seeds the database when called. But if that endpoint is open for anyone to call, why not seed the database regardless of whether it's in production or dev mode? If anyone can do it by calling the endpoint, we might as well just seed it straight away. Am I missing something?
This question was addressed yesterday in: https://github.com/dwyl/mvp/issues/315
But for completeness: There is always a way of making API requests idempotent.
If our API requests depend on a particular item.id being a specific integer, they are wrong.
They should receive the item.id or timer.id from the API response and use it in subsequent requests.
Please avoid making assumptions and then stating them as facts. 🙏
ALL ids will be cryptographically generated in the near future using cid.
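As a sketch of that id-chaining (the response body and endpoint paths below are illustrative, not the real API):

```shell
# Hypothetical sketch: take the id from the create-item response and reuse it,
# instead of hard-coding item.id. The JSON below stands in for a real
# `curl -s -X POST .../api/items` response.
CREATE_RESPONSE='{"id":42,"text":"example item"}'
ITEM_ID=$(echo "${CREATE_RESPONSE}" | sed -n 's/.*"id":\([0-9]*\).*/\1/p')
# The next request would then use the returned id, e.g.:
echo "POST /api/timers?item_id=${ITEM_ID}"
```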
Perhaps I used the word idempotent wrong. I was alluding to the fact that I can't execute the same test suites without adding more records to the database.
All "review app" instances are considered expendable. We can create as many records in them as we like; they will be deleted once no longer required. If anything we should be running load testing on them; i.e. creating tens of millions of records to test if there is a slow-down in the API once it has a decent number of records.
I agree. What I'm saying is that each review app shares the mvp-db, so any operations done in each commit will have an impact on the mvp-db.
This can be found in the review-apps.sh script:
# attach existing postgres
echo "attach postgres cluster - create new database based on app_name"
flyctl postgres attach "$FLY_POSTGRES_NAME" -a "$app"
Where $FLY_POSTGRES_NAME is passed as an argument in the yaml file as:
- name: Run Review App Script
run: ./.github/scripts/review-apps.sh
env:
...
FLY_POSTGRES_NAME: mvp-db
For sure. When we start building the "real" API we will be doing quite a few things differently, including having a "production" DB where no Devs have access and no "scripts" are executed. For now, the mvp-db is a general-purpose testing place. 👍
https://github.com/dwyl/mvp/actions/runs/4114299426/jobs/7143013781
@LuchoTurtle could you please investigate this when you're back online. 🙏