Open mrsimpson opened 2 years ago
We also noted that running tests from a package subdirectory fails (probably since jest is not a dev-dependency of each package). Finally, we noted that we cannot easily trigger a test with watch. Is there a parameter in pnpm which we need to set in order to pass the watch parameter through to the module?
In other words: How the heck do you test locally in an efficient manner?
/cc @jschirrmacher
hey @mrsimpson !
Repo maintainer here. I normally run tests from either the project root, or in the package directory (when I'm making a small change), as: pnpm install && pnpm build && pnpm test.
If the logs are too verbose, you can run QUIET=true pnpm test.
All you should need, outside of the npm dependencies, is a local installation of postgresql and redis, both running on default ports.
If your test run is still failing, can you post the full logs here? I'd be glad to help debug anything else that's missing.
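For the two backing services, a throwaway Docker setup could look like the sketch below. This is not part of the repo; the image tags, password, and file path are assumptions.

```shell
# Hypothetical compose file for the Postgres and Redis instances mentioned
# above, on their default ports. Image tags and the password are made up.
cat > /tmp/mds-compose.yml <<'EOF'
services:
  postgres:
    image: postgres:13
    ports: ["5432:5432"]
    environment:
      POSTGRES_PASSWORD: pwd
  redis:
    image: redis:6
    ports: ["6379:6379"]
EOF
# Then start both with: docker compose -f /tmp/mds-compose.yml up -d
```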
@nessur great to get to know you and thanks for the offered help.
I'll try to document what I understood and where things go wrong
As per my understanding (e.g. for mds-cache), pnpm build was not necessary in order to run tests, since ts-jest transpiles the files under test on the fly. Am I wrong about either of those? I am running a (compose-based) redis and postgres locally.
Running pnpm install && pnpm build && pnpm test on a832fdbb7824d7df24e0ba74d75b59d81f017478 results in the following errors (in order of appearance):
The database mdstest does not exist => ✅ CREATE DATABASE mdstest; – well, that was easy ;)
The pretest script fails:
@mds-core/mds-attachment-service@0.5.24 pretest .../mds-core/packages/mds-attachment-service
pnpm typeorm -- schema:drop
[...]
Error during schema drop:
error: password authentication failed for user "oliverjaegle"
=> typeorm doesn't know about the environment and doesn't properly connect to postgres => ❓ export $(cat ../../.env | xargs) && pnpm typeorm -- schema:drop fixes that one.
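The export $(cat ../../.env | xargs) trick simply turns every KEY=VALUE line of the file into an exported shell variable. A minimal self-contained sketch, using a temporary file in place of the repo's .env (the values are made up):

```shell
# Demonstrate how `export $(cat <file> | xargs)` loads an env file.
# /tmp/demo.env stands in for mds-core/.env; values are examples.
cat > /tmp/demo.env <<'EOF'
PG_NAME=mdstest
PG_USER=mdsadmin
EOF
export $(cat /tmp/demo.env | xargs)
echo "PG_NAME=$PG_NAME"
```

Caveat: this one-liner breaks on values containing spaces or on comment lines, but it is good enough for a simple KEY=VALUE file.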
sharp fails on arm64:
@mds-core/mds-attachment-service@0.5.24 test .../mds-core/packages/mds-attachment-service
DOTENV_CONFIG_PATH=../../.env jest --runInBand
FAIL tests/index.spec.ts
● Test suite failed to run
Something went wrong installing the "sharp" module
Cannot find module '../build/Release/sharp-darwin-arm64v8.node' from '../../node_modules/.pnpm/sharp@0.29.3/node_modules/sharp/lib/sharp.js'
=> skipping mds-attachment-service for now
mds-collector-service is the next one to fail, with the following logs:
FAIL tests/service.spec.ts (6.194 s)
{"level":"info","timestamp":"1648076291259","ISOTimestamp":"2022-03-23T22:58:11.259Z","namespace":"mds-service-helpers","message":"Terminating process @mds-core/mds-colle
Collector Service
✓ Service Unavailable (15 ms)
Repository Migrations
✓ Run Migrations (32 ms)
✓ Revert Migrations (219 ms)
Service Methods
✓ Register Schema (OK) (89 ms)
✓ Register Schema (Error) (34 ms)
✓ Get Schema (OK) (21 ms)
✓ Get Schema (Invalid schema_id) (42 ms)
✕ Write Schema Messages (OK) (29 ms)
✓ Write Schema Messages (Invalid schema_id) (17 ms)
✓ Write Schema Messages (Invalid provider_id) (4 ms)
✓ Write Schema Messages (Invalid message) (9 ms)
● Collector Service › Service Methods › Write Schema Messages (OK)
Error Writing Messages for Schema test
[...the coverage report...]
Ran all test suites.
{"level":"info","timestamp":"1648076291260","ISOTimestamp":"2022-03-23T22:58:11.260Z","namespace":"mds-rpc-common","message":"Stopping RPC server listening for application/grpc-web+json requests"}
{"level":"info","timestamp":"1648076291260","ISOTimestamp":"2022-03-23T22:58:11.260Z","namespace":"mds-repository","message":"Terminating R/W repository: collector"}
{"level":"info","timestamp":"1648076291260","ISOTimestamp":"2022-03-23T22:58:11.260Z","namespace":"mds-repository","message":"Terminating R/O connection: collector-ro-78251964-cb7c-4008-afb1-6d129906a845"}
{"level":"info","timestamp":"1648076291260","ISOTimestamp":"2022-03-23T22:58:11.260Z","namespace":"mds-repository","message":"Terminating R/W connection: collector-rw-78251964-cb7c-4008-afb1-6d129906a845"}
/Users/oliverjaegle/projects/deutschebahn/fermata/mds-core/packages/mds-collector-service:
ERR_PNPM_RECURSIVE_RUN_FIRST_FAIL @mds-core/mds-collector-service@0.4.24 test: `DOTENV_CONFIG_PATH=../../.env jest --runInBand`
Exit status 1
ELIFECYCLE Command failed with exit code 1.
ELIFECYCLE Test failed. See above for more details.
=> I try to analyze this single problem by descending into its directory and calling pnpm test
packages/mds-collector-service ‹a832fdbb●›
╰─$ pnpm test
[...]
ts-node ./node_modules/typeorm/cli.js "schema:drop"
sh: ts-node: command not found
=> As far as I understood, this is due to the sub-packages not having dev-dependencies of their own, thus no local node_modules is available. The node_modules folder is only resolvable when I run all the tests from the workspace root.
In other packages, it's jest which cannot be located:
/packages/mds-geography-api ‹a832fdbb●›
╰─$ pnpm test
[...]
sh: jest: command not found
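The mechanics behind these "command not found" errors can be illustrated without pnpm at all: run scripts find their binaries by prepending the nearest node_modules/.bin to PATH, and with hoisted dependencies that directory only exists at the workspace root. A toy reproduction with a fake workspace and a fake jest binary:

```shell
# Toy workspace: node_modules/.bin exists only at the root, as with hoisting.
mkdir -p /tmp/ws/node_modules/.bin /tmp/ws/packages/pkg
printf '#!/bin/sh\necho fake-jest ran\n' > /tmp/ws/node_modules/.bin/jest
chmod +x /tmp/ws/node_modules/.bin/jest

cd /tmp/ws/packages/pkg
# Called bare from the package dir, "jest" would fail with "command not found".
# Putting the root .bin on PATH (which running from the workspace root does
# for you) makes it resolvable:
PATH="/tmp/ws/node_modules/.bin:$PATH" jest   # → fake-jest ran
```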
Those are the main issues, more to come ;)
Regarding 3: this problem occurs in our special environment because we use the ignore-scripts setting in .npmrc to prevent scripts in some direct or indirect dependencies from running automatically. We normally solve that by adding npm rebuild <lib name> in a postinstall target in package.json, and will do that in our fork as well.
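A sketch of that workaround in package.json, assuming sharp is the dependency whose build script needs to run (the package name is an example, not an exhaustive list for this repo):

```json
{
  "scripts": {
    "postinstall": "npm rebuild sharp"
  }
}
```

With ignore-scripts still active for everything else, only the explicitly rebuilt packages get their install scripts executed.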
I don't have all the answers yet, but your DB config issue (2. pretest) is that PG_NAME is missing from your shell's ENV. My ~/.zshrc has:
export PG_HOST=localhost
export PG_NAME=mdstest
export PG_USER=mdsadmin
export PG_MIGRATIONS=true # so that `pnpm start` in a package will also run migrations on boot
where mdsadmin is a SUPERUSER account w/ no password. If your PG_USER value is empty, it'll use your default user oliverjaegle, which has a password, so you need to add export PG_PASS=something as well.
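Pulled together as an mds-core/.env sketch (the values mirror the exports above and are examples; PG_PASS only matters if your Postgres user has a password):

```shell
# Hypothetical mds-core/.env; dotenv KEY=VALUE syntax is also valid shell.
PG_HOST=localhost
PG_NAME=mdstest
PG_USER=mdsadmin
PG_PASS=something
PG_MIGRATIONS=true
```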
@nessur Thx, I'm well aware of that. And what you say actually confirms my guess: there is some external configuration (e.g. your zsh environment) which I don't have.
Since we already configure the environment in .env, I think it should be respected in the pretest as well. I created #962 for that purpose.
Not a bad suggestion. I don't love relying on both mds-core/.env and exports in my .zshrc file.
Can you try removing ignore-scripts from your NPM config? It seems like that could mess up package.json scripts, as mentioned here: https://stackoverflow.com/questions/59471962/how-does-npm-behave-differently-with-ignore-scripts-set-to-true
I removed the ignore-scripts setting (temporarily, we need this for security reasons) and re-triggered a pnpm clean.
The collector service tests still fail:
FAIL tests/service.spec.ts
│ Collector Service
│ ✓ Service Unavailable (11 ms)
│ Repository Migrations
│ ✓ Run Migrations (30 ms)
│ ✓ Revert Migrations (176 ms)
│ Service Methods
│ ✓ Register Schema (OK) (66 ms)
│ ✓ Register Schema (Error) (14 ms)
│ ✓ Get Schema (OK) (4 ms)
│ ✓ Get Schema (Invalid schema_id) (37 ms)
│ ✕ Write Schema Messages (OK) (18 ms)
│ ✓ Write Schema Messages (Invalid schema_id) (9 ms)
│ ✓ Write Schema Messages (Invalid provider_id) (4 ms)
│ ✓ Write Schema Messages (Invalid message) (4 ms)
│ ● Collector Service › Service Methods › Write Schema Messages (OK)
│ Error Writing Messages for Schema test
Remark: my containerized postgres does not allow for a passwordless login, but I use postgres/pwd in my .env file and this seems to work fine in other tests, so I assume this is not related.
If we know which scripts in dependencies we want to execute, it is still possible to do so by running npm rebuild <library name>, for example in package.json's postinstall script.
The can-i-ignore-scripts tool helps us figure out which scripts exist and which we want to run.
I removed the ignore-scripts (temporarily, we need this for security reasons) and re-triggered a pnpm clean. The collector service tests still fail:
My concern was that you were unable to execute pnpm test
within packages/mds-collector-service
. Sounds like that's resolved now?
Jest isn't great about printing verbose errors; specifically, it can hide error stacks that may be returned in an exception. Looking at the contents of 'Write Schema Messages (OK)', the only unique item that could fail here, but not cause other tests to also fail, is the check to instantiate a Kafka Producer.
If I try to make my tests fail this way, I can do KAFKA_HOST=xxyy pnpm test. This, however, produces a lot of hard-to-ignore errors in my test output, though it does produce the exact same list of successes/failures as you posted above.
Do you use an IDE that lets you set breakpoints and run an interactive debugger? In VSCode, I can set a breakpoint inside an exception handler (like packages/mds-collector-service/service/producer.ts:155) to see what error details are being squelched.
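For attaching VSCode to a single Jest suite in a pnpm workspace like this, a launch configuration along the following lines usually works. This is a generic sketch, not taken from the repo; the name, spec path, and env entry are assumptions. The important parts are launching Jest from the workspace root (where the hoisted node_modules lives) and --runInBand so breakpoints land in the same process:

```json
{
  "type": "node",
  "request": "launch",
  "name": "Debug Jest (collector-service)",
  "cwd": "${workspaceFolder}",
  "program": "${workspaceFolder}/node_modules/.bin/jest",
  "args": ["--runInBand", "packages/mds-collector-service/tests/service.spec.ts"],
  "env": { "DOTENV_CONFIG_PATH": "${workspaceFolder}/.env" },
  "console": "integratedTerminal"
}
```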
Re: 3, I have an M1 machine too, and I can't seem to reproduce... What version of node are you running? A version misalignment may cause this. The current tip of develop expects Node v16.14.2.
@avatarneil with respect to sharp: I am positive this has got something to do with disabling scripts. We'll figure out how to handle that in conjunction with pnpm.
For the moment, I cannot force it to re-install and execute the postinstall script. I'll look into that tomorrow; skipping it for now is ok, as it's a local issue. The other issues are more important.
EDIT: I removed the pnpm cache and re-installed. No clue why, but now the post-installation scripts are being executed:
node_modules/.pnpm/aws-sdk@2.999.0/node_modules/aws-sdk: Running postinstall script, done in 63ms
node_modules/.pnpm/protobufjs@6.11.2/node_modules/protobufjs: Running postinstall script, done in 68ms
node_modules/.pnpm/@datadog+native-metrics@1.1.0/node_modules/@datadog/native-metrics: Running install script, done in 838ms
node_modules/.pnpm/sharp@0.29.3/node_modules/sharp: Running install script, done in 6.6s
node_modules/.pnpm/@datadog+pprof@0.1.3/node_modules/@datadog/pprof: Running preinstall script, done in 79ms
node_modules/.pnpm/@datadog+pprof@0.3.0/node_modules/@datadog/pprof: Running preinstall script, done in 59ms
node_modules/.pnpm/@datadog+pprof@0.3.0/node_modules/@datadog/pprof: Running install script, done in 6.8s
node_modules/.pnpm/@datadog+pprof@0.1.3/node_modules/@datadog/pprof: Running install script, done in 6.8s
node_modules/.pnpm/dd-trace@1.5.0/node_modules/dd-trace: Running preinstall script, done in 77ms
node_modules/.pnpm/dd-trace@1.5.1/node_modules/dd-trace: Running preinstall script, done in 56ms
node_modules/.pnpm/nodemon@2.0.13/node_modules/nodemon: Running postinstall script, done in 200ms
@nessur
If I try to make my tests fail this way, i can do KAFKA_HOST=xxyy pnpm test. This, however, produces a lot of hard to ignore errors on my test outputs, though it does produce the exact same list of successes/failures as you posted above.
Thanks for that hint! In fact, there was a left-over in my .env from a previous issue (with Kafka) where I had set KAFKA_HOST="". I commented that out now (as well as a not-available NATS variable) =>
The tests (except the attachment-service, which still complains about sharp) are now executing successfully 🎉
Do you use an IDE that lets you set break points, run an interactive debugger?
This was the first thing I tried – developing without an interactive debugger is just not an option for me. I hear legends of trace-only developers, aka console-out-ftw, but I am afraid to meet one of that kind. They are supposed to be a scary crowd ;)
I am also using VSCode, but the breakpoints are not picked up. Running the tests from the test explorer fails (I assume that's an issue similar to 4) above: when executing a single test, the node_modules from the workspace root are not considered). If I set the BP and execute via pnpm t from the workspace root, it simply does not stop. Do I need to pass another parameter there?
@avatarneil current failure:
FAIL tests/api.spec.ts
{"level":"info","timestamp":"1648157231961","ISOTimestamp":"2022-03-24T21:27:11.961Z","namespace":"mds-api-server","message":"4d0687b8-f7a3-4a68-ad33-518c9a522e03 POST /col
Collector API
API Endpoints
✓ GET /collector/not-found (404 NOT FOUND) (32 ms)
✓ GET /collector/health (200 OK) (5 ms)
✕ GET /collector/schema/forbidden (403 FORBIDDEN) (43 ms)
✕ GET /collector/schema/forbidden (403 FORBIDDEN) (13 ms)
✓ GET /collector/schema/notfound (404 NOT FOUND) (8 ms)
✓ GET /collector/schema/test (200 OK) (5 ms)
✕ POST /collector/messages/forbidden (403 FORBIDDEN) (11 ms)
✕ POST /collector/messages/forbidden (403 FORBIDDEN) (3 ms)
✓ POST /collector/messages/notfound (404 NOT FOUND) (8 ms)
✓ POST /collector/messages/test (201 CREATED) (18 ms)
✓ POST /collector/messages/test (400 BAD REQUEST) (7 ms)
● Collector API › API Endpoints › GET /collector/schema/forbidden (403 FORBIDDEN)
expected 403 "Forbidden", got 404 "Not Found"
51 | : {}
52 | )
> 53 | .expect(code)
| ^
54 | ).toMatchObject(response)
55 | })
56 | }
at Object.<anonymous> (tests/api.spec.ts:53:14)
----
at Test.Object.<anonymous>.Test._assertStatus (../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:304:12)
at ../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:80:15
at Test.Object.<anonymous>.Test._assertFunction (../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:338:11)
at Test.Object.<anonymous>.Test.assert (../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:209:21)
at Server.localAssert (../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:167:12)
● Collector API › API Endpoints › GET /collector/schema/forbidden (403 FORBIDDEN)
expected 403 "Forbidden", got 404 "Not Found"
51 | : {}
52 | )
> 53 | .expect(code)
| ^
54 | ).toMatchObject(response)
55 | })
56 | }
at Object.<anonymous> (tests/api.spec.ts:53:14)
----
at Test.Object.<anonymous>.Test._assertStatus (../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:304:12)
at ../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:80:15
at Test.Object.<anonymous>.Test._assertFunction (../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:338:11)
at Test.Object.<anonymous>.Test.assert (../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:209:21)
at Server.localAssert (../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:167:12)
● Collector API › API Endpoints › POST /collector/messages/forbidden (403 FORBIDDEN)
expected 403 "Forbidden", got 400 "Bad Request"
71 | )
72 | .send(body)
> 73 | .expect(code)
| ^
74 | ).toMatchObject(response)
75 | })
76 | }
at Object.<anonymous> (tests/api.spec.ts:73:14)
----
at Test.Object.<anonymous>.Test._assertStatus (../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:304:12)
at ../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:80:15
at Test.Object.<anonymous>.Test._assertFunction (../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:338:11)
at Test.Object.<anonymous>.Test.assert (../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:209:21)
at Server.localAssert (../../node_modules/.pnpm/supertest@6.1.6/node_modules/supertest/lib/test.js:167:12)
● Collector API › API Endpoints › POST /collector/messages/forbidden (403 FORBIDDEN)
expected 403 "Forbidden", got 400 "Bad Request"
71 | )
72 | .send(body)
> 73 | .expect(code)
| ^
74 | ).toMatchObject(response)
75 | })
76 | }
P.S. Any functional reason to run the tests sequentially? As a lucky user of an M1, I prefer pnpm run:concurrent test 🚀 fail fast 💣😬
@avatarneil the above failure is due to my .env file containing VERIFY_ACCESS_TOKEN_SCOPE=false.
From what I can see, the tests have quite a few dependencies on the environment, which may differ from the interactive local manual test where you spin up the services.
In our projects we are using dotenv-flow, which supports .env[.production|develop|test][.local] files to distinguish the environments. However, other non-node scripts which rely on the environment would have to replicate this behavior.
Having written the above, I propose to use dotenv-flow instead of dotenv. I updated the PR and re-activated it.
@mrsimpson we run the tests sequentially to avoid suites potentially colliding when using shared resources, and invalidating other tests. For example, imagine tests from one package generate & persist a bunch of devices, and tests from another package expect there to only be one device returned from a query (or something along those lines).
Also, dotenv-flow looks great! Very interested in checking it out :)
@mrsimpson any progress on this? Or are you still stuck on the collector-service tests?
If there's more debug logging in the test run you can expose, perhaps I can offer more advice.
I am still stuck on running individual package tests. Most of the issues seem to have been env-related. Let me rebase the referenced PR. Once this is merged, I'll follow up on the rest of this issue, e.g. by providing a compose file which spins up an always-freshly-configured Postgres and Redis.
When running tests locally, there are a couple of issues which we encounter. We work around one of them by overriding the typeorm script: "typeorm": "export $(cat ../../.env | xargs) && ts-node ./node_modules/typeorm/cli.js". Other tests fail because expected responses (403) are not met. Is there something we need to configure locally in order to make the tests run?