typegoose / mongodb-memory-server

Manage & spin up mongodb server binaries with zero (or slight) configuration for tests.
https://typegoose.github.io/mongodb-memory-server/

Occasional `StdoutInstanceError: Port "39697" already in use` with v10.0.0 #883

Closed: B4nan closed this issue 1 week ago

B4nan commented 1 month ago

Versions

package: mongodb-memory-server-core

What is the Problem?

After upgrading to v10, I've hit this error once in CI:

Starting the MongoMemoryServer Instance failed, enable debug log for more information. Error:
 StdoutInstanceError: Port "39697" already in use
    at MongoInstance.checkErrorInLine (/home/runner/work/mikro-orm/mikro-orm/node_modules/mongodb-memory-server-core/lib/util/MongoInstance.js:357:58)
    at MongoInstance.stdoutHandler (/home/runner/work/mikro-orm/mikro-orm/node_modules/mongodb-memory-server-core/lib/util/MongoInstance.js:336:14)
    at Socket.emit (node:events:517:28)
    at addChunk (node:internal/streams/readable:368:12)
    at readableAddChunk (node:internal/streams/readable:341:9)
    at Readable.push (node:internal/streams/readable:278:10)
    at Pipe.onStreamRead (node:internal/stream_base_commons:190:23)
Starting the MongoMemoryReplSet Instance failed, enable debug log for more information. Error:
 StdoutInstanceError: Port "39697" already in use
    at MongoInstance.checkErrorInLine (/home/runner/work/mikro-orm/mikro-orm/node_modules/mongodb-memory-server-core/lib/util/MongoInstance.js:357:58)
    at MongoInstance.stdoutHandler (/home/runner/work/mikro-orm/mikro-orm/node_modules/mongodb-memory-server-core/lib/util/MongoInstance.js:336:14)
    at Socket.emit (node:events:517:28)
    at addChunk (node:internal/streams/readable:368:12)
    at readableAddChunk (node:internal/streams/readable:341:9)
    at Readable.push (node:internal/streams/readable:278:10)
    at Pipe.onStreamRead (node:internal/stream_base_commons:190:23)
Error: Port "39697" already in use
    at MongoInstance.checkErrorInLine (/home/runner/work/mikro-orm/mikro-orm/node_modules/mongodb-memory-server-core/lib/util/MongoInstance.js:357:58)
    at MongoInstance.stdoutHandler (/home/runner/work/mikro-orm/mikro-orm/node_modules/mongodb-memory-server-core/lib/util/MongoInstance.js:336:14)
    at Socket.emit (node:events:517:28)
    at addChunk (node:internal/streams/readable:368:12)
    at readableAddChunk (node:internal/streams/readable:341:9)
    at Readable.push (node:internal/streams/readable:278:10)
    at Pipe.onStreamRead (node:internal/stream_base_commons:190:23)

Most likely the same as the initial issue in #827.

Code Example

Based on the docs, jest with global setup/teardown, nothing special.

https://github.com/mikro-orm/mikro-orm/commit/5dfe27aedb1e4ed19c72ad7714b4e189b6cfaf1b
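
Roughly, that setup looks like the following (a minimal sketch assuming the documented jest globalSetup/globalTeardown pattern; the file names and the `MONGO_URI` variable are illustrative, not the actual commit contents):

```ts
// globalSetup.ts (sketch; file name and env variable are assumptions)
import { MongoMemoryServer } from 'mongodb-memory-server-core';

export default async function globalSetup(): Promise<void> {
  const instance = await MongoMemoryServer.create();
  // keep a handle for teardown and hand the URI to the test workers
  (global as any).__MONGOINSTANCE = instance;
  process.env.MONGO_URI = instance.getUri();
}
```

```ts
// globalTeardown.ts (sketch)
import { MongoMemoryServer } from 'mongodb-memory-server-core';

export default async function globalTeardown(): Promise<void> {
  await ((global as any).__MONGOINSTANCE as MongoMemoryServer).stop();
}
```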

Debug Output

It didn't happen locally, and I don't have time to hunt it down in CI, so I added a manual retry instead (https://github.com/mikro-orm/mikro-orm/commit/677ff1b9013d7b403f46aa5e3f537fdd6e51c0f9).
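
The retry is roughly along these lines (a hypothetical sketch, not the contents of that commit; the helper name is made up):

```ts
import { MongoMemoryServer } from 'mongodb-memory-server-core';

// hypothetical helper: retry instance creation on transient startup
// failures such as the port collision above
async function createWithRetry(retries = 3): Promise<MongoMemoryServer> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await MongoMemoryServer.create();
    } catch (err) {
      if (attempt >= retries) throw err;
      // back off briefly; a new attempt picks a fresh random port
      await new Promise((resolve) => setTimeout(resolve, 500 * attempt));
    }
  }
}
```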

Do you know why it happens?

no

hasezoey commented 1 month ago

thanks for reporting. debugging these issues is quite hard as they are likely sporadic and can depend on a lot of external factors, so i don't know if this can be fixed "for good".

in any case, i have made 2 changes in the hope that they address what caused this issue this time; they are available for now in 10.0.1-beta.1. could you try running that for a while?


as an off-topic question regarding your workflow: why redirect the output with `> COVERAGE_RESULT` and then `cat` it later? to my knowledge the output from that is not actual coverage data (jest generates that in the `coverage` directory, which codecov uses), so it only makes the test logs harder to read?

Log example:

```txt
info - 2024-08-03 08:03:50,667 -- > /home/runner/work/mikro-orm/mikro-orm/coverage/lcov.info
info - 2024-08-03 08:03:50,667 -- > /home/runner/work/mikro-orm/mikro-orm/coverage/coverage-final.json
info - 2024-08-03 08:03:50,667 -- > /home/runner/work/mikro-orm/mikro-orm/coverage/clover.xml
```

B4nan commented 1 month ago

Thanks for the prompt blind fix. I've updated to the beta and removed the manual retries; I'll let you know if it happens again. The initial build passed just fine.

The `COVERAGE_RESULT` file is a forgotten thing from the past. It does include the coverage too, as it contains the whole output of `yarn test`, including the coverage report at the end, and it used to be forwarded to coveralls at a later stage. I've cleaned that up, thanks for mentioning it.


Btw, a related question: I tried using mongodb-memory-server-core since I already have a mongo instance running via docker, but in the CI logs I see it's not being used (I see `Downloading MongoDB "7.0.11"`). What am I missing? Do I need to set it up differently on linux? It works fine locally on macos.

hasezoey commented 1 month ago

> Btw, a related question: I tried using mongodb-memory-server-core since I already have a mongo instance running via docker

mongodb-memory-server downloads, starts, and manages the binaries itself; it does nothing with docker. The download step can be skipped with the config option SYSTEM_BINARY, which points it at an existing binary (e.g. one installed via apt). mongodb-memory-server is meant so that you can just clone the repo, install dependencies (via npm / yarn, etc.) and run the tests without further configuration (at least regarding MMS), consistently across all systems mongodb offers binaries for (i.e. no worrying about downloading the binaries, no forgetting to start them, no messing with docker; also everyone is very likely on the same mongodb version).
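
For illustration, a minimal sketch of the SYSTEM_BINARY route via the programmatic options (the binary path is an assumption; use wherever apt or brew installed mongod):

```ts
import { MongoMemoryServer } from 'mongodb-memory-server-core';

// point MMS at an already-installed mongod instead of downloading one;
// '/usr/bin/mongod' is an assumed path, adjust for your system
const server = await MongoMemoryServer.create({
  binary: { systemBinary: '/usr/bin/mongod' },
});
console.log(server.getUri());
```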

TL;DR: either use mongodb-memory-server OR a docker instance of mongodb

PS: i will let this issue stay open for a while (let's say a month) before closing.

B4nan commented 1 month ago

FYI I use this package only for transaction-related tests, since I never found a working docker setup for replica sets and this just works. But using it for everything yields much slower results on my end, at least locally. There are just a few tests that use transactions in mongo (like 3 or 4, vs ~700 of the rest), so I am more than happy using both this and the docker instance (docker is mandatory because of the other drivers anyway; this is just another service in the compose file). So I was thinking I could use the mongo instance I already have around; it shouldn't matter whether it's installed on the system or via docker, right?
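
For context, a minimal sketch of the replica-set flavour used for those few transaction tests (assuming the stock MongoMemoryReplSet API; a single member is enough for transactions):

```ts
import { MongoMemoryReplSet } from 'mongodb-memory-server-core';

// a single-member replica set is enough to make transactions work
const replSet = await MongoMemoryReplSet.create({ replSet: { count: 1 } });
const uri = replSet.getUri();
// ... run the handful of transaction tests against `uri` ...
await replSet.stop();
```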

hasezoey commented 1 month ago

> So I was thinking I could use the mongo instance I already have around; it shouldn't matter whether it's installed on the system or via docker, right?

if i understand correctly, then yes, you can use that single instance for your other tests, but mongodb-memory-server will not interact with it in any way (i.e. it won't make it part of a replica set).

> But using it for everything yields much slower results on my end, at least locally. There are just a few tests that use transactions in mongo

i would be a bit interested in why that is, so some questions:

also, what do you mean by "everything yields much slower results", exactly? each operation taking longer, just the whole setup / teardown, or literally everything about it?

B4nan commented 1 month ago

yes, yes, and i guess yes as well. that's a good point; i just tried to use the same connection uri in ~20 more tests and they started taking ages (10-20s each vs a second or two with native mongo). it's not the setup/teardown, nothing changed about that.

by everything i mean that tests which use mongo take much longer; i guess it adds up as i am running 8 workers locally. I'll try how it works if i create a separate single instance for the rest of the tests that don't need transactions.

B4nan commented 1 month ago

FYI this hasn't happened again since your fix.

B4nan commented 1 week ago

I'll close this one; it never happened again since your fixes. Thanks again!