boulepick closed this issue 2 years ago
@boulepick Hey, thanks for the detailed description. I have never used it with a MongoDB Atlas cluster. Since you see this difference between localhost and the cluster, it is somehow related to cluster specifics that `RateLimiterMongoDB` can't follow. As I can see from the info you provided, a new object is created on the first consume to the cluster: `upserted: new ObjectId("621fec0568d6587a94266266")`. If you could work out why the cluster response is different, it could help improve `RateLimiterMongoDB`.
Ok, so I did some digging but was not able to understand why the cluster responds differently, and I don't have time to dig into it further. However, after changing all the `this._collection.findOneAndUpdate` calls to `this._collection.updateOne`, it all worked without issue.
@boulepick The MongoDB docs say `updateOne` doesn't return the updated document, see here: https://docs.mongodb.com/manual/reference/method/db.collection.updateOne
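That difference matters because the limiter needs the post-upsert document back to compute remaining points. A minimal sketch of the two return shapes, using an in-memory stand-in for the collection (the fake below only imitates the driver 4.x result shapes described in the MongoDB docs; it is not the library's code):

```javascript
// Fake in-memory "collection" imitating the two driver methods' return shapes.
function makeFakeCollection() {
  const docs = new Map();
  return {
    // Like the driver: with returnDocument: 'after', the updated document
    // comes back in result.value.
    async findOneAndUpdate(filter, update, options = {}) {
      const doc = { ...(docs.get(filter.key) || { key: filter.key }), ...update.$set };
      docs.set(filter.key, doc);
      return { value: options.returnDocument === 'after' ? doc : null };
    },
    // Like the driver: updateOne reports counts only, never the document.
    async updateOne(filter, update) {
      const existed = docs.has(filter.key);
      const doc = { ...(docs.get(filter.key) || { key: filter.key }), ...update.$set };
      docs.set(filter.key, doc);
      return { matchedCount: existed ? 1 : 0, modifiedCount: existed ? 1 : 0 };
    },
  };
}

async function demo() {
  const coll = makeFakeCollection();
  const first = await coll.findOneAndUpdate(
    { key: 'ip:1' }, { $set: { points: 1 } },
    { upsert: true, returnDocument: 'after' });
  const second = await coll.updateOne(
    { key: 'ip:1' }, { $set: { points: 2 } }, { upsert: true });
  return { first, second };
}

demo().then(({ first, second }) => {
  console.log(first.value);  // { key: 'ip:1', points: 1 } -- the document itself
  console.log(second.value); // undefined -- updateOne never returns the document
});
```

So swapping `findOneAndUpdate` for `updateOne` may silence the error, but it leaves the limiter without the counter document it expects.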
Experiencing the same issue. Must be relatively new since this wasn't an issue for me last year.
@MRG95 Do you also use the latest `mongoose` to connect to MongoDB Atlas? If so, the issue is somewhere between them. It doesn't return a document on upsert, so `RateLimiterMongoDB` can't work with the empty data. You can find context in the issue description above.
@animir Yes, that's correct, since I only noticed it in a new project with everything updated. I haven't tested it on a self-hosted cluster, only Atlas for now. I'd be curious to see whether I get the same result as OP or whether it's actually just related to the mongoose version (I'm on 6.2.6). Thanks for the insight! I may consider downgrading mongoose for the time being.
Experiencing the same issue when using the latest mongoose, which pulls in the latest mongodb driver. This is a blocker for upgrading. Is there a workaround?
@ywilkof Do you also use MongoDB Atlas?
This is happening to me when using rate-limiter-flexible (2.3.6), mongoose (6.2.7), and mongo-memory-server (8.4.1). I get the exact same error message, and my solution is the same: downgrading to mongoose 5.x solves the issue. This leads me to assume it is not an Atlas issue specifically.
@ywilkof I tested rate-limiter-flexible (2.3.6) with `mongoose` 6.2.7 and 6.2.8 against MongoDB 5.0.6 locally. It works fine. Could you share your code for the mongoose connection creation and all `rate-limiter-flexible` and `mongoose` options?
FWIW I've just encountered the same issue with Atlas, but the issue goes away if I increase my `duration` to a really large number. With `duration` set to 1 second it doesn't look like anything is being written to the collection, but if I raise it to, say, 10000 then I can see the document being created in the collection. I wonder if it's a date/time issue?
It doesn't look like a date thing after all; it looks like `consume` rejects whenever there isn't a document in the collection.
For me the issue is occurring because `getDriverVersion` throws and falls back to the version `{ major: 0, feature: 0, patch: 0 }`, which means `upsertOptions.returnDocument = 'after';` is never set, which in turn means the document inserted by `_upsert` is not returned.
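A hedged sketch of that failure mode; the function name and branching below are illustrative, inferred from this comment, not copied from rate-limiter-flexible's source:

```javascript
// Illustrative version-gated option building: a detected driver version of
// 0.0.0 takes the legacy branch, so returnDocument: 'after' is never set.
function buildUpsertOptions(driverVersion) {
  const upsertOptions = { upsert: true };
  if (driverVersion.major >= 4) {
    upsertOptions.returnDocument = 'after'; // driver 4.x+ spelling
  } else {
    upsertOptions.returnOriginal = false;   // legacy pre-4.x spelling
  }
  return upsertOptions;
}

console.log(buildUpsertOptions({ major: 4, feature: 3, patch: 0 }));
// { upsert: true, returnDocument: 'after' }
console.log(buildUpsertOptions({ major: 0, feature: 0, patch: 0 }));
// { upsert: true, returnOriginal: false }
```

With the `0.0.0` fallback, a 4.x driver ignores the legacy `returnOriginal` flag, defaults to returning the pre-update document, and a fresh upsert therefore yields `null`, which is what the limiter chokes on.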
So I think I figured out the issue (at least in my case). It turns out I was initialising the middleware with `mongoose.connection` before `mongoose.connect` had resolved, so the client's topology wasn't present. I changed my initialisation order and it now works. 🤷
@glennflanagan I am glad you have it fixed. Could you confirm whether this wiki page https://github.com/animir/node-rate-limiter-flexible/wiki/Mongo has a bad example of preparing a connection with the mongoose package?
Yeah, that example is good. I was migrating an existing project to Atlas and I had the `mongoConn` exported as a module from another file. It was just slightly out of order 😔
@glennflanagan Thank you for the help. It is clear now that the bug is related to how `mongoose` version 6.x creates a connection: https://mongoosejs.com/docs/migrating_to_6.html#mongoose-connect-returns-a-promise It returns a promise, not a connection instance as before. It looks like many of us hit this with MongoDB Atlas simply because it takes longer to create a connection. If the promise is not awaited, the connection may not yet be established at the moment of the first consume, which can break things under the hood.
For everyone who experienced this issue, I updated the example for `mongoose` version 6.x: https://github.com/animir/node-rate-limiter-flexible/wiki/Mongo
With `mongoose` version 6.x it should be:

```js
try {
  mongoConn = await mongoose.connect(`mongodb://127.0.0.1:27017/${dbName}`);
} catch (error) {
  handleError(error);
}
```
Hi @animir, is there any other way to handle the promise in the library? If I handle it using the method you mention,

```js
try {
  mongoConn = await mongoose.connect(`mongodb://127.0.0.1:27017/${dbName}`);
} catch (error) {
  handleError(error);
}
```

Node.js reports: `Top-level 'await' expressions are only allowed when the 'module' option is set to 'es2022', 'esnext', 'system', or 'nodenext', and the 'target' option is set to 'es2017' or higher.`
@donaldkg Hey. This is not about this package, it is about how you structure your code. You can use old-school `then` / `catch` promise handlers. Or you can make a separate function `getMongoConnection` and use it in your project. Or you can create a separate script, place all the limiters there, and then export a `getMongoLimiter` function. Yes, you'll need to deal with promises anyway, but that's how `mongoose` works now.
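A minimal sketch of the `getMongoConnection` idea, with `mongoose.connect` replaced by a stand-in promise so the snippet runs without a database (all names here are illustrative, not part of either library's API):

```javascript
// Stand-in for mongoose.connect(): resolves with a fake connection object
// after a short delay, like a real network handshake would.
function fakeConnect(uri) {
  return new Promise((resolve) =>
    setTimeout(() => resolve({ uri, readyState: 1 }), 10));
}

let cachedConn = null;

// Wrap the connect call in an async function and reuse the resolved
// connection, instead of relying on top-level await.
async function getMongoConnection(uri) {
  if (!cachedConn) {
    cachedConn = await fakeConnect(uri); // real code: await mongoose.connect(uri)
  }
  return cachedConn;
}

// Old-school then/catch usage, which needs no top-level await:
getMongoConnection('mongodb://127.0.0.1:27017/limiter-test')
  .then((conn) => console.log('connected, readyState =', conn.readyState))
  .catch((err) => console.error(err));
```

Anything that needs the limiter (route handlers, middleware) can then call `getMongoConnection` and build the limiter inside the resolved `then`, which guarantees the connection exists before the first consume.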
There seems to be an issue when using a clustered connection from MongoDB Atlas. I tried the same setup both on a local server and a MongoDB Atlas server; there is always an error on the first consume when testing on the Atlas server.
Setup: MongoDB 5.0.6, Mongoose ^6.2.2, Node.js 14.18.1
When using the local server (localhost): in `RateLimiterMongo.js` line 14, the result value coming in from `this._collection.findOneAndUpdate` on the first consume looks like this
The second consume on localhost produces no error.
However, when testing with the MongoDB Atlas cluster I get a different result format, which generates the error.
MongoDB Atlas cluster: the first consume always throws an error; the second consume produces no error.
Am I doing this wrong or is there a bug?