animir / node-rate-limiter-flexible

Atomic counters and rate limiting tools. Limit resource access at any scale.

Mongodb: Cannot read property 'points' of null at RateLimiterMongo._getRateLimiterRes #161

Closed. boulepick closed this issue 2 years ago

boulepick commented 2 years ago

There seems to be an issue when using a clustered connection from MongoDB Atlas. I tried the same setup both on a local server and on a MongoDB Atlas server; there is always an error on the first consume when testing against the Atlas server.

Setup: MongoDB 5.0.6, Mongoose ^6.2.2, Node.js 14.18.1

const limiterSlowBruteByIPOps = {
  keyPrefix: '',
  storeClient: dbConnection,
  tableName: 'login_fail_ip_per_day',
  points: maxConsecutiveFailsByAccountAndIP,
  duration: 60 * 60 * 24 * 90, // Keep the fail count for 90 days since the first fail
  blockDuration: 60 * 60 * 24 * 365 * 20, // Block effectively forever after consecutive fails
};
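
For context, these options are passed to RateLimiterMongo and consumed per IP roughly like this (a minimal sketch of my usage, not exact code; dbConnection and maxConsecutiveFailsByAccountAndIP come from the setup above, and registerFailedLoginByIP is only illustrative):

const { RateLimiterMongo } = require('rate-limiter-flexible');

// Build the limiter from the options object above.
const limiterSlowBruteByIP = new RateLimiterMongo(limiterSlowBruteByIPOps);

// Consume 1 point per failed login attempt, keyed by client IP.
async function registerFailedLoginByIP(ip) {
  try {
    await limiterSlowBruteByIP.consume(ip);
  } catch (rlRejected) {
    if (rlRejected instanceof Error) {
      // Store-level failures (like the TypeError in this issue) surface here.
      throw rlRejected;
    }
    // Otherwise rlRejected is a RateLimiterRes describing the block.
    console.log('Blocked, retry after ms:', rlRejected.msBeforeNext);
  }
}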

When using a local server (localhost):

In RateLimiterMongo.js, around line 14:

 _getRateLimiterRes(rlKey, changedPoints, result) {
    const res = new RateLimiterRes();

    let doc;
    if (typeof result.value === 'undefined') {
      doc = result;
    } else {
      doc = result.value;
    }

    res.isFirstInDuration = doc.points === changedPoints; // <-- throws when doc is null
    res.consumedPoints = doc.points;
    // ...
}

The result coming back from this._collection.findOneAndUpdate on the first consume looks like this:

First consume call => no error on localhost
{
  lastErrorObject: {
    n: 1,
    updatedExisting: false,
    upserted: new ObjectId("621feaf6404e93c2a20983d7")
  },
  value: {
    _id: new ObjectId("621feaf6404e93c2a20983d7"),
    key: '::1',
    expire: 2022-05-31T22:08:54.769Z,
    points: 1
  },
  ok: 1
}

Second consume call => no error on localhost

{
  lastErrorObject: {
    n: 1,
    updatedExisting: true
  },
  value: {
    _id: new ObjectId("621fedca404e93c2a20985c4"),
    key: '::1',
    expire: 2022-05-31T22:20:58.609Z,
    points: 2
  },
  ok: 1
}

However, when testing against the MongoDB Atlas cluster I get a different result format, which triggers the error.

MongoDB Atlas cluster

First consume against the Atlas cluster always throws an error

{
  lastErrorObject: {
    n: 1,
    updatedExisting: false,
    upserted: new ObjectId("621fec0568d6587a94266266")
  },
  value: null, // value is null here, so doc.points throws on "res.isFirstInDuration = doc.points === changedPoints;"
  ok: 1,
  '$clusterTime': {
    clusterTime: new Timestamp({ t: 1646259205, i: 3 }),
    signature: {
      hash: new Binary(Buffer.from("9406443ec068e13b88f3a0f867044c3d4d18afa0", "hex"), 0),
      keyId: new Long("7020773886748786690")
    }
  },
  operationTime: new Timestamp({ t: 1646259205, i: 3 })
}

Second consume to Atlas => no error

{
  lastErrorObject: {
    n: 1,
    updatedExisting: true
  },
  value: {
    _id: new ObjectId("621fec0568d6587a94266264"),
    key: '::1',
    expire: 2022-05-31T22:13:26.591Z,
    points: 1
  },
  ok: 1,
  '$clusterTime': {
    clusterTime: new Timestamp({ t: 1646259511, i: 4 }),
    signature: {
      hash: new Binary(Buffer.from("aebd4a7b566496affeb8c2bbb7622add8f84043d", "hex"), 0),
      keyId: new Long("7020773886748786690")
    }
  },
  operationTime: new Timestamp({ t: 1646259511, i: 4 })
}

Am I doing this wrong or is there a bug?

animir commented 2 years ago

@boulepick Hey, thanks for the detailed description. I have never used it with a MongoDB Atlas cluster. Since you see this difference between localhost and the cluster, it is somehow related to cluster specifics that RateLimiterMongoDB is not able to follow.

From the info you provided, a new object is created on the first consume against the cluster (upserted: new ObjectId("621fec0568d6587a94266266")). If you could figure out why the cluster response is different, it would help improve RateLimiterMongoDB.

boulepick commented 2 years ago

OK, so I did some digging but was not able to work out why the cluster responds differently, and I don't have time to dig into it further. However, after changing every "this._collection.findOneAndUpdate" call to "this._collection.updateOne", it all worked without issue.

animir commented 2 years ago

@boulepick The MongoDB docs say updateOne doesn't return the updated document; see https://docs.mongodb.com/manual/reference/method/db.collection.updateOne
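
To illustrate the difference with the plain driver (a sketch, not this package's internal code; the collection and key only mirror the documents shown earlier in this issue):

const { MongoClient } = require('mongodb');

async function compareWriteResults(uri) {
  const client = await new MongoClient(uri).connect();
  const collection = client.db('test').collection('login_fail_ip_per_day');

  // findOneAndUpdate can hand back the document itself, which is what
  // _getRateLimiterRes needs in order to read doc.points:
  const found = await collection.findOneAndUpdate(
    { key: '::1' },
    { $inc: { points: 1 } },
    { upsert: true, returnDocument: 'after' } // ask for the post-update document
  );
  console.log(found.value); // { _id, key, points, ... } with driver 4.x

  // updateOne only reports counters, never the document, so consumedPoints
  // cannot be derived from its result without a second query:
  const updated = await collection.updateOne(
    { key: '::1' },
    { $inc: { points: 1 } },
    { upsert: true }
  );
  console.log(updated); // { acknowledged, matchedCount, modifiedCount, upsertedCount, upsertedId }

  await client.close();
}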

MattPryze commented 2 years ago

Experiencing the same issue. Must be relatively new since this wasn't an issue for me last year.

animir commented 2 years ago

@MRG95 Do you also use the latest mongoose to connect to MongoDB Atlas? If so, the issue lies somewhere between them: no document is returned on upsert, so RateLimiterMongoDB can't work with the empty data. You can find context in the issue description above.

MattPryze commented 2 years ago

@animir Yes, that's correct; I only noticed it in a new project with everything updated. I haven't tested it on a cluster, only Atlas for now. I'd be curious to see whether I get the same result as the OP or whether it's actually just related to the mongoose version (I'm on 6.2.6). Thanks for the insight! I may downgrade mongoose for the time being.

ywilkof commented 2 years ago

Experiencing the same issue when using the latest mongoose, which pulls in the latest MongoDB driver. This is a blocker for upgrading. Is there a workaround?

animir commented 2 years ago

@ywilkof Do you also use MongoDB Atlas?

ywilkof commented 2 years ago

This is happening to me when using rate-limiter-flexible (2.3.6), mongoose (6.2.7) and mongo-memory-server (8.4.1). I get the exact same error message, and my solution is the same: downgrading to mongoose 5.x solves the issue. This leads me to assume it is not an Atlas issue specifically.

animir commented 2 years ago

@ywilkof I tested rate-limiter-flexible (2.3.6) with mongoose 6.2.7 and 6.2.8 against MongoDB 5.0.6 locally. It works fine. Could you share the code where you create the mongoose connection, along with all your rate-limiter-flexible and mongoose options?

glennflanagan commented 2 years ago

FWIW I've just encountered the same issue with Atlas but the issue goes away if I up my duration to a really large number. With my duration set to 1 second it doesn't look like anything is being written to the collection, but if I up the duration to say 10000 then I can see the document being created in the collection. I wonder if it's a date/time issue?

glennflanagan commented 2 years ago

It doesn't look like a date thing; it looks like consume rejects if there isn't already a document in the collection.

glennflanagan commented 2 years ago

For me the issue occurs because getDriverVersion is erroring and returning the version { major: 0, feature: 0, patch: 0 }, which results in upsertOptions.returnDocument = 'after'; not being set, which in turn means the document inserted by _upsert is not returned.
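
In other words (a sketch against a plain driver collection, not the library's actual code; the keys mirror the documents posted above), the presence of that option changes what the first upserting call returns:

async function firstUpsert(collection) {
  // Option missing (what happens when getDriverVersion falls back to 0.0.0):
  // the driver returns the document as it was BEFORE the update,
  // which is null on the very first upsert.
  const before = await collection.findOneAndUpdate(
    { key: '::1' },
    { $inc: { points: 1 } },
    { upsert: true }
  );
  console.log(before.value); // null -> "Cannot read property 'points' of null"

  // Option present (what the library intends for modern drivers):
  const after = await collection.findOneAndUpdate(
    { key: '::1' },
    { $inc: { points: 1 } },
    { upsert: true, returnDocument: 'after' }
  );
  console.log(after.value); // { _id, key: '::1', expire, points: 1, ... }
}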

glennflanagan commented 2 years ago

So I think I figured out the issue (at least in my case). It turns out I was initialising the middleware with mongoose.connection before mongoose.connect had resolved, so the client's topology wasn't present. I changed my initialisation order and it now works. 🤷

animir commented 2 years ago

@glennflanagan I am glad you have it fixed. Could you confirm whether this wiki page https://github.com/animir/node-rate-limiter-flexible/wiki/Mongo has a poor example of preparing the connection with the mongoose package?

glennflanagan commented 2 years ago

Yeah, that example is good. I was migrating an existing project to Atlas, and I have mongoConn exported as a module from another file. It was just slightly out of order 😔

animir commented 2 years ago

@glennflanagan Thank you for the help. It is clear now that the bug is related to how mongoose version 6.x creates a connection: https://mongoosejs.com/docs/migrating_to_6.html#mongoose-connect-returns-a-promise. It returns a promise, not a connection instance as before. It looks like many of us hit this with MongoDB Atlas simply because it takes longer to create a connection. If the promise is not awaited, the connection may not be established at the moment of the first consume, which can break things under the hood.

For everyone who experienced this issue, I updated the example for mongoose version 6.x: https://github.com/animir/node-rate-limiter-flexible/wiki/Mongo. With mongoose 6.x it should be:

try {
  mongoConn = await mongoose.connect(`mongodb://127.0.0.1:27017/${dbName}`);
} catch (error) {
  handleError(error);
}
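
Once that promise has resolved, the connection can safely be handed to the limiter, for example like this (a sketch only; it follows the issue's own pattern of passing the awaited connection as storeClient, and the point values are illustrative):

const { RateLimiterMongo } = require('rate-limiter-flexible');

// mongoConn is the awaited connection from the snippet above.
const limiterSlowBruteByIP = new RateLimiterMongo({
  storeClient: mongoConn,
  tableName: 'login_fail_ip_per_day',
  points: 100,                  // illustrative values
  duration: 60 * 60 * 24 * 90,
});

module.exports = { limiterSlowBruteByIP };
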
donaldkg commented 2 years ago

Hi @animir, is there any other way to handle the promise in the library? If I handle it using the method you mentioned,

try {
  mongoConn = await mongoose.connect(`mongodb://127.0.0.1:27017/${dbName}`);
} catch (error) {
  handleError(error);
}

the TypeScript compiler will report: Top-level 'await' expressions are only allowed when the 'module' option is set to 'es2022', 'esnext', 'system', or 'nodenext', and the 'target' option is set to 'es2017' or higher.

animir commented 2 years ago

@donaldkg Hey. This is not about this package; it is about how you structure your code. You can use old-school then / catch promise handlers. Or you can make a separate getMongoConnection function and use it in your project. Or you can create a separate script, place all limiters there, and then export a getMongoLimiter function. Yes, you'll need to deal with promises either way, but that's how mongoose works now.
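
A getMongoLimiter-style wrapper could look roughly like this (a sketch, not library API; the file name, DB_NAME variable, and point values are illustrative, and it reuses the pattern of passing the awaited connection as storeClient):

// limiter.js - the connection promise is handled here,
// so callers never need top-level await.
const mongoose = require('mongoose');
const { RateLimiterMongo } = require('rate-limiter-flexible');

let limiterPromise = null;

function getMongoLimiter() {
  if (!limiterPromise) {
    limiterPromise = mongoose
      .connect(`mongodb://127.0.0.1:27017/${process.env.DB_NAME}`)
      .then((mongoConn) => new RateLimiterMongo({
        storeClient: mongoConn,
        tableName: 'login_fail_ip_per_day',
        points: 100,                  // illustrative values
        duration: 60 * 60 * 24 * 90,
      }));
  }
  return limiterPromise; // resolves to a ready-to-use limiter
}

module.exports = { getMongoLimiter };

// Usage elsewhere, e.g. in an Express route handler:
// const limiter = await getMongoLimiter();
// await limiter.consume(req.ip);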