Closed by nathanielrindlaub 2 years ago
Update on this: Microsoft bumped our request rate up to 100 calls every 5 minutes, so I increased the poll frequency to pull down and process up to 10 messages every minute (1 minute is the shortest interval EventBridge allows for scheduled rules).
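For reference, the relationship between the provider's rate limit and the per-poll batch size can be sketched as a small helper. This is an illustrative function, not part of the actual codebase; under the 100-calls-per-5-minutes limit, a 1-minute poll could actually pull up to 20 messages, so the batch of 10 leaves headroom.

```javascript
// Hypothetical helper: given a provider rate limit (calls per window, in
// minutes) and a poll interval, compute the largest batch size per poll
// that stays within the limit.
function maxBatchPerPoll(limitCalls, limitWindowMin, pollIntervalMin) {
  return Math.floor((limitCalls / limitWindowMin) * pollIntervalMin);
}

// New limit: 100 calls / 5 min, polled every 1 min → 20 messages per poll
maxBatchPerPoll(100, 5, 1);
// Old limit: 10 calls / 5 min, polled every 5 min → 10 messages per poll
maxBatchPerPoll(10, 5, 5);
```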
Done! Now that we are hosting MDv5 ourselves, we were able to trigger the animl-api-ingest Lambda whenever new messages are added to the SQS queue, which greatly increased efficiency.
Right now the Megadetector API caps us at 10 requests per 5 minutes. I've talked with Dan Morris at MS AI 4 Earth, and he's totally fine with increasing this for us; I just need to get back to him with our expected usage. So that would be an obvious first step.
However, the whole inference worker is structured around this limitation: to avoid inundating the Megadetector API with requests once we've maxed out, the inference handler.js function polls SQS for new messages every 5 minutes, pulls the first 10 off the queue, and requests inference on them. There are a bunch of ways this could be improved:
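Stripped of the AWS SDK calls, the poll-and-batch loop described above reduces to the following pattern. This is a pure sketch for illustration, not the real handler.js: `requestInference` stands in for the Megadetector API call, and the array stands in for the SQS queue.

```javascript
// One polling cycle: pull up to batchSize messages off the front of the
// queue and request inference on each (illustrative sketch only).
function pollOnce(queue, batchSize, requestInference) {
  const batch = queue.splice(0, batchSize); // removes the batch from the queue
  return batch.map(requestInference);
}
```

At 10 messages per 5-minute cycle, a backlog of N images drains in roughly N/2 minutes, which is why the limit dominates the worker's design.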
It's also worth noting that there are really two separate use cases we need to support that might have different solutions: (1) real-time inference on images coming into the system from wireless camera traps, and (2) bulk inference on images users upload from a hard drive.