Closed brandonbullen closed 1 year ago
Hi @brandonbullen 👋 thanks for reaching out!
I think the best solution for this specific use case would be to use preProcessorARN and postProcessorARN
(doc here) to create the exact input/output conditions for each function invocation, while setting parallelInvocation=false
to avoid conflicts between different power values (which run in parallel anyway).
This way, you can create/remove the messages in your queue(s) without interfering with the invocation time (or the code) of the function you're power-tuning.
I understand this requires you to write additional code, but it's the most flexible way to address all possible use cases for a tool such as Lambda Power Tuning :)
Let me know if you have any doubts or encounter issues.
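A rough sketch of what those two hooks could look like, assuming SQS-backed functions as described above. The queue URL, handler names, and record shape are placeholders, and the post-processor assumes the power-tuned function echoes the receiptHandle back in its output (the post-processor receives the function's output, not its input). The SQS client is injectable here purely so the logic can be exercised without AWS.

```python
# Hypothetical pre/post-processor Lambdas for Lambda Power Tuning.
# QUEUE_URL is a placeholder; adjust the record shape to your handler.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

def pre_processor(event, context, sqs=None):
    """Runs before each tuned invocation; the return value becomes
    the payload the power-tuned function receives."""
    if sqs is None:  # create the real boto3 client only inside Lambda
        import boto3
        sqs = boto3.client("sqs")
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=1,
        VisibilityTimeout=900,  # keep the message hidden while tuning runs
    )
    msg = resp["Messages"][0]
    # Mimic the event shape an SQS-triggered handler expects.
    return {
        "Records": [{
            "body": msg["Body"],
            "receiptHandle": msg["ReceiptHandle"],
            "eventSource": "aws:sqs",
        }]
    }

def post_processor(event, context, sqs=None):
    """Runs after each tuned invocation with the function's output;
    assumes that output still carries the receiptHandle."""
    if sqs is None:
        import boto3
        sqs = boto3.client("sqs")
    sqs.delete_message(
        QueueUrl=QUEUE_URL,
        ReceiptHandle=event["Records"][0]["receiptHandle"],
    )
    return event
```

With parallelInvocation=false, each serial invocation within a power value gets its own fresh message and cleans it up afterward, so no two invocations fight over the same receipt handle.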
Thanks for the suggestion, I completely overlooked this as an option to solve the problem! I'll give it a shot and reach out if I encounter issues.
Great!
Let me know :)
This may already exist, but I am having difficulty identifying the correct way to handle this.
The majority of our Lambda functions are backed by an SQS queue that feeds them event data to work on. These functions consume the SQS message format and eventually delete the message from the queue when the work is complete.
When mocking the SQS event format in the payload, I receive errors when the function tries to delete the message via its receiptHandle. And when fanning out to multiple invocations, the message is removed before the second attempt goes through.
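For reference, a mocked payload like this typically mirrors the event shape SQS delivers to Lambda, abbreviated here with placeholder values (the real event carries additional fields such as attributes and messageAttributes):

```python
# Abbreviated SQS event shape with placeholder values; the receiptHandle
# in a mocked event is not valid against the real queue, which is why
# delete_message calls fail.
mock_sqs_event = {
    "Records": [
        {
            "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",  # placeholder
            "receiptHandle": "AQEB-placeholder-not-a-real-handle",
            "body": "{\"orderId\": 123}",
            "eventSource": "aws:sqs",
            "eventSourceARN": "arn:aws:sqs:us-east-1:123456789012:example-queue",
        }
    ]
}
```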
Would it make sense to add functionality for pulling X messages from a source queue in a step prior to executing the Lambda functions, and then passing one message receipt to each of the parallel invocations that follow?
Adding something to the input, like: