firemax opened this issue 7 years ago
I'm having the same problem. If I change the code to generate a longer random id, I can sustain more throughput without running into duplicate batch ids; the duplicates start appearing at about 1,000 requests per minute. I changed the function to this:
```ruby
def generate_id
  charset = %w{ 2 3 4 6 7 9 A C D E F G H J K M N P Q R T V W X Y Z }
  (0...80).map { charset[rand(charset.size)] }.join
end
```
With that change I can push 300,000 requests per minute through this plugin, though it obviously buffers for a while as the data drains into the queue. Are we using this incorrectly? Should we be adopting a different enqueueing strategy for high request volumes?
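For anyone curious how much the longer id actually helps, here is a rough back-of-the-envelope birthday-bound estimate (my own sketch, assuming ids are uniform, independent draws, which `rand` does not guarantee across forked workers that share a seed):

```ruby
# Approximate probability that at least two of n random ids collide,
# when each id is `length` characters drawn from an alphabet of
# `charset_size` symbols. Uses the standard birthday approximation
# p ~= 1 - exp(-n(n-1) / 2N) where N is the id space size.
def collision_probability(n, charset_size, length)
  space = charset_size**length
  1 - Math.exp(-n.to_f * (n - 1) / (2 * space))
end
```

With the original 10-character ids over a 25-symbol alphabet, the odds of a chance collision among even a thousand ids are already tiny, which suggests the conflicts are coming from something other than pure chance, such as a shared `rand` seed across workers or the empty-id case mentioned below.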
FYI I'm tailing /etc/nginx/access.log
I still get batch id conflicts because the unique_val can occasionally be an empty string.
SQS raises an `Aws::SQS::Errors::BatchEntryIdsNotDistinct` error when sending a high volume of messages. I think `generate_id` needs to produce more distinct values. When I replace `generate_id` with something unique, it works fine:
```ruby
def generate_id
  charset = %w{ 2 3 4 6 7 9 A C D E F G H J K M N P Q R T V W X Y Z }
  (0...10).map { charset[rand(charset.size)] }.join
end
```
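In case it helps anyone else, here is a possible drop-in replacement (my own sketch, not the plugin's official fix) that sidesteps collisions by drawing from the OS entropy pool instead of `rand`:

```ruby
require 'securerandom'

# SecureRandom draws from the OS entropy pool, so ids stay distinct
# even across forked workers that would otherwise share a rand seed,
# and the result is never empty. A UUID is 36 characters, well under
# SQS's 80-character limit for batch entry ids.
def generate_id
  SecureRandom.uuid
end
```

The cryptographic randomness is slightly slower than `rand`, but id generation is nowhere near the bottleneck at these request rates.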