praekeltfoundation / vumi

Messaging engine for the delivery of SMS, Star Menu and chat messages to diverse audiences in emerging markets and beyond.
BSD 3-Clause "New" or "Revised" License

mt_tps does not seem to affect number of messages hitting the SMPP endpoint #1076

Open · Telewa opened this issue 6 years ago

Telewa commented 6 years ago

We have mt_tps set to 1.

However, if 100 messages are queued in the outbound queue, all of them seem to be sent over SMPP in less than a minute. This causes issues because our terms of usage require that only 1 message is sent per second. How can we ensure that only 1 message is sent per second?

Could somebody please advise?

rudigiesler commented 6 years ago

Hi @Telewa

mt_tps is the correct config value for limiting the rate of outbound messages on the SMPP transport.

If it's not throttling messages correctly, then it's possible there is a bug in the throttling code. Do you see anything about throttling in the logs? Various throttling actions are logged at the INFO level.

I would suggest looking at https://github.com/praekelt/vumi/blob/develop/vumi/transports/smpp/smpp_service.py#L74-L194 as a starting point for the throttling code.
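
For context, a transport config with throttling enabled might look roughly like this, shown as the Python dict a config loader would produce. Only mt_tps is taken from this thread; the other keys are illustrative placeholders rather than authoritative field names:

```python
# Sketch of an SMPP transport config with per-second throttling.
# Only mt_tps comes from this discussion; every other key is an
# illustrative placeholder, not a confirmed vumi config field.
smpp_transport_config = {
    "transport_name": "smpp_transport",  # hypothetical transport name
    "host": "smsc.example.com",          # hypothetical SMSC endpoint
    "port": 2775,
    "system_id": "my_system_id",         # hypothetical SMPP credentials
    "password": "secret",
    "mt_tps": 1,  # limit mobile-terminated sends to 1 message per second
}
```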

Our contributing guidelines can be found at: https://github.com/praekelt/vumi/blob/develop/CONTRIBUTING.rst

Telewa commented 6 years ago

Thank you @rudigiesler,

Yes, I can see throttling messages in the logs, such as "No more throttled messages to retry." and "No longer throttling outbound messages." However, even with mt_tps set to 1, 100 messages are delivered pretty quickly without regard to this parameter. I would expect this parameter to determine how long it takes to deliver a given number of messages.

rudigiesler commented 6 years ago

@Telewa Yes, this parameter should limit the number of messages sent within each 1-second window. So as long as it's an integer greater than 0, it should limit the outbound speed to the configured messages per second. Keep in mind that this is vumi messages per second, not PDUs per second, but it sounds like in your case you're measuring vumi messages per second anyway.
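
To make that distinction concrete, here's a small illustration of how one long message can expand into several PDUs (the 160/153 character limits are the standard GSM 7-bit ones; the function itself is just for illustration):

```python
import math

def pdus_for(text):
    """Illustrative only: count the submit_sm PDUs a GSM 7-bit message needs.
    A single-part SMS holds 160 characters; concatenated parts hold 153 each."""
    if len(text) <= 160:
        return 1
    return math.ceil(len(text) / 153.0)

# A 400-character message is one vumi message but three PDUs,
# and mt_tps counts it once, not three times.
assert pdus_for("x" * 400) == 3
```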

If it's not, then it's possible that there's a bug somewhere in the throttling code that we haven't picked up on yet.

Telewa commented 6 years ago

@rudigiesler Just a clarification: does "window" in your previous message mean window_size as used here? https://github.com/praekelt/vumi/blob/e179b69296f368ce4ad6bce34f96c73df675c8b3/vumi/components/window_manager.py#L23

If it is, how can we make this parameter dynamic?

rudigiesler commented 6 years ago

No, we're not using the window manager in the SMPP transport. We're using a Twisted LoopingCall: https://twistedmatrix.com/documents/current/api/twisted.internet.task.LoopingCall.html. It's hardcoded to a window size of 1 second: https://github.com/praekelt/vumi/blob/e179b69296f368ce4ad6bce34f96c73df675c8b3/vumi/transports/smpp/smpp_service.py#L59. I don't see any reason to change this, though; I don't think we need a window size smaller than 1 second.
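
As a minimal sketch of that pattern (invented names, not the actual vumi implementation): a LoopingCall firing once per second resets a counter, and sending pauses whenever the counter reaches mt_tps within the current window:

```python
from twisted.internet import reactor, task

class TpsThrottle(object):
    """Illustrative per-second throttle built on a 1-second LoopingCall,
    mirroring the hardcoded window size in smpp_service.py."""

    def __init__(self, mt_tps):
        self.mt_tps = mt_tps
        self.sent_in_window = 0
        # Fires every second to open a fresh sending window.
        self.window_loop = task.LoopingCall(self._reset_window)

    def start(self):
        self.window_loop.start(1.0, now=True)

    def _reset_window(self):
        self.sent_in_window = 0

    def try_send(self, send_fn, message):
        """Send if the current window has capacity; otherwise report
        that the caller should pause consuming and retry later."""
        if self.sent_in_window >= self.mt_tps:
            return False  # throttled for the rest of this window
        self.sent_in_window += 1
        send_fn(message)
        return True

if __name__ == "__main__":
    throttle = TpsThrottle(mt_tps=1)
    throttle.start()
    # Attempt a send 4 times per second; only ~1 per second gets through.
    task.LoopingCall(throttle.try_send, print, "submit_sm").start(0.25)
    reactor.callLater(3, reactor.stop)
    reactor.run()
```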

Telewa commented 6 years ago

Cool. Thanks @rudigiesler for the clarification.

Can you confirm, however, that if we have 100 messages in the RabbitMQ vumi outbound queue waiting to be sent, and we have set mt_tps to 1, then it should take 100 seconds to deliver all of them?

rudigiesler commented 6 years ago

Yes, if you're using the SMPP transport, that should be the case. If it doesn't limit the sending rate, then there's possibly a bug in the SMPP transport.
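
One way to check that empirically, independent of the logs, is a small probe that records a timestamp per outbound message and reports the observed rate (a hypothetical debugging helper, not part of vumi):

```python
import time

class RateProbe(object):
    """Hypothetical helper: record each send and report the observed TPS."""

    def __init__(self):
        self.timestamps = []

    def record(self):
        self.timestamps.append(time.monotonic())

    def observed_tps(self):
        if len(self.timestamps) < 2:
            return 0.0
        elapsed = self.timestamps[-1] - self.timestamps[0]
        return (len(self.timestamps) - 1) / elapsed

# With mt_tps=1 and 100 queued messages, draining the queue should take
# about 100 seconds, so observed_tps() should come out close to 1.0.
```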

Telewa commented 6 years ago

Thanks a lot @rudigiesler. I'll look at the code then; there could be a bug somewhere.