Closed robertmircea closed 8 years ago
As far as I know, there is nothing useful in the SMPP specs. I forgot to ask in the previous post whether the library allows some kind of throughput control. Our (real) SMSC has hard limits on the number of submit_sm PDUs per second that ESMEs are allowed to send on a single connection, yet it allows multiple simultaneous SMPP connections. If you try to send faster, or you ignore the window size, you'll get ESME_RTHROTTLED for the submit_sm PDUs above the limit. If this happens, the ESME needs to "calm down" for a while and then retry the submit_sm PDUs which were throttled.
So for load balancing and failover, it is common to see large accounts (system IDs) configured like this:
Our ESME clients usually use round-robin routing to send submit_sm messages through the available (connected) SMPP sessions for a large account. If any of the client's SMSC connections drops, the messages go through the remaining connected SMPP sessions, while the dropped connection automatically reconnects and then binds to the SMSC server.
Alright, so this Go package will provide you with:
What it does not have, for now, is the ability to set the window size, which I don't know how to do. If you can provide more information (is it a PDU, or the value of a field in a specific PDU, that sets the window size?) it shouldn't be too hard to implement.
Throttling can be easily implemented with buffered channels per connection. At this point I'm not entirely convinced it belongs in the library, but given how simple this is to do in Go, it might be worth exploring a LimitedTransmitter, which would be an extension of the current Transmitter.
Related to throttling, I believe it should be a core feature - I haven't seen any real SMSC that does not have very strict rules regarding throughput.
Also, please take into consideration that some more advanced platforms have dynamic throughput control (e.g. depending on internal load, the SMSC can slow down ESMEs by rejecting submit_sm PDUs with status=ESME_RTHROTTLED, which is an indication that the client should reduce throughput for a while).
Regarding window size: window size is not a field in any SMPP protocol PDU. It is the ability of an SMPP session to send up to n PDUs (where n = window size) to the SMSC without waiting synchronously for a submit_sm_resp to come back. As you have implemented the package, you are actually using a window size of 1 (meaning you wait for a submit_sm_resp before sending another submit_sm).
An actual example: let's say you configure the SMSC account and your SMPP session with a sending window of 5. In this scenario, you can send up to 5 submit_sm PDUs from the client before receiving a single submit_sm_resp PDU from the SMSC.
When you've sent 5 PDUs but have not received any submit_sm_resp from the SMSC, you should block sending until at least one submit_sm_resp arrives. At that moment, you can send another submit_sm to fill the window again, and then block again. If multiple submit_sm_resp PDUs come back from the SMSC quickly, you are allowed to submit more PDUs to fill the window back up to the configured maximum (5 in our example). Sending should simultaneously respect both the throttle limit (the other limit mentioned above) and the sending window size.
Should you fail to respect the window size (e.g. you send a 6th submit_sm), the SMSC will start rejecting your submit_sm PDUs by replying with a submit_sm_resp with status = ESME_RTHROTTLED and, in that case, the ESME should retry the submit after a while.
This windowed sending mechanism allows higher throughput over higher-latency links (e.g. when the ESME and SMSC connect via the Internet or a VPN tunnel over the Internet).
Answers inline
On Jan 24, 2016, at 9:28 PM, Robert Mircea notifications@github.com wrote:
> Related to throttling, I believe it should be a core feature […]
Right. As of now, this is expected to be handled by users of this library, it's not automatic.
> Regarding window size: window size is not a field in any smpp protocol pdu […]
The wire protocol implementation is asynchronous and would allow you to send as many submit_sm PDUs as you want before any submit_sm_resp arrives. This all happens amidst link enquiries and other commands. The Go API is blocking, though, hiding implementation details from users of the package. That means calling Submit() will block until the corresponding response arrives or a timeout (default 1s) occurs, but you can call Submit() from multiple goroutines concurrently and they'll all go through.
In other words we don't really have a window size today.
— Reply to this email directly or view it on GitHub https://github.com/fiorix/go-smpp/issues/2#issuecomment-174338333.
Any word here @VDVsx ?
@fiorix What opinion are you looking for from @VDVsx: whether or not to support these features, or something else that needs clarification about the mechanism for throttling and window control?
Just an opinion from someone else that's been using this package as well, nothing specific.
I've been thinking about what would be the right way to implement this and will eventually come to a conclusion. If you have opinions I'm all ears.
I guess you would like the client connection to handle not only the submit throughput but also eventual "throttled" responses, slowing down on its own as opposed to just returning an error from Submit?
As a consumer of the library, I would like to know whether my submit was successfully acked by the SMSC or not. This is useful because I might want to retry a number of times if I want my app to be robust against transient errors. I would like the library to treat ESME_RTHROTTLED the same way it treats an enquire_link that unexpectedly comes from the SMSC: automatically handled.
The library's automatic behaviour for temporary errors (like ESME_RTHROTTLED) could be an option when you instantiate the transmitter, e.g. delayWhenThrottledMs = 0 (no delay) or 500 (wait half a second before another submit_sm), etc.
I was looking at the simulator I use for integration tests and it doesn't support throttling. This means I'd have to start by adding rate limiting to the smpptest package, which is a server implementation, then move over to the client implementation.
On the client side, I'm unsure how it can be correctly implemented. For example, the API exported to consumers of the package is currently blocking, and should remain that way. However, while the server is throttling, these calls could block for longer periods and hit the max timeout.
I'm also uncertain about handling ESME_RTHROTTLED automatically by retrying, as opposed to just failing the call and temporarily slowing down the internal consumer channel that dispatches messages.
IMO both features would be nice to have. For mass sending, the current setup does not scale that well, since a lot of goroutines need to be fired in parallel (plus the same number of open connections to the SMSC). I don't see an easy way to do this with a blocking API for the clients, but maybe introduce something new that would handle all this logic in the background?
This package was designed to scale using goroutines like any other client/server in Go, and to integrate well with HTTP servers and so on. For mass sending, you can have a pool of goroutines fed by a channel. Depending on your compiler version, 100 goroutines cost roughly 400KB of RAM.
As far as rate limiting goes, what I'm considering is adding a configuration attribute to the Transmitter struct that takes an optional RateLimiter type, defined as follows:

```go
type RateLimiter interface {
	Wait(ctx context.Context) error
}
```
When present, the transmitter would call Wait internally before any Submit to pace the sending. This way you can either implement your own rate limiter or use the one from https://github.com/golang/time, for example.
I might also have to add attributes to control the number of retries in case the SMSC returns ESME_RTHROTTLED, so we can unblock the call to Submit after N attempts.
Do any of you know if SMSCs allow bursts?
Also, I know it's common practice to limit the number of client connections per account, which in this case is how many concurrent Transmitters you can create to the same server.
Is there a case where the rate limit is applied per account rather than per connection? If so, we might have to share the Context used for Wait amongst connections... hopefully not, to keep things simple.
No, the rate limit is not applied per account! Only per connection. The SMSCs I have experience with (Acision and Huawei) are very strict and do not allow bursts (or at least they were configured not to allow them).
I've uploaded the patch to the dev branch, please check it. It supports optional rate limiting but does not handle ESME_RTHROTTLED automatically. If that happens, your Submit will return an error, although rate limiting still applies.
I'm still not convinced that rate limiting belongs in the package, given how simple it is to implement. For example, if someone uses our rate limiter in an HTTP server, it might slow down HTTP requests, making it harder to abort or cancel on a per-request basis. On the other hand, it would be much easier to implement the rate limiter in the HTTP server to control how fast Submit can be called per SMPP connection.
Anyway, check it out and let's take it from there.
Haven't been using this library yet (been reading through the source code though and have a good grasp of the functionality and features). Will give my opinion based on other SMPP libraries I've used (mainly Java ones).
Windowing should IMO not be handled by the core library. As @fiorix mentioned, running multiple goroutines makes this happen: e.g. start 10 goroutines sharing the same input channel. They all call Submit() (which blocks), meaning you will have at most 10 outstanding submit_sm PDUs.
Throttling should also be handled by the client using the go-smpp library. The core library needs to surface these errors so that the user can decide how to handle them.
I guess most of it boils down to: is this an SMPP protocol library or a full SMPP server/client library? If the former, then users of the library should implement windowing/throttling etc. themselves. I'd rather see another project implementing a full server/client scenario, using this library, to solve that need.
That's valuable input, thanks @xintron. It seems to me we should just provide examples of how to use a rate limiter such as github.com/golang/time/rate together with this package rather than supporting it internally. Any word here @robertmircea? Also, have you looked at the patch?
+1
Would love to see / work on a full smpp server implementation but agree this likely belongs in a separate project.
For the server side, there's a rough implementation in the smpptest package that could be expanded. Since we support most of the wire protocol, it's a matter of gluing things together.
Most of my effort in building this package went to the client side.
I'm happy to help further the server implementation.
@fiorix I still believe that the mechanism for handling windowing or throttling is an implementation detail of the library rather than a concern the library's user must take care of. I understand the need to keep the library pure, but both features are so basic that they need to be implemented by the client almost 100% of the time when used with a real SMSC or a commercial SMPP gateway like mblox. I found this Java implementation (cloudhopper-smpp) https://github.com/fizzed/cloudhopper-smpp/blob/master/src/main/java/com/cloudhopper/smpp/impl/DefaultSmppSession.java#L493 which, in my opinion, is good, production-quality code. It takes care of all protocol-specific error handling, windowing, and timeouts, yet manages to surface all events to the user for decisions or extension. The API exposed to the user enables both sync and async communication patterns. (Yes, it's Java-style verbose code :))
Given the last "dilemma" from @xintron, I would surely go for the value provided by a full-fledged client library rather than a pure protocol implementation. Of course, the client library could be a separate package which depends on this one, as a last resort.
I will take a look at the patch and try it during the weekend.
I have successfully implemented this functionality using a circuit breaker. As I suspected, there's no reason to have such logic in this package's code.
The circuit breaker I'm using, for reference: https://github.com/sony/gobreaker/
Closing this ticket for now.
Added the option to set a max window size, and the client will refuse to send messages if you hit that. This helps with the circuit breaker.
I want to create a pool of, let's say, 10 connections provided by the SMSC and reuse them in goroutines. I see above that I can create as many connection objects as I want and use them, but I don't know how to achieve this. Can I get a sample, or some steps?
Thanks in advance.
@mwangox you can simply create as many transmitters or transceivers as you need. Not sure what's the deal here.
Many thanks!