goharahmed opened 2 years ago
Possible resolution. I've yet to try this:
```go
wsc.proxy = proxy.NewProxy(Cfg)
if wsc.proxy == nil {
	logrus.Fatal("could not initialize SIP proxy")
}
```
Don't create another new Proxy for every incoming WS/WSS client request. Instead, create it once at start time and attach the same proxy instance to all requests. Then we would probably have to attach our internally generated cmd_id, match the responses back based on that cmd_id, and once matched, deliver the event/response back to the client:
```go
go func(c *cmd.Cmd) {
	for event := range c.Wait() {
		// something like:
		if cmd_id == event.cmd.ID {
			agg <- &WSCmdEvent{c, event}
		}
	}
	logrus.Debugf("done reading events for cmd %s (%s)", c.Command, c.ID)
	agg <- &WSCmdEvent{c, nil}
}(c)
```
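A minimal sketch of what that cmd_id matching could look like, assuming a hypothetical `Event` type and `Dispatcher` helper (neither exists in the code base under these names): the shared proxy's single read loop would call `Deliver()` for every incoming event, while each WS/WSS handler calls `Register()` before sending its command and then waits on the returned channel.

```go
package dispatch

import "sync"

// Event is a stand-in for whatever the proxy delivers back
// (hypothetical type, not part of the call-api code base).
type Event struct {
	CmdID   string
	Payload []byte
}

// Dispatcher fans replies arriving on a single shared proxy connection
// back out to the WS/WSS client that issued the command, keyed by cmd_id.
type Dispatcher struct {
	mu      sync.Mutex
	pending map[string]chan *Event
}

func NewDispatcher() *Dispatcher {
	return &Dispatcher{pending: make(map[string]chan *Event)}
}

// Register reserves a reply channel for cmdID before the command is
// sent, so a fast reply cannot slip through unmatched.
func (d *Dispatcher) Register(cmdID string) <-chan *Event {
	ch := make(chan *Event, 1)
	d.mu.Lock()
	d.pending[cmdID] = ch
	d.mu.Unlock()
	return ch
}

// Deliver routes an incoming event to the waiter registered for its
// cmd_id, dropping it if nobody is waiting.
func (d *Dispatcher) Deliver(ev *Event) {
	d.mu.Lock()
	ch, ok := d.pending[ev.CmdID]
	if ok {
		delete(d.pending, ev.CmdID)
	}
	d.mu.Unlock()
	if ok {
		ch <- ev
	}
}
```

Buffering the reply channel with capacity 1 lets `Deliver()` hand off the reply without blocking the shared read loop on a slow client.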
@goharahmed UDP does not actually open a connection, it simply allocates a port and only uses it when sending a command. That being said, I believe the only issue here is that the connection remains open - if we closed it every time, we would indeed open a port for each request, but that would not result in extra traffic. So my suggestion is to simply close the socket after each request. Any thoughts?
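A rough sketch of that close-after-each-request idea, assuming a hypothetical `sendMICommand()` helper (the name, the 5-second timeout, and the raw payload handling are all illustrative, not the repo's code):

```go
package mi

import (
	"net"
	"time"
)

// sendMICommand opens a fresh UDP socket, sends one MI command, reads
// the reply, and closes the socket so no listener outlives the request.
func sendMICommand(miAddr string, payload []byte) ([]byte, error) {
	conn, err := net.Dial("udp", miAddr)
	if err != nil {
		return nil, err
	}
	defer conn.Close() // socket is gone as soon as the request finishes

	if _, err := conn.Write(payload); err != nil {
		return nil, err
	}

	// Don't block forever if OpenSIPS never replies.
	if err := conn.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil {
		return nil, err
	}

	buf := make([]byte, 65535)
	n, err := conn.Read(buf)
	if err != nil {
		return nil, err
	}
	return buf[:n], nil
}
```

Each call allocates an ephemeral port, but it is released again the moment the function returns, so nothing accumulates.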
Yeah, that could work. I will check back on this one. I've done modifications in my local repo which (AFAIR) keep the UDP socket open, exchange all new requests and their replies over a single UDP socket, and were still able to distribute replies back to the correct WSS client. I will recheck my code and come back with the option that works nicely.
I am not saying no to your approach either; your approach is actually better. However, we still need to make sure that a proxy connection eventually gets closed at a certain point, correct? That's why the best approach would probably be a combination of the two. If you could create a PR with your changes, I will try to take a look. Thanks!
I totally agree and will follow your directions. I need to revive my IDE and code repo to get the best understanding. I've since implemented an AMQP interface in parallel to WSS, as well as an OpenSIPS MI_HTTP interface. So, in time, I will create separate PRs for both.
Hi, this is probably my poor understanding of the code, but it seems like the UDP event listener functions are never-ending loops. I have a test setup where, after initially sending an MI RPC request via call-api-client, call-api created a new UDP local listener which never got terminated.
Here is what I've understood to be happening:
STEP1:
A new Proxy object is created every time a new WS/WSS client sends a request! (Can we not reuse the older ones?) NewProxy() has two major tasks: 1 - establish a connection to the OpenSIPS mi_datagram socket; 2 - create an event listener which can do subscriptions and filter events.
STEP2:
NewProxy() must connect to the OpenSIPS mi_datagram socket for sending commands, but to receive a response/event it creates another UDP connection to the OpenSIPS mi_datagram socket. Could we not have used the first connection's LocalAddr() instead? (A sketch of that single-socket round-trip follows the questions below.)
The EventHandler creates another UDPConn to the OpenSIPS mi_datagram socket and sets up a waitForEvents() goroutine.
STEP3:
waitForEvents() creates a never-expiring for loop on the local side of the second mi_datagram UDPConn. Until some error happens, this loop never breaks! (A terminable variant is sketched at the end below.)
Questions: Is this a correct understanding? Can we reuse the first established socket to mi_datagram for both sending commands and receiving replies for all incoming WSS clients? Right now, every new WSS client that sends a command creates a few sockets which live way longer than needed.
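To illustrate the reuse question: a connected UDP socket already has a local address that OpenSIPS will reply to (UDP replies go to the source address of the request datagram), so a single conn can both send the command and read the reply. A minimal stand-alone sketch; the address and the JSON-RPC payload are examples only, not taken from the repo:

```go
package main

import (
	"log"
	"net"
)

func main() {
	// One connected UDP socket is enough for the whole round-trip:
	// OpenSIPS replies to the source address of the incoming datagram,
	// which is exactly conn.LocalAddr(), so no second UDPConn is needed.
	raddr, err := net.ResolveUDPAddr("udp", "127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := net.DialUDP("udp", nil, raddr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	log.Printf("replies will arrive on %s", conn.LocalAddr())

	if _, err := conn.Write([]byte(`{"jsonrpc":"2.0","id":"1","method":"version"}`)); err != nil {
		log.Fatal(err)
	}

	buf := make([]byte, 65535)
	n, err := conn.Read(buf) // reply comes back on the same socket
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("reply: %s", buf[:n])
}
```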
I'm implementing the mi_http interface here; do I really need the event subscriptions or the waitForEvents() function?
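On the never-ending loop in STEP3: one way such a listener could be made terminable is to combine a context with a short read deadline, so the blocking read wakes up periodically and can notice the cancellation. A sketch under those assumptions (this is not the repo's actual waitForEvents()):

```go
package mi

import (
	"context"
	"net"
	"time"
)

// waitForEvents reads events from conn until ctx is cancelled or a
// real socket error occurs, instead of looping forever.
func waitForEvents(ctx context.Context, conn *net.UDPConn, out chan<- []byte) {
	defer close(out)
	buf := make([]byte, 65535)
	for {
		select {
		case <-ctx.Done():
			return // proxy torn down: stop listening
		default:
		}
		// Short deadline so the blocking read returns periodically
		// and the loop can re-check the context.
		_ = conn.SetReadDeadline(time.Now().Add(time.Second))
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			if ne, ok := err.(net.Error); ok && ne.Timeout() {
				continue // just a wake-up, not a failure
			}
			return // real socket error: exit the loop
		}
		ev := make([]byte, n)
		copy(ev, buf[:n])
		out <- ev
	}
}
```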