matthews1977 closed this issue 8 years ago
did you try with standard port 1935?
No, because I have a backup server configured to use that port which I can start on demand. I couldn't find any references specific to the default port 1935 in the code; everything I could find was changed to 1940. Is there something going on that I don't understand?
it sounds like a firewall problem or something like that.
I've actually added the stuck IP to the firewall to block further packet transmission, and it did virtually nothing: the IP remained 'established' in netstat until I shut the RTMP server down. As a sanity check I've reconfigured the server to use 1935 and will post back.
Try disabling your firewall for a while to test.
I am not disabling iptables over this. The red5 server never had this issue with my current configuration, so this is not a reasonable suggestion.
up to you...
Connections are still hanging. This instance is a group conference, and interestingly the connections from the broadcasting IP to the remaining five players remain as well, i.e. broadcaster on camera 1, players 2-5. netstat shows indefinite connections from that IP for six instances. Closing rtmp.py is the only thing that ends these connections.
I am new to Python. I scanned over the source, but being new it's difficult for me to translate it into logical steps. Ideally, I think that if no sizable packets are received by or sent from the destination, the connection-closed handler should fire. Is there such a handler in the code?
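The idle-detection idea above could be sketched as a watchdog that records the time of the last payload activity and reports when a configurable timeout has elapsed. This is a hypothetical illustration only; rtmplite has no `IdleWatchdog` class, and the names here are invented:

```python
import time

class IdleWatchdog(object):
    """Report a connection as dead if no payload bytes move for
    `timeout` seconds. Hypothetical sketch -- not part of rtmplite."""

    def __init__(self, timeout=60.0):
        self.timeout = timeout
        self.last_activity = time.time()

    def touch(self):
        # Call this whenever a sizable packet is sent or received.
        self.last_activity = time.time()

    def expired(self, now=None):
        # True once the idle window has been exceeded; the caller
        # would then invoke its connection-closed handling.
        if now is None:
            now = time.time()
        return (now - self.last_activity) > self.timeout

# Example: within the window the connection is considered alive,
# past it the caller would tear the connection down.
w = IdleWatchdog(timeout=60.0)
start = w.last_activity
assert not w.expired(now=start + 30)
assert w.expired(now=start + 61)
```

In practice TCP keepalive (set via socket options, as suggested below in the thread) achieves the same end at the kernel level without application code having to poll.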
Hi,
One thing you can try first is to add TCP keepalive after line 946 in rtmp.py: https://github.com/theintencity/rtmplite/blob/master/rtmp.py. The TCP keepalive code is shown in http://stackoverflow.com/questions/12248132/how-to-change-tcp-keepalive-timer-using-python-script; look for the answer with the most points (~20). If you are new to Python, make sure to keep the indentation correct. I think a keepalive of 1 minute should be okay, but a few seconds may be too aggressive.
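A minimal sketch of the keepalive setup being suggested, as a standalone helper. The `idle`/`interval`/`count` defaults are illustrative guesses, and the `TCP_KEEPIDLE`/`TCP_KEEPINTVL`/`TCP_KEEPCNT` options are Linux-specific, hence the `hasattr` guards:

```python
import socket

def enable_keepalive(sock, idle=60, interval=10, count=3):
    """Enable TCP keepalive on a connected or listening socket.

    After `idle` seconds of silence the kernel sends a probe every
    `interval` seconds; after `count` unanswered probes the connection
    is reset and the server sees it close. Values here are assumptions,
    not rtmplite defaults.
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # The fine-grained timers only exist on Linux.
    if hasattr(socket, 'TCP_KEEPIDLE'):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, 'TCP_KEEPINTVL'):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, 'TCP_KEEPCNT'):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s, idle=60)
assert s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) == 1
s.close()
```

With these numbers a dead peer would be detected roughly `idle + interval * count` seconds after it goes silent.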
I expect that if keepalive is enabled, the server will receive a TCP close signal when the publishing client becomes unresponsive.
If this does not work, then we can create an application-level keepalive using RTMP RPC.
Following up for other users: the following change to run(self) fixed this issue for me. Thanks for the help!
###################################################
    def run(self):
        try:
            while True:
                sock, remote = (yield multitask.accept(self.sock))  # receive client TCP
                if sock == None:
                    if _debug: print 'rtmp.Server accept(sock) returned None.'
                    break
                if _debug: print 'connection received from', remote
                sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)    # disable Nagle's algorithm
                sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)    # Issue #106
                sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 10)  # Issue #106
                sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10) # Issue #106
                sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 2)    # Issue #106
                client = Client(sock, self)
        except GeneratorExit: pass  # terminate
        except:
            if _debug: print 'rtmp.Server exception ', (sys and sys.exc_info() or None)
        if (self.sock):
            try: self.sock.close(); self.sock = None
            except: pass
        if (self.queue):
            yield self.queue.put((None, None))
            self.queue = None
##########################################
Circumstance: using VideoIO to broadcast to rtmp.py (latest versions of both). Calling the connection directly, same domain, but a custom port (i.e. domain.com:1940/myapp/stream1).
A client will hard-disconnect (close the page, etc.) and the connection hangs indefinitely until the server is restarted. During this time no other client can connect to the affected stream. This does not happen consistently across all clients, but randomly.
Working with a hung client, I advised them to restart their PC in case a Flash instance was running away in the background. The restart had no effect and the connection remained, which leads me to believe the issue lies squarely server-side.