Closed by GoogleCodeExporter 9 years ago.
I cannot reproduce the issue on Linux.
What happens if you do this?
- while (socket_map or tasks) and not self._exit.is_set():
+ while socket_map and not self._exit.is_set():
Original comment by g.rodola
on 4 Apr 2013 at 4:42
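To make the suggested change above concrete, here is a minimal, self-contained Python 3 sketch (not pyftpdlib's actual code) of the serve_forever-style loop under discussion. The names socket_map, tasks and _exit are assumptions modeled on the diff: with the "socket_map or tasks" condition, an empty socket_map plus a stale scheduled call keeps the loop spinning.

```python
import threading
import time

socket_map = {}          # empty: all clients have disconnected
tasks = [object()]       # a stale (cancelled) scheduled call left behind
_exit = threading.Event()

iterations = 0
start = time.monotonic()
# With "socket_map or tasks" the loop keeps spinning on the stale task;
# with "socket_map" alone it would exit immediately.
while (socket_map or tasks) and not _exit.is_set():
    iterations += 1
    if time.monotonic() - start > 0.01:  # bail out after 10 ms for the demo
        break

print("loop iterations in 10 ms:", iterations)  # a busy loop: no blocking call
```

Dropping tasks from the condition makes the loop exit as soon as the last socket goes away, which is why it "works perfectly" below but breaks tests that rely on pending scheduled calls.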
Thanks for your fast reply.
If I remove tasks and change the line to
while socket_map and not self._exit.is_set():
it works perfectly.
Original comment by aybars.b...@gmail.com
on 4 Apr 2013 at 4:49
Forget about that: it breaks tests.
Are you able to provide a Python script which reproduces the problem?
I tried to connect with FileZilla, log in, and disconnect, but nothing happened.
I used
https://code.google.com/p/pyftpdlib/source/browse/trunk/demo/multi_thread_ftp.py
as the server.
Original comment by g.rodola
on 4 Apr 2013 at 4:51
Ok, I managed to reproduce the issue (sorry =)).
I'm gonna look into it.
Original comment by g.rodola
on 4 Apr 2013 at 4:58
Yes, attached thserver.py, though it's the same as the multi-threaded demo.
I'll try to give more details. Here are the steps I follow: I run thserver.py,
connect with FileZilla, then download a file, then close FileZilla. In htop the
CPU usage goes high, and the thread doesn't end for 5 minutes. So if you connect
with a couple of sessions, or start downloading big files and exit, you can
reach 100% CPU usage.
Attached is an htop screenshot.
Original comment by aybars.b...@gmail.com
on 4 Apr 2013 at 5:02
Attachments:
Fixed in r1203.
Will release a new version soon, as this makes ThreadedFTPServer unusable.
Original comment by g.rodola
on 5 Apr 2013 at 11:56
Original comment by g.rodola
on 9 Apr 2013 at 2:47
Original comment by g.rodola
on 9 Apr 2013 at 4:29
Hi,
Sorry to bug you again, but I can repeat the problem: after the sockets are
disconnected, the tasks are still there, continuously looping and hogging all
the CPU.
After heapify, ioloop.sched._tasks gets emptied, but tasks are still there; I
am not really sure why.
In servers.py at line 337:
if not socket_map:
    print ">>>>>>>>>>>>>>>>>", tasks, ioloop.sched._tasks
    time.sleep(1)
    ioloop.sched.reheapify()  # get rid of cancel()led calls
    soonest_timeout = sched_poll()
if soonest_timeout:
    time.sleep(min(soonest_timeout, 1))
I can try to write a unit test if you want?
Original comment by aybars.b...@gmail.com
on 11 Apr 2013 at 1:21
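For readers following along, here is a hedged, self-contained Python 3 sketch of the mechanism being debugged: a heap-based scheduler where cancel() only flags an entry, so cancelled calls linger in the heap until a reheapify-style pass prunes them. The class and method names are illustrative, not pyftpdlib's actual API.

```python
import heapq
import time

class Call:
    """A scheduled call; cancel() only marks it, it stays in the heap."""
    def __init__(self, when):
        self.timeout = when
        self.cancelled = False
    def cancel(self):
        self.cancelled = True
    def __lt__(self, other):
        return self.timeout < other.timeout

class Scheduler:
    def __init__(self):
        self._tasks = []
    def register(self, call):
        heapq.heappush(self._tasks, call)
    def reheapify(self):
        # Drop cancelled calls, then restore the heap invariant.
        self._tasks = [c for c in self._tasks if not c.cancelled]
        heapq.heapify(self._tasks)
    def poll(self):
        # Seconds until the next live call, or None if the heap is idle.
        while self._tasks and self._tasks[0].cancelled:
            heapq.heappop(self._tasks)  # lazily discard cancelled heads
        if not self._tasks:
            return None
        return max(self._tasks[0].timeout - time.monotonic(), 0)

sched = Scheduler()
stale = Call(time.monotonic() + 60)  # e.g. an idle-timeout callback
sched.register(stale)
stale.cancel()                       # connection closed: call cancelled
sched.reheapify()
print(len(sched._tasks))             # 0: the cancelled call is gone
```

If reheapify (or an equivalent lazy purge in poll) never runs after the last client disconnects, the stale entries keep the outer serve loop awake, which matches the 100% CPU symptom reported above.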
Shit!
Yes please, or at least print this:
...
soonest_timeout = sched_poll()
print tasks
if soonest_timeout:
    time.sleep(min(soonest_timeout, 1))
Original comment by g.rodola
on 11 Apr 2013 at 1:24
Hi,
I modified the code to this,
if tasks:
    soonest_timeout = sched_poll()
    print "tasks====================="
    print tasks
    # Handle the case where socket_map is empty but some
    # cancelled scheduled calls are still around causing
    # this while loop to hog CPU resources.
    # In theory this should never happen as all the sched
    # functions are supposed to be cancel()ed on close()
    # but by using threads we can run into
    # synchronization issues such as this one.
    # https://code.google.com/p/pyftpdlib/issues/detail?id=245
    if not socket_map:
        print "!!!!!!!!!! sockets disconnected !!!!!!!!!!!!!!!!"
        time.sleep(1)
        ioloop.sched.reheapify()  # get rid of cancel()led calls
        soonest_timeout = sched_poll()
if soonest_timeout:
    time.sleep(min(soonest_timeout, 1))
Attached is the output; hope it helps. I'll try to add a test case too.
PS: I connect with FileZilla, download one or two files, and disconnect.
Original comment by aybars.b...@gmail.com
on 11 Apr 2013 at 1:35
Attachments:
Ok, I should have fixed it now in r1215.
Can you please confirm that it fixes the problem?
Original comment by g.rodola
on 17 Apr 2013 at 6:04
Original comment by g.rodola
on 19 Apr 2013 at 12:49
Sorry for the late reply. I cloned from SVN, then double-checked and
triple-checked, and it works like a charm.
Thank you so much.
Original comment by aybars.b...@gmail.com
on 19 Apr 2013 at 3:05
Original comment by g.rodola
on 22 Apr 2013 at 2:50
Original issue reported on code.google.com by
aybars.b...@gmail.com
on 4 Apr 2013 at 4:17