patrickpeng2091 / lsyncd

Automatically exported from code.google.com/p/lsyncd
GNU General Public License v2.0

Very high disk usage on init #75

Closed: GoogleCodeExporter closed this issue 9 years ago

GoogleCodeExporter commented 9 years ago
What steps will reproduce the problem?
1. Start lsyncd with several sync stanzas each watching a large file tree
2. Monitor load averages / disk usage

What is the expected output? What do you see instead?

Expected / Desirable: Lsyncd does the initial scan of directories in serial or
with a limited number of threads.

Actual Result: Lsyncd starts a thread for every sync stanza, which results in
very high disk usage when there are several sync stanzas with large trees.

What version of the product are you using? On what operating system?

2.0.4 on Debian Lenny compiled against lenny-backports packages for Lua

Please provide any additional information below.

We have several large asset directories (15G+) for web sites and we would like 
to use lsyncd to push the assets to a backup server in (almost) realtime. It 
works great for this. However, when we restart lsyncd it starts a thread for 
each sync stanza to do its initial scan. This doesn't scale well: the more 
sync points there are, the heavier the disk activity when lsyncd is restarted.

Could Lsyncd internally limit the number of initial sync threads to be kinder 
to the disks? Ideally there would be some configuration options to let the 
administrator specify how the initial sync is handled, for example a maximum 
number of threads or a nice value for the initial rsync processes.

Alternatively, would it be possible to write a layer 3/4 function to control 
this? A crude implementation might write a marker file to disk for directories 
that have had an initial sync, and discard the init event if the marker exists.
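A very rough, untested sketch of what I have in mind is below. The
lsyncd-specific names used here (init, spawn, default.rsync,
event.config.source/target) are guesses from my reading of the layer 3/4
documentation and probably need adjusting; the paths are only examples.

    -- Guess at the plain-Lua part: check for a marker file in the tree root.
    local function hasMarker(dir)
        local f = io.open(dir .. "/.lsyncd-synced", "r")
        if f then
            f:close()
            return true
        end
        return false
    end

    sync{
        default.rsync,                -- guess: inherit the stock rsync config
        source = "/var/www/assets/",  -- example paths only
        target = "backup::assets/",

        -- Guess: override the startup sync so trees that already carry the
        -- marker skip the full initial rsync.
        init = function(event)
            local src = event.config.source
            if hasMarker(src) then
                -- marker present: skip the startup transfer for this tree
                return
            end
            -- no marker yet: do the normal full transfer, then drop the marker.
            -- (Crude: the marker is written when the rsync is spawned, not
            -- when the transfer has actually finished.)
            spawn(event, "/usr/bin/rsync", "-a", src, event.config.target)
            local f = io.open(src .. "/.lsyncd-synced", "w")
            if f then f:close() end
        end,
    }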

Otherwise Lsyncd is an excellent piece of software. Thank you for all your hard 
work.

Original issue reported on code.google.com by ro...@d3r.com on 3 Aug 2011 at 7:48

GoogleCodeExporter commented 9 years ago
I'm going to add a global maximum, across all watches, on how many rsync 
processes Lsyncd runs at once. But please be patient, I have a lot of 
non-Lsyncd work with deadlines to deal with first.

I recently had a similar problem myself. I'm using Lsyncd to watch 25 trees, 
and on startup the remote SSH server refuses to accept more than 10 
simultaneous new logins.
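
That limit most likely comes from the MaxStartups setting of the receiving 
sshd, which by default allows only 10 concurrent unauthenticated connections, 
i.e. the equivalent of this line in /etc/ssh/sshd_config:

    MaxStartups 10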

Original comment by axk...@gmail.com on 3 Aug 2011 at 8:00

GoogleCodeExporter commented 9 years ago
That sounds great - thanks for the fast feedback. I'll keep an eye out and test 
as soon as the patch is in.

Original comment by ro...@d3r.com on 3 Aug 2011 at 8:04

GoogleCodeExporter commented 9 years ago
Nice feature. If you could also set the ionice priority, that would be the 
best solution imho. Have a look at it if you have time.

PS: I started using lsyncd a few days ago and it just works. Great. Thanks.

Original comment by luva...@gmail.com on 15 Aug 2011 at 8:41

GoogleCodeExporter commented 9 years ago
Just added the settings.maxProcesses limit to SVN. It is going to be in Lsyncd 2.0.5.

ionice, yes, that can be useful, depending on your needs. But you can just 
start Lsyncd with ionice if you want to; there is no need for me to include it 
in Lsyncd itself.
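
Usage will look something like this (paths and numbers are only examples):

    -- /etc/lsyncd/lsyncd.conf.lua
    settings = {
        maxProcesses = 2,   -- global cap on concurrent child (rsync) processes
    }

    sync{ default.rsync, source = "/var/www/site1/assets/", target = "backup::site1/" }
    sync{ default.rsync, source = "/var/www/site2/assets/", target = "backup::site2/" }

To keep the startup transfers gentle on the disks you can start Lsyncd under 
ionice, e.g. "ionice -c3 lsyncd /etc/lsyncd/lsyncd.conf.lua"; the rsync 
processes it spawns inherit the I/O scheduling class.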

Original comment by axk...@gmail.com on 18 Aug 2011 at 1:31

GoogleCodeExporter commented 9 years ago
Excellent!! Many thanks - I'll regenerate our package from the latest source 
and see how it goes.

Thanks for all the work!!

Original comment by ro...@d3r.com on 18 Aug 2011 at 1:52

GoogleCodeExporter commented 9 years ago
I'll upload a beta version package in a few minutes.

Original comment by axk...@gmail.com on 18 Aug 2011 at 1:54

GoogleCodeExporter commented 9 years ago
I've now rebuilt our Debian package with the new 2.0.5-beta source. All seems 
to be working great so far. Many thanks for the speedy response and great piece 
of software!

Original comment by st...@d3r.com on 22 Aug 2011 at 3:24

GoogleCodeExporter commented 9 years ago
Many thanks for testing!

Original comment by axk...@gmail.com on 22 Aug 2011 at 3:27