Flameeyes / scan2pdf

GNU General Public License v3.0

Changes for embedded system support #4

Closed chuckb closed 12 years ago

chuckb commented 12 years ago
Flameeyes commented 12 years ago

I have rewritten your third commit (to support sane-backends 1.0.21) to use a proper version comparison (your code would have broken with 2.0.0 and 1.3.0), but I'm not really a fan of the other two.
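For reference, a "proper version comparison" of dotted versions can be done field by field as integers; the sketch below is hypothetical POSIX sh (version_ge is not the actual code from the rewritten commit). A lexicographic string check gets cases like 1.0.21 vs 1.0.3 backwards, since '2' sorts before '3':

```shell
# Hypothetical helper sketch: compare two dotted versions field by field as
# integers. A plain string comparison ranks "1.0.21" below "1.0.3", and
# comparing only a single component breaks on 2.0.0 vs 1.3.0.
version_ge() {   # usage: version_ge A B  -> success if A >= B
    IFS=. read -r a1 a2 a3 <<EOF
$1
EOF
    IFS=. read -r b1 b2 b3 <<EOF
$2
EOF
    [ "${a1:-0}" -gt "${b1:-0}" ] && return 0
    [ "${a1:-0}" -lt "${b1:-0}" ] && return 1
    [ "${a2:-0}" -gt "${b2:-0}" ] && return 0
    [ "${a2:-0}" -lt "${b2:-0}" ] && return 1
    [ "${a3:-0}" -ge "${b3:-0}" ]
}
```

With this, version_ge 1.0.21 1.0.3 succeeds and version_ge 1.3.0 2.0.0 fails, which a naive string comparison cannot guarantee.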

First, using free as a check sounds like a very bad idea to me, especially with regard to portability. I've done my best to make this run on any POSIX-compatible system where sane-backends can run, and I don't think free is portable. Plus, I dislike relying on the output of procps utilities, considering Debian is taking the package over for good.

As a final touch, that check would leave the script waiting for my box to have 5GB of memory free... and that's never going to happen on a normal day.

What I plan on doing next is adding a -Wu option that allows passing options directly to unpaper, so that -Wu,--no-qpixels would work without otherwise changing scan2pdf — that combined with the configuration file support I just pushed should do the trick for you about the qpixels problem.
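The -Wu idea follows gcc's -Wl convention: split everything after "-Wu," on commas and hand the pieces to unpaper verbatim. A hedged sketch of how that parsing could look (parse_wu_args is a hypothetical name, not the actual scan2pdf implementation):

```shell
# Hedged sketch of gcc-style -Wu handling: everything after "-Wu," is split
# on commas and collected for the unpaper command line. parse_wu_args is a
# hypothetical helper, not the actual scan2pdf code.
parse_wu_args() {
    out=""
    for arg in "$@"; do
        case "$arg" in
            -Wu,*)
                # strip the -Wu, prefix, then turn commas into spaces
                out="$out $(printf '%s' "${arg#-Wu,}" | tr ',' ' ')"
                ;;
        esac
    done
    printf '%s' "${out# }"   # drop the leading separator
}
```

Under this scheme, -Wu,--no-qpixels yields --no-qpixels on the unpaper command line, and -Wu,--layout,single becomes --layout single.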

As for running multiple processes at once, I need to find a way to implement a -j option akin to make's, which would let you set how many jobs run in parallel.
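As a rough illustration of the throttling a -j option implies, here is a deliberately crude POSIX sh sketch (process_pages, PROCESS_CMD and SCAN2PDF_JOBS are all hypothetical names): it runs at most max_jobs pages at a time and drains each batch, whereas make-style scheduling would refill a slot as soon as any individual job finishes.

```shell
# Deliberately crude batch-style sketch of a -j limit, in POSIX sh.
# PROCESS_CMD stands in for the real per-page unpaper invocation.
max_jobs=${SCAN2PDF_JOBS:-2}   # hypothetical knob; a -j option would set this

process_pages() {
    count=0
    for page in "$@"; do
        $PROCESS_CMD "$page" &   # run the page-processing step in background
        count=$((count + 1))
        if [ "$count" -ge "$max_jobs" ]; then
            wait                 # drain the whole batch before starting more
            count=0
        fi
    done
    wait                         # pick up any stragglers
}
```

With max_jobs=1 this degenerates to serial processing, which is exactly the behaviour a small embedded box would want.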

chuckb commented 12 years ago

Thanks for the quick response on my changes. I see now what you mean with the -Wu switch... that would be very beneficial and provide the flexibility needed to control the image processing. I could not quite figure out what you meant in your last message, though. Regarding memory/process management, I'd like the script to handle figuring that all out for me, and/or give me a way to control it if I like. The change I made did allow a 20-page scan to process on a 128MB embedded system (3-5 unpaper processes running concurrently). Without some kind of governor, I could not get scans of more than 2 pages to complete. Given that unpaper is largely CPU-bound, maybe the best thing is to only fire off one process per detected core.
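Picking a default job count from the detected cores could look like the sketch below; nproc is GNU coreutils, the /proc/cpuinfo fallback covers other Linux setups, and 1 is the safe default elsewhere. This is an illustration, not what scan2pdf actually does.

```shell
# Sketch of deriving a default -j value from the detected CPU count.
# nproc (GNU coreutils) is preferred; /proc/cpuinfo is a Linux fallback;
# otherwise assume a single core.
detect_jobs() {
    if command -v nproc >/dev/null 2>&1; then
        nproc
    elif [ -r /proc/cpuinfo ]; then
        grep -c '^processor' /proc/cpuinfo
    else
        echo 1
    fi
}
```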


Flameeyes commented 12 years ago

My idea is to implement job control so that, on your system, you can use scan2pdf -j 1 to limit processing to run serially, rather than forking a number of different processes at once. This is what make does when building on multi-core systems. And since there is a configuration file now, it would be easy to make that the default on your system without having to pass it explicitly (or, on the other hand, I could make serial processing the default, and set my boxes to run 4 or 8 processes at a time).
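For illustration only, the per-machine default could then be a one-line configuration entry; the key name JOBS is purely an assumption, since the actual format of the configuration file isn't shown in this thread.

```shell
# Hypothetical ~/.scan2pdfrc fragment -- the key name JOBS is an assumption,
# not the real configuration syntax.
JOBS=1    # embedded 128MB box: process pages serially
# JOBS=8  # big workstation: keep 8 unpaper processes busy
```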

The reason I'm not keen on automating it in the script is that the ratio is really variable; one of my boxes has 16GB of RAM, and waiting for 30% of that to be free is quite unlikely to ever work out.