Closed by @maganap 2 years ago
BTW, we're running Debian 10 with CUPS 2.2.10. It's a production server that hasn't been updated. I guess I could run a more up to date version in a different server if you believe that would be of any help.
@maganap The issue is one of throughput - CUPS normally waits for the printer to confirm it has printed the current job before moving on to the next, but high-end printers like this can not only spool many jobs but depend on that to keep running. You can add the `?waitjob=no` option to the end of the IPP device URI for the queue to keep sending more jobs to the printer, although then you lose reliable accounting and job control… :/
A newer version of CUPS may also allow you to use the "everywhere" model to send PDFs to the printer instead of converting to PostScript first, but adding the `waitjob` option will probably yield the largest gains here…
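As a sketch of what that looks like (queue name and printer address are placeholders, not from the original thread), the option is appended to the queue's IPP device URI:

```shell
# Hypothetical queue name "bigprinter" and example IP; adjust to your setup.
# waitjob=no tells the ipp backend not to wait for each job to finish
# printing before the scheduler moves on to the next one.
lpadmin -p bigprinter -v "ipp://192.0.2.10:631/ipp/print?waitjob=no"
```

As noted above, with this option the backend can no longer reliably report per-job completion, so accounting and job control suffer.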
Thank you very much for your reply @michaelrsweet .
> You can add the `?waitjob=no` option to the end of the IPP device URI for the queue to keep sending more jobs to the printer,
Actually, we were using IPP (directly to the printer) before using CUPS. Currently with CUPS we're using `beh:/1/0/15/socket://<ip>:9100`.

We could switch to the IPP backend to use `waitjob`, I guess, if that would help. But filling the printer's queue happens only very occasionally.
What happens now is that CUPS sends a job, the printer confirms reception (I mean, it shows in the printer panel), and then CUPS starts processing the next file (it doesn't wait for the printer to finish printing). Depending on the file, CUPS may send it before the printer has finished printing the previous job, but most of the time the printer finishes first. So most of the time the printer is just waiting for CUPS to send another job, which is the major problem.
> A newer version of CUPS may allow you to use the "everywhere" model to send PDFs to the printer
Our printers only handle IPP 1.1, so what you propose about using "everywhere" in a newer version of CUPS is not compatible. Am I correct? Kyocera, Xerox and Konica Minolta tech support explained to us that their large high-end printers mostly don't handle IPP 2.0. That's another story...
In any case, the major bottleneck is definitely PostScript processing. Any other ideas on how we could get rid of it? We're open to any options!
Thank you again in advance.
@maganap Regarding waiting for the printer to complete printing a job, see https://openprinting.github.io/cups/doc/network.html for what you can set for the 'socket' backend. I have no detailed experience with the 'beh' wrapper backend, so I don't know whether
`beh:/1/0/15/socket://<ip>:9100/?waiteof=false`
actually works. I guess it does, but I don't know.
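Untested, but following the `beh` URI layout (`beh:/dd/att/delay/original-device-URI`), setting it on a queue would look roughly like this (queue name and IP are placeholders):

```shell
# beh:/dd/att/delay/uri — dd=1 never disable the queue on error,
# att=0 retry forever, delay=15 seconds between attempts.
# waiteof=false asks the wrapped socket backend not to wait for the
# printer to confirm it has received all data before finishing the job.
lpadmin -p bigprinter -v "beh:/1/0/15/socket://192.0.2.10:9100/?waiteof=false"
```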
Regarding speeding up print job processing in general, here is an offhand, totally untested idea:
The basic idea is to process several jobs in parallel. A single CUPS queue processes its jobs one after the other. Multiple CUPS queues process their jobs in parallel.
So as a first test, you could set up several queues for the same printer, all with the same queue settings (in particular the same device URI) but with different queue names, submit many jobs evenly distributed across those queues, and check how that works.
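A minimal sketch of that test setup, assuming a placeholder printer IP and PPD path (not from the original thread):

```shell
# Create three queues that differ only in name; same device URI and
# same PPD, so CUPS can process jobs for them in parallel.
for n in 1 2 3; do
  lpadmin -p "bigprinter$n" -E \
    -v "socket://192.0.2.10:9100" \
    -P /etc/cups/ppd/bigprinter.ppd
done

# Then spread test jobs across the queues, e.g.:
lp -d bigprinter1 job1.pdf
lp -d bigprinter2 job2.pdf
lp -d bigprinter3 job3.pdf
```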
Some cheap printers get totally mad if they get several jobs at the same time, cf. "Network printer or printserver box does not work reliably" in https://en.opensuse.org/SDB:Printing_via_TCP/IP_network and see also the "optimistic backend for a network printer" part in https://en.opensuse.org/SDB:Using_Your_Own_Backends_to_Print_with_CUPS
But when a printer has properly working networking and a built-in spooler, it should "just work" to send it several jobs simultaneously. Once it works reliably to send many jobs continuously and simultaneously via several queues to your particular printer, you could do the second step:
Create a so-called "class" of those queues, see "classes" in https://openprinting.github.io/cups/doc/admin.html, and submit many jobs to that class.
The class should evenly distribute the jobs to its member queues so that in the end as many jobs as there are member queues should be processed in parallel and their printing output data should also be sent in parallel to the one printer.
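Sketching that second step with the same hypothetical queue names as before:

```shell
# Add each member queue to a class named "bigclass"; CUPS then
# distributes jobs submitted to the class among idle members.
lpadmin -p bigprinter1 -c bigclass
lpadmin -p bigprinter2 -c bigclass
lpadmin -p bigprinter3 -c bigclass

# Enable the class, accept jobs, and print to it instead of a single queue:
cupsenable bigclass
cupsaccept bigclass
lp -d bigclass file.pdf
```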
When several queues process print jobs in parallel, shorter jobs could finish before longer ones and pass them by, which means jobs could be output in a different order than they were initially submitted.
Again: This is only a totally untested idea off the top of my head.
Johannes and Mike provided tips for the original question; I cannot add more info to that.
We're working with 2 high-throughput printer models: KONICA MINOLTA 1250 and C1100. Only PDF files are being printed. Each printer has its own queue, and the filters being applied are `pdftopdf` and then `pdftops`. The latter takes a lot of time to process.

THE PROBLEM
We're experiencing important delays because the printers print much faster than it takes CUPS to process a file before it is sent to the printer.

THE QUESTION
Would you have any suggestions to make the CUPS queues run faster?
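One way to confirm exactly which filter chain a queue uses, without actually printing, is the `cupsfilter` utility; a sketch, with queue name and file paths as placeholders:

```shell
# List the filters (e.g. pdftopdf, pdftops) that CUPS would run to turn
# this PDF into the queue's printer-ready format, without executing them.
cupsfilter --list-filters \
  -p /etc/cups/ppd/bigprinter.ppd \
  -m printer/bigprinter \
  /path/to/file.pdf
```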
OUR TESTS AND SOME IDEAS
The PPD files don't have any `cupsFilter` definition, so I guess the default filters are being applied. The printers are large enough to hold many jobs in their own internal queues. We were previously communicating over IPP and sending the PDF files directly (some of them were being preprocessed by us using `gs`, `qpdf`, etc.). I understand auto rotation, n-up, fit to page, etc. are handled by the `pdftopdf` filter, which was our main concern. Since the printers can receive PDF directly, we thought about applying the `pdftopdf` filter only and making CUPS send the resulting `application/vnd.cups-pdf` to the printer, skipping the `pdftops` filter. Does that even make sense? Would it bring other compatibility issues? In any case, we're not sure how to configure CUPS, the queue, or the specific `lp` command we're running to achieve this, if at all possible.

OUR SITUATION
We're in a hurry and out of ideas. Any help would be very much appreciated. Thank you in advance.
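Regarding the idea above of skipping `pdftops`: one untested approach is to declare CUPS-filtered PDF as a printer-native format in the PPD, so the filter chain ends after `pdftopdf`. A `cupsFilter` line whose program is `-` means "no further conversion needed" for that MIME type. This is only a sketch; whether these particular printers then render the PDF correctly has to be verified on the hardware:

```
*cupsFilter: "application/vnd.cups-pdf 0 -"
```

After editing the PPD, the queue would need to be re-installed (or CUPS restarted) for the change to take effect.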