mikhailnov opened this issue 3 years ago (status: Open)
Hi @mikhailnov ,
my 2 cents about the RFE - similar functionality is already available if you set cupsd to log to syslog (which means systemd-journald on Linux); then you can get job-specific logs via:
$ journalctl -u cups JID=N
As for logging into /var/log/cups/error_log, IMHO grepping the file is sufficient. Additionally, it is valuable to see other 'not exactly' job-related logs while debugging a print job, so implementing the RFE you've proposed would lead to storing logs in two places - error_log and error_log.jobid (jobid is a unique number in CUPS, tied to the queue where the job was sent).
Because of the above, IMO the RFE is not something I see as important to implement in CUPS. @michaelrsweet thoughts?
Of course I do not suggest enabling per-task logs by default. Thanks for suggesting the special fields in systemd-journald - I did not know that CUPS sets them. Anyway, none of that allows viewing a per-job log from the web interface. For real printers, viewing a per-job log is probably a rarely needed task.
@zdohnal
only a note off the top of my head - I didn't verify if it still behaves this way:
At least in the past not all log messages that belong to the processing of one specific print job are marked with [Job NNNN]. I think stderr messages from nested child processes (i.e. grandchildren, great-grandchildren, ... of cupsd) are not (or not always) marked with [Job NNNN].
@mikhailnov in enterprise environments where cupsd is continuously under high load, a fully functional per-job log would be very useful, so that an admin could easily see what actually happened when there was an issue with one specific print job. E.g. a user may ask the admin why one specific print job did not come out of the printer in an environment where thousands of jobs are continuously printed.
> @zdohnal only a note off the top of my head - I didn't verify if it still behaves this way: At least in the past not all log messages that belong to the processing of one specific print job are marked with [Job NNNN]. I think messages from nested child processes (i.e. grandchildren, great-grandchildren, ... of cupsd) are not (or not always) marked with [Job NNNN].
These would be those 'not exactly' job-related logs ;) - not marked by [Job N], but still relevant to see :).
> E.g. a user may ask the admin why one specific print job did not come out of the printer in an environment where thousands of jobs are continuously printed.
Thanks for the example! I had been thinking about the RFE assuming the admin in an enterprise usually goes via ssh+CLI commands, so journalctl/grep would be okay, but then I realized admins currently prefer a web interface for the machine (e.g. cockpit) and don't need to ssh to see the logs. That way I can see the benefit of the RFE if the following is doable:
In my comment above (https://github.com/OpenPrinting/cups-sharing/issues/3) I was wrong about "stderr messages from nested child processes not marked with [Job NNNN]". I think I remember it being an issue some possibly longer time ago, but it is no longer an issue with current CUPS.
At least for me with CUPS 2.3.3op2 on my home-office laptop, stderr messages from nested child processes are marked with [Job NNNN].
So with up-to-date CUPS 2.3.x an admin can easily see what had happened when there was an issue for one specific print job by extracting those log messages that are marked with [Job NNNN].
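That extraction step can be sketched as a small shell snippet. The sample log content and paths here are made up for illustration; in practice you would point LOG at /var/log/cups/error_log:

```shell
# Extract all messages for one job from a CUPS error_log.
# Demonstration with a fabricated sample log; point LOG at the real file.
LOG=/tmp/error_log.sample
cat > "$LOG" <<'EOF'
I [08/Nov/2021:12:30:34 +0100] [Job 33] Started backend /usr/lib/cups/backend/tofile (PID 19845)
D [08/Nov/2021:12:30:34 +0100] [Job 33] begin sub
D [08/Nov/2021:12:30:40 +0100] [Job 34] Started filter gs (PID 19900)
EOF

JOB=33
# grep -F treats the pattern as a fixed string, so the brackets need no escaping.
grep -F "[Job $JOB]" "$LOG" > "/tmp/job-$JOB.log"
cat "/tmp/job-$JOB.log"
```

With a fixed-string match there is no need to escape the brackets as in the `grep '\[Job XXX\]'` form mentioned elsewhere in this thread.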
FYI: To verify that stderr messages from nested child processes are marked with [Job NNNN], I used a self-made CUPS backend /usr/lib/cups/backend/tofile as described in the section "A backend that sends its input into a file for debugging" in https://en.opensuse.org/SDB:Using_Your_Own_Backends_to_Print_with_CUPS
I modified the backend code at its beginning like this:
#! /bin/bash
# Have debug info in /var/log/cups/error_log:
set -x
# Test stderr from sub processes:
( echo begin sub 1>&2 ; ( echo begin subsub 1>&2 ; sleep 2 ; echo end subsub 1>&2 ) & sleep 1 ; echo end sub 1>&2 ) & pstree -Aplau $$ 1>&2 ; sleep 3
# Output "device discovery" information on stdout:
...
I set up a test queue
# lpadmin -p ljet4 -v tofile:/tmp/tofile.out -P /usr/share/cups/model/OpenPrintingPPDs/ghostscript/Generic-PCL_5e_Printer.ljet4.ppd.gz -E
and printed to it with LogLevel debug in /etc/cups/cupsd.conf, which results in this output in /var/log/cups/error_log (excerpts):
I [08/Nov/2021:12:30:34 +0100] [Job 33] Started backend /usr/lib/cups/backend/tofile (PID 19845)
...
D [08/Nov/2021:12:30:34 +0100] [Job 33] + pstree -Aplau 19845
D [08/Nov/2021:12:30:34 +0100] [Job 33] + echo begin sub
D [08/Nov/2021:12:30:34 +0100] [Job 33] begin sub
D [08/Nov/2021:12:30:34 +0100] [Job 33] + sleep 1
D [08/Nov/2021:12:30:34 +0100] [Job 33] + echo begin subsub
D [08/Nov/2021:12:30:34 +0100] [Job 33] begin subsub
D [08/Nov/2021:12:30:34 +0100] [Job 33] + sleep 2
...
D [08/Nov/2021:12:30:34 +0100] [Job 33] tofile,19845,lp /usr/lib/cups/backend/tofile 33 ...
D [08/Nov/2021:12:30:34 +0100] [Job 33] |-pstree,19851 -Aplau 19845
D [08/Nov/2021:12:30:34 +0100] [Job 33] `-tofile,19850 /usr/lib/cups/backend/tofile 33 ...
D [08/Nov/2021:12:30:34 +0100] [Job 33] |-sleep,19853 1
D [08/Nov/2021:12:30:34 +0100] [Job 33] `-tofile,19852 /usr/lib/cups/backend/tofile 33 ...
D [08/Nov/2021:12:30:34 +0100] [Job 33] `-sleep,19854 2
D [08/Nov/2021:12:30:34 +0100] [Job 33] + sleep 3
...
D [08/Nov/2021:12:30:35 +0100] [Job 33] + echo end sub
D [08/Nov/2021:12:30:35 +0100] [Job 33] end sub
D [08/Nov/2021:12:30:36 +0100] [Job 33] + echo end subsub
D [08/Nov/2021:12:30:36 +0100] [Job 33] end subsub
...
D [08/Nov/2021:12:30:37 +0100] [Job 33] + exit 0
D [08/Nov/2021:12:30:37 +0100] [Job 33] PID 19845 (/usr/lib/cups/backend/tofile) exited with no errors.
I am not sure that relying on systemd-journald is better than having CUPS create the files, because it would require:
a) implementing build conditions to disable it on systems without systemd-journald (other syslog implementations will not allow querying syslog by job ID), both in the backend and in the frontend;
b) implementing querying an external program, reading its output, sanitizing it to ensure the HTML is not broken, and displaying it to the user - which is much harder to implement properly than just opening a text file in a new tab by feeding the browser its plain-text content.
Old logs can be deleted by logrotate or systemd-tmpfiles; it would be enough to ship a config for them. But in general your idea about how to implement it looks good to me.
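As a sketch, a single systemd-tmpfiles drop-in could handle the cleanup, assuming the hypothetical /var/log/cups/per-task-logs/ directory from this proposal; the path, ownership, and 30-day retention below are illustrative, not anything CUPS ships:

```
# /etc/tmpfiles.d/cups-per-job-logs.conf (hypothetical)
# "d" creates the directory if missing and, on each
# "systemd-tmpfiles --clean" run, removes entries older than the age field.
d /var/log/cups/per-task-logs 0755 root lp 30d
```

An equivalent logrotate config would also work; tmpfiles has the advantage of aging out individual per-job files rather than rotating one growing log.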
Also, if we use simple text files and serve them as plain text, logs can be queried by e.g. curl(1). But I do not know why that may be needed and why SSH cannot be used ))
@mikhailnov Creating log files per job would greatly increase the processing load of cupsd, not to mention adding more open files that would be used for that instead of for client connections (which is already a major limiting factor for the number of simultaneous clients...) Also, providing web access to the log is nice, but it is yet another security/privacy risk you have to deal with...
Anyway, for future versions of the sharing server I can see providing a way for an admin to download/view a log for a print job, which would be extracted from the log file(s) or syslog service as needed.
@jsmeix thanks for the check, Johannes :)
@mikhailnov
> a) implementing build conditions to disable it on systems without systemd-journald (other syslog implementations will not allow querying syslog by job ID), both in the backend and in the frontend
We already have an #ifdef for systemd support; IMO we can use it in a new CGI script as well.
> b) implementing querying an external program, reading its output, sanitizing it to ensure the HTML is not broken, and displaying it to the user - which is much harder to implement properly than just opening a text file in a new tab by feeding the browser its plain-text content
IMO I can look into the journalctl implementation to see how they work with the sd-journal.h API functions. What I'm currently not sure about is whether we really need to structure the output into HTML to view/download it in the web UI, or whether we can just put the logs as plain text into a temp file that is generated and shown/downloaded when a user clicks on it (I'm also not sure whether we will need the help of some other language - PHP - or whether the current CGI+template solution suffices).
A feature request: add per-task logs as text files and add the ability to read them via the web interface.
Currently the only way to get a per-task log is the following:
grep '\[Job XXX\]' /var/log/cups/error_log
What I would like: a separate file like /var/log/cups/per-task-logs/printerName-jobID.log
Why? I am using CUPS to build RPM packages: https://github.com/mikhailnov/pkg-builder-cups (I do not know how this use case will be covered after cups-backend becomes deprecated, but so far I have not found anything better than CUPS as a simple scheduler for building RPM packages). I want to have per-build logs and access to them also via a web UI. Currently I see a full build log in
/var/log/cups/error_log
. I understand that this use case does not match CUPS goals, but I am still opening this issue - maybe it will be interesting for someone else.
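For what it's worth, the proposed per-task-logs layout can be approximated today with a small script that splits error_log by job ID. This is only a sketch: the paths are examples, the log content is fabricated for the demonstration, and files are named by job ID alone, since the printer name does not appear in the log lines:

```shell
# Split a CUPS error_log into one file per job ID (illustrative paths).
LOG=/tmp/error_log.sample
OUT=/tmp/per-task-logs
mkdir -p "$OUT"

# Fabricated sample input; in practice use /var/log/cups/error_log.
cat > "$LOG" <<'EOF'
D [08/Nov/2021:12:30:34 +0100] [Job 33] begin sub
D [08/Nov/2021:12:30:40 +0100] [Job 34] Started filter
EOF

# match() sets RSTART/RLENGTH for "[Job N]"; substr() cuts out just the
# number N, which becomes the output file name.
awk -v out="$OUT" 'match($0, /\[Job [0-9]+\]/) {
  id = substr($0, RSTART + 5, RLENGTH - 6)
  print > (out "/" id ".log")
}' "$LOG"

ls "$OUT"
```

Run from cron or a systemd timer, a script like this would give the requested /var/log/cups/per-task-logs/ directory without any change to cupsd, at the cost of the logs not being live.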