This is a bug report for perl from rurban@cpanel.net, generated with the help of perlbug 1.40 running under perl 5.21.10.
$ touch file
$ perl -e'open(my $fh,"<","file") && print "$!\n";'
Inappropriate ioctl for device
When we push the buffer layer to PerlIO, we do an isatty() check which obviously fails on all normal files;
reset the errno to 0 to ignore the wrong global ENOTTY.
See also
http://stackoverflow.com/questions/1605195/inappropriate-ioctl-for-device
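As an illustration (not part of the original report), the symptom and the conventional way around it look roughly like this; the filenames are just placeholders:

    use strict;
    use warnings;

    # The symptom: open() succeeds, yet $! holds the ENOTTY left behind by
    # the isatty() probe made when the buffering layer is pushed.
    open(my $fh, "<", "file") or die "open failed: $!";
    print "open succeeded, but \$! says: $!\n";   # "Inappropriate ioctl for device"

    # Conventional usage: consult $! only when open() itself reports failure.
    if (open(my $fh2, "<", "no-such-file")) {
        # success: $! is meaningless here
    }
    else {
        print "open really failed: $!\n";         # e.g. "No such file or directory"
    }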
Flags: category=core severity=medium
Site configuration information for perl 5.21.10:
Configured by rurban at Tue Mar 31 14:38:17 CEST 2015.
Summary of my perl5 (revision 5 version 21 subversion 10) configuration:
Platform:
osname=linux, osvers=3.16.0-4-amd64, archname=x86_64-linux
uname='linux reini 3.16.0-4-amd64 #1 smp debian 3.16.7-ckt2-1 (2014-12-08) x86_64 gnulinux '
config_args='-de -Dusedevel -Uversiononly -Dinstallman1dir=none -Dinstallman3dir=none -Dinstallsiteman1dir=none -Dinstallsiteman3dir=none -Uuseithreads -Accflags=''-msse4.2'' -Accflags=''-march=corei7'' -Dcf_email=''rurban@cpanel.net'' -Dperladmin=''rurban@cpanel.net'''
hint=recommended, useposix=true, d_sigaction=define
useithreads=undef, usemultiplicity=undef
use64bitint=define, use64bitall=define, uselongdouble=undef
usemymalloc=n, bincompat5005=undef
Compiler:
cc='cc', ccflags ='-msse4.2 -march=corei7 -fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_FORTIFY_SOURCE=2',
optimize='-O2',
cppflags='-msse4.2 -march=corei7 -fwrapv -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include'
ccversion='', gccversion='4.9.2', gccosandvers=''
intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678, doublekind=3
d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16, longdblkind=3
ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
alignbytes=8, prototype=define
Linker and Libraries:
ld='cc', ldflags =' -fstack-protector-strong -L/usr/local/lib'
libpth=/usr/local/lib /usr/lib/gcc/x86_64-linux-gnu/4.9/include-fixed /usr/include/x86_64-linux-gnu /usr/lib /lib/x86_64-linux-gnu /lib/../lib /usr/lib/x86_64-linux-gnu /usr/lib/../lib /lib /lib64 /usr/lib64 /usr/local/lib64
libs=-lpthread -lnsl -lgdbm -ldb -ldl -lm -lcrypt -lutil -lc -lgdbm_compat
perllibs=-lpthread -lnsl -ldl -lm -lcrypt -lutil -lc
libc=libc-2.19.so, so=so, useshrplib=false, libperl=libperl.a
gnulibc_version='2.19'
Dynamic Linking:
dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E'
cccdlflags='-fPIC', lddlflags='-shared -O2 -L/usr/local/lib -fstack-protector-strong'
Locally applied patches: Devel::PatchPerl 1.30
@INC for perl 5.21.10: /usr/local/lib/perl5/site_perl/5.21.10/x86_64-linux /usr/local/lib/perl5/site_perl/5.21.10 /usr/local/lib/perl5/5.21.10/x86_64-linux /usr/local/lib/perl5/5.21.10 /usr/local/lib/perl5/site_perl/5.21.9 /usr/local/lib/perl5/site_perl/5.21.8 /usr/local/lib/perl5/site_perl/5.21.7 /usr/local/lib/perl5/site_perl/5.21.4 /usr/local/lib/perl5/site_perl/5.21.3 /usr/local/lib/perl5/site_perl/5.21.2 /usr/local/lib/perl5/site_perl/5.21.1 /usr/local/lib/perl5/site_perl/5.21.0 /usr/local/lib/perl5/site_perl/5.20.2 /usr/local/lib/perl5/site_perl/5.20.1 /usr/local/lib/perl5/site_perl/5.20.0 /usr/local/lib/perl5/site_perl/5.19.11 /usr/local/lib/perl5/site_perl/5.19.10 /usr/local/lib/perl5/site_perl/5.19.9 /usr/local/lib/perl5/site_perl/5.19.8 /usr/local/lib/perl5/site_perl/5.19.7 /usr/local/lib/perl5/site_perl/5.19.6 /usr/local/lib/perl5/site_perl/5.19.5 /usr/local/lib/perl5/site_perl/5.19.4 /usr/local/lib/perl5/site_perl/5.19.3 /usr/local/lib/perl5/site_perl/5.19.2 /usr/local/lib/perl5/site_perl/5.19.1 /usr/local/lib/perl5/site_perl/5.19.0 /usr/local/lib/perl5/site_perl/5.18.4 /usr/local/lib/perl5/site_perl/5.18.2 /usr/local/lib/perl5/site_perl/5.18.1 /usr/local/lib/perl5/site_perl/5.18.0 /usr/local/lib/perl5/site_perl/5.17.11 /usr/local/lib/perl5/site_perl/5.17.10 /usr/local/lib/perl5/site_perl/5.17.8 /usr/local/lib/perl5/site_perl/5.17.7 /usr/local/lib/perl5/site_perl/5.17.6 /usr/local/lib/perl5/site_perl/5.17.5 /usr/local/lib/perl5/site_perl/5.17.4 /usr/local/lib/perl5/site_perl/5.17.3 /usr/local/lib/perl5/site_perl/5.17.2 /usr/local/lib/perl5/site_perl/5.17.1 /usr/local/lib/perl5/site_perl/5.17.0 /usr/local/lib/perl5/site_perl/5.17 /usr/local/lib/perl5/site_perl/5.16.3 /usr/local/lib/perl5/site_perl/5.16.2 /usr/local/lib/perl5/site_perl/5.16.1 /usr/local/lib/perl5/site_perl/5.16.0 /usr/local/lib/perl5/site_perl/5.15.9 /usr/local/lib/perl5/site_perl/5.15.8 /usr/local/lib/perl5/site_perl/5.15.7 /usr/local/lib/perl5/site_perl/5.15.6 /usr/local/lib/perl5/site_perl/5.15.5 /usr/local/lib/perl5/site_perl/5.15.4 /usr/local/lib/perl5/site_perl/5.15.3 /usr/local/lib/perl5/site_perl/5.15.2 /usr/local/lib/perl5/site_perl/5.14.4 /usr/local/lib/perl5/site_perl/5.14.3 /usr/local/lib/perl5/site_perl/5.14.2 /usr/local/lib/perl5/site_perl/5.14.1 /usr/local/lib/perl5/site_perl/5.12.5 /usr/local/lib/perl5/site_perl/5.12.4 /usr/local/lib/perl5/site_perl/5.10.1 /usr/local/lib/perl5/site_perl/5.8.9 /usr/local/lib/perl5/site_perl/5.8.8 /usr/local/lib/perl5/site_perl/5.8.7 /usr/local/lib/perl5/site_perl/5.8.6 /usr/local/lib/perl5/site_perl/5.8.5 /usr/local/lib/perl5/site_perl/5.8.4 /usr/local/lib/perl5/site_perl/5.8.3 /usr/local/lib/perl5/site_perl/5.8.2 /usr/local/lib/perl5/site_perl/5.8.1 /usr/local/lib/perl5/site_perl/5.6.2 /usr/local/lib/perl5/site_perl .
Environment for perl 5.21.10: HOME=/home/rurban LANG=en_US.utf8 LANGUAGE (unset) LD_LIBRARY_PATH (unset) LOGDIR (unset) PATH=/home/rurban/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games PERL_BADLANG (unset) SHELL=/bin/bash
And the missing attachment:
On Fri Apr 03 03:29:12 2015, rurban@cpanel.net wrote:
This is a bug report for perl from rurban@cpanel.net, generated with the help of perlbug 1.40 running under perl 5.21.10.
-----------------------------------------------------------------
    $ touch file
    $ perl -e'open(my $fh,"<","file") && print "$!\n";'
    Inappropriate ioctl for device
When we push the buffer layer to PerlIO, we do an isatty() check which obviously fails on all normal files; reset the errno to 0 to ignore the wrong global ENOTTY.
Why are we zeroing $!, if $! is an "undefined" state? waste cpu? why are you printing $! if the open is successful?
-- bulk88 ~ bulk88 at hotmail.com
The RT System itself - Status changed from 'new' to 'open'
On Fri, Apr 3, 2015 at 4:01 PM, bulk88 via RT <perlbug-followup@perl.org> wrote:
    $ touch file
    $ perl -e'open(my $fh,"<","file") && print "$!\n";'
    Inappropriate ioctl for device
When we push the buffer layer to PerlIO, we do an isatty() check which obviously fails on all normal files; reset the errno to 0 to ignore the wrong global ENOTTY.
Why are we zeroing $!, if $! is an "undefined" state? waste cpu? why are you printing $! if the open is successful?
Simply setting it to the old value is the more sensible thing IMO. Lots of precedent for doing that sort of thing, we even have macros for it (dSAVE_ERRNO and friends).
Leon
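The dSAVE_ERRNO family Leon refers to lives in the C internals; purely as a sketch of the same save-and-restore idea at the Perl level (this is not the attached patch, and preserving_errno is a made-up helper), it amounts to:

    # Hypothetical Perl-level analogue of dSAVE_ERRNO/RESTORE_ERRNO: remember
    # errno before work whose internal failures don't matter, then put it back.
    sub preserving_errno {
        my ($code) = @_;
        my $saved = $! + 0;          # numeric errno on entry
        my @ret   = $code->();
        $! = $saved;                 # restore, as RESTORE_ERRNO would in C
        return wantarray ? @ret : $ret[0];
    }

    my $fh;
    preserving_errno(sub { open $fh, "<", "file" });
    # $! here is whatever it was before the call, not a leftover ENOTTY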
On Apr 3, 2015, at 4:09 PM, Leon Timmermans via RT <perlbug-followup@perl.org> wrote:
On Fri, Apr 3, 2015 at 4:01 PM, bulk88 via RT <perlbug-followup@perl.org> wrote:
    $ touch file
    $ perl -e'open(my $fh,"<","file") && print "$!\n";'
    Inappropriate ioctl for device
When we push the buffer layer to PerlIO, we do an isatty() check which obviously fails on all normal files; reset the errno to 0 to ignore the wrong global ENOTTY.
Why are we zeroing $!, if $! is an "undefined" state? waste cpu? why are you printing $! if the open is successful?
Simply setting it to the old value is the more sensible thing IMO. Lots of precedent for doing that sort of thing, we even have macros for it (dSAVE_ERRNO and friends).
Right, this would be better. Revised patch attached.
On Fri, Apr 03, 2015 at 04:08:19PM +0200, Leon Timmermans wrote:
On Fri, Apr 3, 2015 at 4:01 PM, bulk88 via RT <perlbug-followup@perl.org>
Why are we zeroing $!, if $! is an "undefined" state? waste cpu? why are you printing $! if the open is successful?
Simply setting it to the old value is the more sensible thing IMO. Lots of precedent for doing that sort of thing, we even have macros for it (dSAVE_ERRNO and friends).
I still don't understand why we're bothering to give $! a meaningful value on success.
-- No matter how many dust sheets you use, you will get paint on the carpet.
On 07/04/2015 12:52, Dave Mitchell wrote:
On Fri, Apr 03, 2015 at 04:08:19PM +0200, Leon Timmermans wrote:
On Fri, Apr 3, 2015 at 4:01 PM, bulk88 via RT <perlbug-followup@perl.org>
Why are we zeroing $!, if $! is an "undefined" state? waste cpu? why are you printing $! if the open is successful?
Simply setting it to the old value is the more sensible thing IMO. Lots of precedent for doing that sort of thing, we even have macros for it (dSAVE_ERRNO and friends).
I still don't understand why we're bothering to give $! a meaningful value on success.
because we might be in the process of reporting on a previously detected error. Clearing $! on success is actually bad action at a distance in such a case. See #81586, #116118 and #119555 for examples. This is one reason why $! should not be touched on success (hence the occasional need for dSAVE_ERRNO and friends, or local ($!, $^E) at perl level).
On Tue, Apr 07, 2015 at 06:17:29PM +0200, Christian Millour wrote:
On 07/04/2015 12:52, Dave Mitchell wrote:
On Fri, Apr 03, 2015 at 04:08:19PM +0200, Leon Timmermans wrote:
On Fri, Apr 3, 2015 at 4:01 PM, bulk88 via RT <perlbug-followup@perl.org>
Why are we zeroing $!, if $! is an "undefined" state? waste cpu? why are you printing $! if the open is successful?
Simply setting it to the old value is the more sensible thing IMO. Lots of precedent for doing that sort of thing, we even have macros for it (dSAVE_ERRNO and friends).
I still don't understand why we're bothering to give $! a meaningful value on success.
because we might be in the process of reporting on a previously detected error. Clearing $! on success is actually bad action at a distance in such a case. See #81586, #116118 and #119555 for examples. This is one reason why $! should not be touched on success (hence the occasional need for dSAVE_ERRNO and friends, or local ($!, $^E) at perl level).
But those are all for a completely different situation, where perl "unexpectedly" makes system calls, for example
    unless (open my $fh, "<", $file) {
        my $y = lc $x;   # if $x is utf8, may load utf8.pm and trash $!
        croak "open: ($y): $!\n";
    }
In those cases it is of course correct for perl to use dSAVE_ERRNO etc to preserve the current value of $!.
But in the case in this thread, we *know* we're making a system call, and so expect $! to be meaningless on success. In fact $! is explicitly documented as such in perlvar: it even includes an example with open():
This means "errno", hence $!, is meaningful only *immediately* after a failure:
    if (open my $fh, "<", $filename) {
        # Here $! is meaningless.
    }
-- "Procrastination grows to fill the available time" -- Mitchell's corollary to Parkinson's Law
On 07/04/2015 20:27, Dave Mitchell wrote:
On Tue, Apr 07, 2015 at 06:17:29PM +0200, Christian Millour wrote:
On 07/04/2015 12:52, Dave Mitchell wrote:
I still don't understand why we're bothering to give $! a meaningful value on success.
because we might be in the process of reporting on a previously detected error. Clearing $! on success is actually bad action at a distance in such a case. See #81586, #116118 and #119555 for examples. This is one reason why $! should not be touched on success (hence the occasional need for dSAVE_ERRNO and friends, or local ($!, $^E) at perl level).
But those are all for a completely different situation, where perl "unexpectedly" makes system calls, for example
    unless (open my $fh, "<", $file) {
        my $y = lc $x;   # if $x is utf8, may load utf8.pm and trash $!
        croak "open: ($y): $!\n";
    }
In those cases it is of course correct for perl to use dSAVE_ERRNO etc to preserve the current value of $!.
rather, it *would* be correct for perl to use dSAVE_ERRNO... Which it does not do in the case of require, to my chagrin (#119555).
But in the case in this thread, we *know* we're making a system call, and so expect $! to be meaningless on success.
We are making a system call that has its own way of signalling success. Hence $! is not needed for that, and should be ignored on success. That does not grant anyone the right to trash it.
In fact $! is explicitly
documented as such in perlvar: it even includes an example with open():
This means C<errno>, hence C<$!>, is meaningful only I<immediately> after a B<failure>:
    if (open my $fh, "<", $filename) {
        # Here $! is meaningless.
    }
This should be reworded to state that on success, the value of $! (or $^E) is *unrelated to the successful call*.
IIRC, the C standard states the same thing, that errno should not be touched on success. Microsoft violated this requirement in recent versions of atoi, which AFAIK prompted (among other weaknesses) its replacement by grok_atou.
On Wed, Apr 08, 2015 at 12:59:07PM +0200, Christian Millour wrote:
But in the case in this thread, we *know* we're making a system call, and so expect $! to be meaningless on success.
We are making a system call that has its own way of signalling success. Hence $! is not needed for that, and should be ignored on success. That does not grant anyone the right to trash it.
I don't think perl has ever guaranteed that perl functions documented as setting $! on failure *preserve* $! on success.
After about 1 minute's effort I came up with this:
    open my $fh, "nosuchfile";
    my $is_ascii = -T $^X;
    print "$!\n";
which prints
Inappropriate ioctl for device
I'm sure I could find many other perl system-ish functions that fail to protect $!.
IIRC, the C standard states the same thing, that errno should not be touched on success. Microsoft violated this requirement in recent versions of atoi, which AFAIK prompted (among other weaknesses) its replacement by grok_atou.
But atoi() is a library function not a system call, and isn't documented as setting errno. So for atoi() to modify errno would be surprising, in the same way that in my perl example, lc() modifying $! is surprising.
-- Art is anything that has a label (especially if the label is "untitled 1")
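A small illustration of the discipline Dave describes, applied to his -T example above (my sketch, not from the thread): copy $! into a lexical immediately after the failed call, before anything else can disturb it.

    open my $fh, "<", "nosuchfile";   # fails
    my $err = $!;                     # capture immediately, before any other call
    my $is_ascii = -T $^X;            # may clobber $! (the ENOTTY effect above)
    print "$err\n";                   # still "No such file or directory"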
On Wed, Apr 8, 2015 at 9:02 AM, Dave Mitchell <davem@iabyn.com> wrote:
On Wed, Apr 08, 2015 at 12:59:07PM +0200, Christian Millour wrote:
But in the case in this thread, we *know* we're making a system call, and so expect $! to be meaningless on success.
We are making a system call that has its own way of signalling success. Hence $! is not needed for that, and should be ignored on success. That does not grant anyone the right to trash it.
I don't think perl has ever guaranteed that perl functions documented as setting $! on failure *preserve* $! on success.
Indeed, it's always allowed for the opposite. It's extensively and clearly documented that system calls can trash $! on success.
On 08/04/2015 16:28, Eric Brine wrote:
On Wed, Apr 8, 2015 at 9:02 AM, Dave Mitchell <davem@iabyn.com> wrote:
On Wed, Apr 08, 2015 at 12:59:07PM +0200, Christian Millour wrote:
> > But in the case in this thread, we *know* we're making a system call, and
> > so expect $! to be meaningless on success.
>
> We are making a system call that has its own way of signalling success.
> Hence $! is not needed for that, and should be ignored on success. That does
> not grant anyone the right to trash it.
I don't think perl has ever guaranteed that perl functions documented as setting $! on failure *preserve* $! on success.
You are correct. That has never been guaranteed. IMO more by oversight than by design.
Indeed, it's always allowed for the opposite. It's extensively and clearly documented that system calls can trash $! on success.
No it is not. Please provide links.
Most low level libraries are much more respectful of errno than you seem willing to be. For instance https://www.gnu.org/software/libc/manual/html_node/Checking-for-Errors.html clearly states (the emphasis on *do not change* is mine):
The initial value of errno at program startup is zero. Many library functions are guaranteed to set it to certain nonzero values when they encounter certain kinds of errors. These error conditions are listed for each function. These functions *do not change* errno when they succeed; thus, the value of errno after a successful call is not necessarily zero, and you should not use errno to determine whether a call failed. The proper way to do that is documented for each function. If the call failed, you can examine errno.
There are indeed exceptions but not that many
There are a few library functions, like sqrt and atan, that return a perfectly legitimate value in case of an error, but also set errno. For these functions, if you want to check to see whether an error occurred, the recommended method is to set errno to zero before calling the function, and then check its value afterward.
To me a prime example of the evil of resetting/trashing $! on success is the behavior of require. Since it does not use $! to signal success, it has no business resetting it on success either, and should instead -- if successful -- restore it (and $^E) as they were on entry. Please take a good look at https://rt.perl.org/Public/Bug/Display.html?id=119555#txn-1302177 and tell me if I'm wrong.
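For reference, the set-errno-to-zero-then-check protocol described in the glibc excerpt has a direct Perl-level counterpart in POSIX::strtod, which signals parse failure only through $!; a minimal sketch (my example, not from the thread):

    use POSIX qw(strtod);

    sub parse_number {
        my ($str) = @_;
        local $! = 0;                         # clear errno first, per the protocol
        my ($num, $unparsed) = strtod($str);
        # $! is consulted only because strtod has no other failure channel
        return undef if $str eq '' || $unparsed != 0 || $!;
        return $num;
    }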
On Wed, Apr 8, 2015 at 11:37 AM, Christian Millour <cm.perl@abtela.com> wrote:
On 08/04/2015 16:28, Eric Brine wrote:
Indeed, it's always allowed for the opposite. It's extensively and
clearly documented that system calls can trash $! on success.
POSIX says, "The setting of errno after a successful call to a function is unspecified unless the description of that function specifies that errno shall not be modified." [1]
The Linux man page says, "a function that succeeds is allowed to change errno."[2]
[1] http://pubs.opengroup.org/onlinepubs/9699919799/functions/errno.html
[2] http://man7.org/linux/man-pages/man3/errno.3.html
On Wed, Apr 08, 2015 at 06:37:21PM +0200, Christian Millour wrote:
Most low level libraries are much more respectful of errno than you seem willing to be. For instance https://www.gnu.org/software/libc/manual/html_node/Checking-for-Errors.html clearly states (the emphasis on *do not change* is mine):
The initial value of errno at program startup is zero. Many library functions are guaranteed to set it to certain nonzero values when they encounter certain kinds of errors. These error conditions are listed for each function. These functions *do not change* errno when they succeed; thus, the value of errno after a successful call is not necessarily zero, and you should not use errno to determine whether a call failed. The proper way to do that is documented for each function. If the call failed, you can examine errno.
I think you are misreading the emphasis of that paragraph. I see it as:
"library functions *might not* change errno to zero on success; therefore you must not check for success by testing for errno being zero.".
It's all about the latter; it's making no guarantee about the former.
There are indeed exceptions but not that many
There are a few library functions, like sqrt and atan, that return a perfectly legitimate value in case of an error, but also set errno. For these functions, if you want to check to see whether an error occurred, the recommended method is to set errno to zero before calling the function, and then check its value afterward.
To me a prime example of the evil of resetting/trashing $! on success is the behavior of require. Since it does not use $! to signal success, it has no business resetting it on success either, and should instead -- if successful -- restore it (and $^E) as they were on entry. Please take a good look at https://rt.perl.org/Public/Bug/Display.html?id=119555#txn-1302177 and tell me if I'm wrong.
I think you're conflating different classes of functions:
1. Functions that, on failure, set errno/$! to specify what failure; but which signal success/failure through another means - typically by return value (e.g. most UNIX system calls);
2. Functions that have no other means of signalling success/failure, and instead guarantee not to change errno on success; so you set errno to zero, and see if it changes (e.g. the atan example you quoted above);
3. Functions that make no mention of errno/$! in their documentation, and that therefore might reasonably be expected not to mess with it.
Things like atoi or lc() fall into category 3; open() is category 1. I think it could be argued that require() falls mostly into category 3 and that therefore it should preserve $! (except that it's documented to set $! to 0 on success, so it's a bit of an outlier).
But I don't see that the (mis)behaviour of require has any bearing on the behaviour of a class 1 function like open().
-- This email is confidential, and now that you have read it you are legally obliged to shoot yourself. Or shoot a lawyer, if you prefer. If you have received this email in error, place it in its original wrapping and return for a full refund. By opening this email, you accept that Elvis lives.
On Wed, Apr 8, 2015 at 12:37 PM, Christian Millour <cm.perl@abtela.com> wrote:
Indeed, it's always allowed for the opposite. It's extensively and
clearly documented that system calls can trash $! on success.
No it is not. Please provide links.
perldoc -v '$!'
It's repeated 5 times!
On Wed, Apr 8, 2015 at 7:06 PM, Dave Mitchell <davem@iabyn.com> wrote:
I think you are misreading the emphasis of that paragraph. I see it as:
"library functions *might not* change errno to zero on success; therefore you must not check for success by testing for errno being zero.".
Actually, POSIX says «No function in this volume of IEEE Std 1003.1-2001 shall set *errno* to 0.» I read the emphasis rather differently. POSIX leaves things deliberately unspecified, but many implementations try to be nicer than strictly required.
Leon
On 08/04/2015 19:06, Dave Mitchell wrote:
On Wed, Apr 08, 2015 at 06:37:21PM +0200, Christian Millour wrote:
Most low level libraries are much more respectful of errno than you seem willing to be. For instance https://www.gnu.org/software/libc/manual/html_node/Checking-for-Errors.html clearly states (the emphasis on *do not change* is mine):
The initial value of errno at program startup is zero. Many library functions are guaranteed to set it to certain nonzero values when they encounter certain kinds of errors. These error conditions are listed for each function. These functions *do not change* errno when they succeed; thus, the value of errno after a successful call is not necessarily zero, and you should not use errno to determine whether a call failed. The proper way to do that is documented for each function. If the call failed, you can examine errno.
I think you are misreading the emphasis of that paragraph. I see it as:
"library functions *might not* change errno to zero on success; therefore you must not check for success by testing for errno being zero."
Why on earth should you read it that way? Unlike POSIX or Linux which (as quoted by Craig Berry elsewhere in this thread) explicitly make no guarantee, this text makes a very precious and precise statement.
It's all about the latter; it's making no guarantee about the former.
As I read it, it does. And I do wish POSIX and Linux made such guarantees. And I do wonder how many, and which, POSIX and Linux functions/system calls actually trash errno on success (apart from the math functions).
There are indeed exceptions but not that many
There are a few library functions, like sqrt and atan, that return a perfectly legitimate value in case of an error, but also set errno. For these functions, if you want to check to see whether an error occurred, the recommended method is to set errno to zero before calling the function, and then check its value afterward.
To me a prime example of the evil of resetting/trashing $! on success is the behavior of require. Since it does not use $! to signal success, it has no business resetting it on success either, and should instead -- if successful -- restore it (and $^E) as they were on entry. Please take a good look at https://rt.perl.org/Public/Bug/Display.html?id=119555#txn-1302177 and tell me if I'm wrong.
I think you're conflating different classes of functions:
1. Functions that, on failure, set errno/$! to specify what failure; but which signal success/failure through another means - typically by return value (e.g. most UNIX system calls);
2. Functions that have no other means of signalling success/failure, and instead guarantee not to change errno on success; so you set errno to zero, and see if it changes (e.g. the atan example you quoted above);
3. Functions that make no mention of errno/$! in their documentation, and that therefore might reasonably be expected not to mess with it.
Things like atoi or lc() fall into category 3; open() is category 1. I think it could be argued that require() falls mostly into category 3 and that therefore it should preserve $! (except that it's documented to set $! to 0 on success, so it's a bit of an outlier).
But I don't see that the (mis)behaviour of require has any bearing on the behaviour of a class 1 function like open().
The problem is that if successful function or system calls are allowed to trash errno, you have no guarantee of getting the correct errno when handling errors. Consider
eval {...; $! = 21; die $!; 1} or do { my $e = $!; handle_error($e) }
maybe in the process of unwinding the stack in the eval block some destructor is called. Which calls open successfully. Let us say that open clears/trashes errno. What do you get in $e?
Or consider the following (silly)
    sub logdie {
        open my $f, "<", $ENV{LOGFILE};   # assume success
        print $f @_;
        die @_;
    }
if open, print, or the implicit close clear/trash errno on success, and given
eval { $! = 21; logdie $!; 1} or do { $e = $!; ... }
again, what do you get in $e?
I would claim that the assumption that library functions and system calls do not alter errno on success is made all over the place in core and CPAN code, especially in error reporting mechanisms. atoi resetting errno on Windows gave rise to really baroque and hard to fathom misbehaviors (#81586, #116118). I have shown in #119555 that the misfeature of require makes autouse / autodie suspect in practice for anyone interested in programmatic error handling (which is ironic given that the canonical example provided is "use autouse 'Carp' => qw(carp croak);" ...)
I wonder how many core and CPAN tests would fail with a Perl linked against a libc that would randomly alter errno on success.
To me the bottom line is that there are good reasons to try and preserve errno as much as possible. It is a fragile resource. The wording of perldoc -v "$!" should not be construed as a license to alter $! indiscriminately.
On Thu, Apr 09, 2015 at 03:06:25AM +0200, Christian Millour wrote:
On 08/04/2015 19:06, Dave Mitchell wrote:
On Wed, Apr 08, 2015 at 06:37:21PM +0200, Christian Millour wrote:
Most low level libraries are much more respectful of errno than you seem willing to be. For instance https://www.gnu.org/software/libc/manual/html_node/Checking-for-Errors.html clearly states (the emphasis on *do not change* is mine):
The initial value of errno at program startup is zero. Many library functions are guaranteed to set it to certain nonzero values when they encounter certain kinds of errors. These error conditions are listed for each function. These functions *do not change* errno when they succeed; thus, the value of errno after a successful call is not necessarily zero, and you should not use errno to determine whether a call failed. The proper way to do that is documented for each function. If the call failed, you can examine errno.
I think you are misreading the emphasis of that paragraph. I see it as:
"library functions *might not* change errno to zero on success; therefore you must not check for success by testing for errno being zero."
Why on earth should you read it that way? Unlike POSIX or Linux which (as quoted by Craig Berry elsewhere in this thread) explicitly make no guarantee, this text makes a very precious and precise statement.
The whole purpose of that paragraph is to warn people not to assume that errno will be zero after a successful system call, and that therefore you need to check the return value of the function rather than just errno. The rest of the paragraph is just explaining why this is the case.
It is not a paragraph whose main intent is to promise that you can rely on successful system calls not changing errno. Nowhere does it say that you can or should rely on this behaviour, merely that this is the current behaviour of glibc.
The problem is that if successful function or system calls are allowed to trash errno, you have no guarantee of getting the correct errno when handling errors. Consider
eval {...; $! = 21; die $!; 1} or do { my $e = $!; handle_error($e) }
maybe in the process of unwinding the stack in the eval block some destructor is called. Which calls open successfully. Let us say that open clears/trashes errno. What do you get in $e?
But that's an absurd example. Maybe in the process of unwinding the stack a destructor does a system call that fails? (For example something that cleans up temporary files, not all of which may still be present.) What happens then? Expecting $! to remain unmolested at such a distance is just incorrect coding.
Or consider the following (silly)
    sub logdie {
        open my $f, "<", $ENV{LOGFILE};   # assume success
        print $f @_;
        die @_;
    }
if open, print, or the implicit close clear/trash errno on success, and given
eval { $! = 21; logdie $!; 1} or do { $e = $!; ... }
again, what do you get in $e?
Again, what happens if logdie trashes errno on *failure*?
I would claim that the assumption that library functions and system calls do not alter errno on success is made all over the place in core and CPAN code, especially in error reporting mechanisms. atoi resetting errno on Windows gave rise to really baroque and hard to fathom misbehaviors (#81586, #116118). I have shown in #119555 that the misfeature of require makes autouse / autodie suspect in practice for anyone interested in programmatic error handling (which is ironic given that the canonical example provided is "use autouse 'Carp' => qw(carp croak);" ...)
And once again you're conflating non-$! functions and $!-setting system calls. atoi() trashing errno is a bug. System calls trashing $! isn't.
I wonder how many core and CPAN tests would fail with a Perl linked against a libc that would randomly alter errno on success.
If they fail, then they are buggy.
-- "I do not resent criticism, even when, for the sake of emphasis, it parts for the time with reality". -- Winston Churchill, House of Commons, 22nd Jan 1941.
On 09/04/2015 12:34, Dave Mitchell wrote:
On Thu, Apr 09, 2015 at 03:06:25AM +0200, Christian Millour wrote:
The problem is that if successful function or system calls are allowed to trash errno, you have no guarantee of getting the correct errno when handling errors. Consider
eval {...; $! = 21; die $!; 1} or do { my $e = $!; handle_error($e) }
maybe in the process of unwinding the stack in the eval block some destructor is called. Which calls open successfully. Let us say that open clears/trashes errno. What do you get in $e ?
But that's an absurd example. Maybe in the process of unwinding the stack a destructor does a system call that fails? (For example something that cleans up temporary files, not all of which may still be present.) What happens then? Expecting $! to remain unmolested at such a distance is just incorrect coding.
Not a problem with careful coding. I do this constantly. Irrelevant failures need not be propagated:
    sub cleanup_temporary_files {
        local ($!, $^E);
        eval {
            ...   # remove temporaries, ignoring errors
        };
        # whatever alterations of $! and $^E were performed during the
        # attempted removals above are undone when leaving the block
    }
So yes, there is a need to be attentive to the impact of one's own code over $! and $^E. We have the tools for that both at perl level (local ($!, $^E);) and XS level (dSAVE_ERRNO and friends).
What I don't want to do is wrap each and every invocation of a system call or library function in some errno-preserving-on-success infrastructure. Which I will not have to do if those calls or functions are well behaved and do not reset/trash errno on success.
Anyway, aren't you basically stating that it is unreasonable to 'die $!'?
On Thu, Apr 09, 2015 at 05:47:13PM +0200, Christian Millour wrote:
On 09/04/2015 12:34, Dave Mitchell wrote:
On Thu, Apr 09, 2015 at 03:06:25AM +0200, Christian Millour wrote:
The problem is that if successful function or system calls are allowed to trash errno, you have no guarantee of getting the correct errno when handling errors. Consider
eval {...; $! = 21; die $!; 1} or do { my $e = $!; handle_error($e) }
maybe in the process of unwinding the stack in the eval block some destructor is called. Which calls open successfully. Let us say that open clears/trashes errno. What do you get in $e?
But that's an absurd example. Maybe in the process of unwinding the stack a destructor does a system call that fails? (For example something that cleans up temporary files, not all of which may still be present.) What happens then? Expecting $! to remain unmolested at such a distance is just incorrect coding.
Not a problem with careful coding. I do this constantly. Irrelevant failures need not be propagated:
    sub cleanup_temporary_files {
        local ($!, $^E);
        eval {
            ...   # remove temporaries, ignoring errors
        };
        # whatever alterations of $! and $^E were performed during the
        # attempted removals above are undone when leaving the block
    }
So your plan is that any code *anywhere* that *might* set $! needs to be wrapped in 'local $!'.
My plan is that you should only make use of $! directly after a failed system call (where 'directly' means that you shouldn't do any other system calls in between).
I know which I prefer.
So yes, there is a need to be attentive to the impact of one's own code over $! and $^E. We have the tools for that both at perl level (local ($!, $^E);) and XS level (dSAVE_ERRNO and friends).
What I don't want to do is wrap each and every invocation of a system call or library function in some errno-preserving-on-success infrastructure. Which I will not have to do if those calls or functions are well behaved and do not reset/trash errno on success.
Anyway, aren't you basically stating that it is unreasonable to 'die $!'?
No, it's perfectly ok to "die $!". I'm stating that after you trap an exception, you have no right to expect $! still to hold the value of the last errored system call before the exception. This:
    sub DESTROY { open my $fh, "/nosuchfile" }
    eval { my $x = bless []; kill 0, 9999999; die $! };
    print "err=$@";
    print "errno=$!\n";
gives:
    err=No such process at /home/davem/tmp/p line 5.
    errno=No such file or directory
The die has worked correctly, but $! has changed during the stack unwind.
-- The crew of the Enterprise encounter an alien life form which is surprisingly neither humanoid nor made from pure energy. -- Things That Never Happen in "Star Trek" #22
On 09/04/2015 18:46, Dave Mitchell wrote:
On Thu, Apr 09, 2015 at 05:47:13PM +0200, Christian Millour wrote:
On 09/04/2015 12:34, Dave Mitchell wrote:
On Thu, Apr 09, 2015 at 03:06:25AM +0200, Christian Millour wrote:
The problem is that if successful function or system calls are allowed to trash errno, you have no guarantee of getting the correct errno when handling errors. Consider
eval {...; $! = 21; die $!; 1} or do { my $e = $!; handle_error($e) }
maybe in the process of unwinding the stack in the eval block some destructor is called. Which calls open successfully. Let us say that open clears/trashes errno. What do you get in $e?
But that's an absurd example. Maybe in the process of unwinding the stack a destructor does a system call that fails? (For example something that cleans up temporary files, not all of which may still be present.) What happens then? Expecting $! to remain unmolested at such a distance is just incorrect coding.
Not a problem with careful coding. I do this constantly. Irrelevant failures need not be propagated:
    sub cleanup_temporary_files {
        local ($!, $^E);
        eval {
            ...   # remove temporaries, ignoring errors
        };
        # whatever alterations of $! and $^E were performed during the
        # attempted removals above are undone when leaving the block
    }
So your plan is that any code *anywhere* that *might* set $! needs to be wrapped in 'local $!'.
Oh please! I was only explaining how I would handle your objection,
My plan is that you should only make use of $! directly after a failed system call (where 'directly' means that you shouldn't do any other system calls in between).
I know which I prefer.
Fair enough. I really wish I were savvy enough to experiment with a version of libc where malloc, calloc, realloc, write, syswrite (for starters) would set errno to 0xDEAD on success, and see what kind of error reporting such would allow.
So yes, there is a need to be attentive to the impact of one's own code over $! and $^E. We have the tools for that both at perl level (local ($!, $^E);) and XS level (dSAVE_ERRNO and friends).
What I don't want to do is wrap each and every invocation of a system call or library function in some errno-preserving-on-success infrastructure. Which I will not have to do if those calls or functions are well behaved and do not reset/trash errno on success.
Anyway, aren't you basically stating that it is unreasonable to 'die $!'?
No, it's perfectly ok to "die $!". I'm stating that after you trap an exception, you have no right to expect $! still to hold the value of the last errored system call before the exception. This:
    sub DESTROY { open my $fh, "/nosuchfile" }
    eval { my $x = bless []; kill 0, 9999999; die $! };
    print "err=$@";
    print "errno=$!\n";
gives:
    err=No such process at /home/davem/tmp/p line 5.
    errno=No such file or directory
The die has worked correctly\, but $! has changed during the stack unwind.
Sure. But the problem with this is that even if you 'die $!' what you get in $@ is a string, you have lost the dual nature of $!.
My stance on this might stem from a (maybe misguided) need to deal with the numerical value of $! (or $^E), rather than its string value, for programmatic error handling, as in
    eval { some_sub_that_might_die_or_croak; 1 } or do {
        my ($evalerr, $e, $se) = ($@, $!, $^E);
        if (ref $evalerr) {
            ...   # deal with exception object
        }
        elsif (I_can_make_sense_of_this_error_string($evalerr)) {
            ...   # deal with error string
        }
        else {
            # deal with numerical errors
            if    ($e == ENOENT) { ... }
            elsif ($e == EACCES) { ... }
            # etc.
        }
    };
The problem with I_can_make_sense_of_this_error_string() above is that it is extremely fragile (error strings may differ among platforms and perl versions), and orders of magnitude more costly than numerical comparisons. The string form of $^E is even worse on Windows, as it depends on the execution locale.
Anyway. Our exchange started on my part as an honest attempt to address your original remark on this thread "I still don't understand why we're bothering to give $! a meaningful value on success", providing examples and rationales for how "meaningful" could be construed. You don't buy them, fine. I am still glad that Reini agreed that dSAVE_ERRNO and friends were the way to go.
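One way (my sketch, not something proposed in the thread) to keep the numeric side of $! across an eval boundary, in the spirit of the 'dual nature' point above, is to capture both forms at the die site and carry them in the exception:

    use POSIX qw(ENOENT);

    my $ok = eval {
        unless (open my $fh, "<", "/nosuchfile") {
            my %err = (errno => $! + 0, msg => "$!");   # capture both forms now
            die \%err;
        }
        1;
    };
    unless ($ok) {
        my $e = $@;
        if (ref $e eq 'HASH') {
            # the numeric value survives whatever syscalls ran during unwinding
            print "missing file\n" if $e->{errno} == ENOENT;
        }
    }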
On Fri, 2015-04-10 at 00:14 +0200, Christian Millour wrote:
Fair enough. I really wish I were savvy enough to experiment with a version of libc where malloc, calloc, realloc, write, syswrite (for starters) would set errno to 0xDEAD on success, and see what kind of error reporting such would allow.
At least on Linux successful calls to e.g. malloc simply don't touch errno, so anything that claims to be portable to Linux will at least already see this behaviour.
errno is *not* for checking whether an error occurred, and any attempt to do so is fraught with peril.
dennis@spirit:~$ cat foo.c
    #include <stdlib.h>
    #include <unistd.h>
    #include <errno.h>
    #include <string.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        write(666, "Not a valid fd\n", 15);
        (void)malloc(3);
        printf("Errno: %d - %s\n", errno, strerror(errno));
        return 0;
    }
dennis@spirit:~$ gcc -Wall -Werror -ofoo foo.c && ./foo
Errno: 9 - Bad file descriptor
-- Dennis Kaarsemaker www.kaarsemaker.net
TL;DR: There are good reasons why Perl should be polite wrt $! and $^E. Please help make it so.
On 10/04/2015 09:21, Dennis Kaarsemaker wrote:
On Fri, 2015-04-10 at 00:14 +0200, Christian Millour wrote:
Fair enough. I really wish I were savvy enough to experiment with a version of libc where malloc, calloc, realloc, write, syswrite (for starters) would set errno to 0xDEAD on success, and see what kind of error reporting such would allow.
At least on Linux successful calls to e.g. malloc simply don't touch errno, so anything that claims to be portable to Linux will at least already see this behaviour.
I certainly hope so. But where in the Linux malloc documentation is it spelled out that malloc does not touch errno on success? AFAIK, nowhere. And conversely, as noted by Craig earlier in this thread:
On 08/04/2015 19:05, Craig A. Berry wrote:
> POSIX says, "The setting of errno after a successful call to a
> function is unspecified unless the description of that function
> specifies that errno shall not be modified." [1]
>
> The Linux man page says, "a function that succeeds is allowed to
> change errno."[2]
>
> [1] http://pubs.opengroup.org/onlinepubs/9699919799/functions/errno.html
> [2] http://man7.org/linux/man-pages/man3/errno.3.html
The way I read [2], Linux malloc is indeed allowed to set errno to 0xDEAD on success. Not that it does, but it could.
Similarly, the documentation for malloc and write in [1] makes no claim that 'errno shall not be modified' (on success). So it looks like in a conforming implementation both would be allowed to set errno randomly on success. Not that any existing implementation likely does, but again, one could. Or maybe I am misinterpreting the quoted excerpt (I have not yet found any mention of 'errno shall not be modified' in the function documentation I have perused), any tuits welcome.
In practice however, I believe Leon has it right:
On 09/04/2015 01:58, Leon Timmermans wrote:
> Actually, POSIX says «No function in this volume of IEEE Std 1003.1-2001
> shall set /errno/ to 0.» I read the emphasis rather differently. POSIX
> leaves things deliberately unspecified, but many implementations try to
> be nicer than strictly required.
The uncertainty is galling though, and has dismal consequences.
First, most of Perl is written on the (AFAIK) undocumented and untested assumption that (at least a number of) system calls and library functions are well-behaved wrt errno (in the sense that they don't touch it on success). Uncounted hours were lost trying to figure out what was happening when atoi() started to misbehave on Windows > 6 (#116118, #81586). Now, malloc() starting to misbehave would be caught immediately in core tests (see below) but that might not be the case for other syscalls or lib functions.
Second, it leads to the impression that 'errno cannot be trusted unless checked immediately after a system call', which in turn fuels (IMHO) irresponsible or overly wary attitudes:
- Irresponsible: 'errno is so volatile that it is not worth spending any effort trying to keep it well behaved'
- Overly wary: 'errno is so volatile that any attempt to deal with its numerical value except immediately after the call is unreasonable'
The same goes of course for $^E.
But is errno really that volatile? Who modifies errno anyway?
- system calls: in practice most, if not all, only modify errno on failure, and leave it unmodified when successful.
- library functions: same as system calls, except for numerical functions such as sqrt and atan which by construction have no other way than errno to signal failure: those are special anyway.
- perl and C code through explicit manipulation of errno/$!/$^E:
  * error codes mapping (e.g. #119857)
  * the implementation of some perl operators and builtin functions, such as open (Reini's patch in this thread filters away an irrelevant syscall failure), require, etc.
  * some perl code in modules (e.g. File::Copy)
  * user code, e.g.

        sub cleanup_temporary_files {
            local ($!, $^E);
            eval {
                ...   # remove temporaries, ignoring errors
            };
            # whatever alterations of $! and $^E were performed during the
            # attempted removals above are undone when leaving the block
        }

Wouldn't it be nice if Perl offered explicitly the guarantee that its own builtins and operators (or a large subset thereof) do not touch $! and $^E on success? With careful coding, that would allow inspection of $! and $^E somewhat later than immediately following the operator, which I could use.
I understand I might be alone in wishing such. However it irks me that most of the reactions I get are 'do not do that', based on the FUD that seems to bathe $! and $^E. Yes, those are (thread-local) globals. Like any globals they need watching and indeed I would not trust their value after executing *any* piece of code (including destructors) that I have not audited. And yes, that may severely limit the distance at which $! and $^E can be used. And yes again, use only at your own risk.
But this is Perl, and however misguided my attempts might look, I should be allowed to try them. At this point not only do I receive well meaning but irrelevant advice, but Perl itself also gets in the way (the situation with require). And it does not help that the suggested alternatives (use exceptions instead of 'pimping up' $!) either are unready, incomplete, or can't cut mustard for the same reasons (#119555).
Incidentally, I find it interesting to note that it is those possibly misguided attempts which led to the understanding of the problem with atoi, possibly preventing more serious bugs. So they are well in line with Gurusamy Sarathy's advice in perlhack to 'Test unrelated features (this will flush out bizarre interactions)' and 'Use non-standard idioms (otherwise you are not testing TIMTOWTDI)' ;-)
Earlier in this thread:
On 08/04/2015 19:06, Dave Mitchell wrote:
I think you're conflating different classes of functions:
1. Functions that, on failure, set errno/$! to specify what failure; but which signal success/failure through another means - typically by return value (e.g. most UNIX system calls);
2. Functions that have no other means of signalling success/failure, and instead guarantee not to change errno on success; so you set errno to zero, and see if it changes (e.g. the atan example you quoted above);
3. Functions that make no mention of errno/$! in their documentation, and that therefore might reasonably be expected not to mess with it.
Things like atoi or lc() fall into category 3; open() is category 1.
I don't buy this 'conflation' argument. When writing code in Perl, I program against the Perl API (understood as the set of constructs, builtin functions and operators provided). And Perl's open() for instance is much much more than a bare proxy for the system call open(). The $! and $^E I deal with in my code are the ones reported by Perl operators and builtins, which might not even have been raised by a system call or library function, or might have been transmogrified before reaching me (#119857).
To me, cases 1 and 3 above should behave identically, in the sense that they should not mess with $! and $^E *on success*. Of course, when they fail I expect $! and $^E to be related to the underlying syscall or library failure. But if they succeed then any temporary failure raised in their innards is *irrelevant* and should not be propagated. Here require() is a prime offender. Not only does it trash $! on success, but it also leaves $^E in a quasi random state: questionable design on one hand, and inconsistency on the other (#119555).
BTW, here is perldoc -v '$!':
This means "errno", hence $!, is meaningful only *immediately* after a failure:
    if (open my $fh, "<", $filename) {
        # Here $! is meaningless.
        ...
    }
    else {
        # ONLY here is $! meaningful.
        ...
        # Already here $! might be meaningless.
    }
    # Since here we might have either success or failure,
    # $! is meaningless.
Here, *meaningless* means that $! may be unrelated to the outcome of the "open()" operator.
I understand *meaningful* above as *related to the outcome of the "open()" operator*. I believe this is the intent.
There is however a problem with the definition of *meaningless*. Because of the 'may be' it is not the opposite of meaningful. This wonkiness only illustrates the bizarre current status of $! and $^E.
That would allow for instance Perl open() to reset $! on success.
I have demonstrated (#119555) that such a reset is *BAD* (action at a distance). There is a reason why POSIX states "No function in this volume of IEEE Std 1003.1-2001 shall set /errno/ to 0.", as it seems to be the path of least surprise and best solidity. I admit that I am extremely puzzled as to why Linux would state that "a function that succeeds is allowed to change errno." but suspect that it applies only to a select few, and not to syscalls.
On 09/04/2015 12:34, Dave Mitchell wrote:
On Thu, Apr 09, 2015 at 03:06:25AM +0200, Christian Millour wrote:
I wonder how many core and CPAN tests would fail with a Perl linked against a libc that would randomly alter errno on success.
If they fail, then they are buggy.
Not the same but I performed a quick and dirty experiment by setting #define PerlMem_malloc(size) impolite_malloc((size)) in iperlsys.h and stuffing in util.c the definition of impolite_malloc, which simply calls malloc() and trashes errno (setting it to 0xDEAD, aka 57005) on success. Surprise ;-) A lot of tests fail! As does basic reporting!
    blead(unx) $ ./perl -e 'eval { $! = 21; die $! }; print $@'
    Unknown error 57005 at -e line 1.
    blead(unx) $
Hmmm. It would seem that Perl actually *depends* on malloc() refraining from trashing errno on success...
I am very tempted to generalize this finding to other system calls ;-) (and further experiments with a similar impolite_write() do back it up).
IMO there are two possibly legitimate objections against having Perl offer, as a matter of policy, the explicit guarantee that its operators and builtins do not alter $! and $^E on success:
1) runtime cost: would such not slow Perl to a crawl? I don't think so: if syscalls and library functions are well behaved to start with, it is likely that very few alterations (i.e. well-placed dSAVE_ERRNO and RESTORE_ERRNO) will be needed. This must be asserted though.
2) developer cost: you need to find people interested in such. Well, I for one am interested and as my understanding of the subject (and of Perl internals) matures, should be able to offer patches as needed over time. As a matter of fact I have already done so (could anyone please pretty please have a look at my proposed patch in #119555?)
I understand that it is not realistic to expect such a policy pronouncement at this time. But the simple recognition that it does constitute a worthwhile objective would be a very good start, helping to cut off the FUD surrounding $! and $^E.
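A tiny Perl-level stand-in for the experiment (my illustration; impolite_ok() is made up and merely mimics what the patched PerlMem_malloc() did):

    sub impolite_ok { $! = 0xDEAD; return 1 }   # succeeds, but trashes errno

    $! = 21;          # pretend a real error just occurred
    impolite_ok();    # unrelated call that succeeds
    die "$!\n";       # on glibc this reports "Unknown error 57005", not error 21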
I'm not sure how this thread has got so long because the basics are fairly simple. Quoting from the Linux man page previously linked to:
The <errno.h> header file defines the integer variable errno, which is set by system calls and some library functions in the event of an error to indicate what went wrong. Its value is significant only when the return value of the call indicated an error (i.e., -1 from most system calls; -1 or NULL from most library functions); a function that succeeds is allowed to change errno.
A common mistake is to do
    if (somecall() == -1) {
        printf("somecall() failed\n");
        if (errno == ...) { ... }
    }
where errno no longer needs to have the value it had upon return from somecall() (i.e., it may have been changed by the printf(3)). If the value of errno should be preserved across a library call, it must be saved:
    if (somecall() == -1) {
        int errsv = errno;
        printf("somecall() failed\n");
        if (errsv == ...) { ... }
    }
That is clear, no?
On Wed, Apr 15, 2015 at 06:57:55PM +0200, Christian Millour wrote:
In practice however, I believe Leon has it right:
On 09/04/2015 01:58, Leon Timmermans wrote:
Actually, POSIX says «No function in this volume of IEEE Std 1003.1-2001 shall set /errno/ to 0.» I read the emphasis rather differently. POSIX leaves things deliberately unspecified, but many implementations try to be nicer than strictly required.
The uncertainty is galling though, and has dismal consequences.
There is no uncertainty. If you look at errno other than when it is defined to be meaningful then the value has no meaning. That is what the text in the manpage says.
First, most of Perl is written on the (AFAIK) undocumented and untested assumption that (at least a number of) system calls and library functions are well-behaved wrt errno (in the sense that they don't touch it on success).
I don't think perl was written on that assumption, but it may only be working on that assumption (as you have shown). That is a bug. It may be a bug that is not worth fixing.
But your assertion of what well behaved means here is incorrect.
I admit that I am extremely puzzled as to why Linux would state that "a function that succeeds is allowed to change errno."
A system call may, for example, make other system calls, some of which may fail, but the initial system call may succeed. This call need not reset errno to its initial value in order to be well behaved.
Second\, it leads to the impression that 'errno cannot be trusted unless checked immediately after a system call'
That's not just an impression, it's a fact - for certain definitions of immediately and also for a *failed* call. See the manpage excerpt at the top.
which in turn fuels
(IMHO) irresponsible or overly wary attitudes - Irresponsible: 'errno is so volatile that it is not worth spending any effort trying to keep it well behaved'
This is the same incorrect definition of well behaved.
- Overly wary: 'errno is so volatile that any attempt to deal with its numerical value except immediately after the call is unreasonable'
After the *failed* call - yes. This is not overly wary, it is correct. See the manpage excerpt at the top.
But is errno really that volatile?
It could be.
Who modifies errno anyway?
- system calls: in practice most, if not all, only modify errno on failure, and leave it unmodified when successful.
That may be true, but it is not useful information. What are you going to do with it? Implementations will legitimately differ and change between versions in this respect.
BTW, here is perldoc -v '$!':
This means "errno", hence $!, is meaningful only *immediately* after a failure:
Wouldn't it be nice if Perl offered explicitly the guarantee that its own builtins and operators (or a large subset thereof) do not touch $! and $^E on success? With careful coding, that would allow inspection of $! and $^E somewhat later than immediately following the operator, which I could use.
$! is linked to errno. errno behaviour is fixed which means that $! behaviour is fixed.
TL;DR: There are good reasons why Perl should be polite wrt $! and $^E. Please help make it so.
This is a valid argument, but it is a different argument. Is this the crux of the matter? Do you really care about errno? Or do you want a way to determine in Perl code whether the previous operation failed without checking the return value? Or do you want to see the error associated with the most recent failed operation? Or something else? What does "polite" mean here?
-- Paul Johnson - paul@pjcj.net http://www.pjcj.net
On 04/16/2015 01:18 PM\, Paul Johnson via RT wrote:
I'm not sure how this thread has got so long because the basics are fairly simple. Quoting from the linux man page previously linked to:
The \<errno\.h> header file defines the integer variable errno\, which is set by system calls and some library functions in the event of an error to indicate what went wrong\. Its value is significant only when the return value of the call indicated an error \(i\.e\.\, \-1 from most system calls; \-1 or NULL from most library functions\); a function that succeeds is allowed to change errno\. A common mistake is to do if \(somecall\(\) == \-1\) \{ printf\("somecall\(\) failed\\n"\); if \(errno == \.\.\.\) \{ \.\.\. \} \} where errno no longer needs to have the value it had upon return from somecall\(\) \(i\.e\.\, it may have been changed by the printf\(3\)\)\. If the value of errno should be preserved across a library call\, it must be saved​: if \(somecall\(\) == \-1\) \{ int errsv = errno; printf\("somecall\(\) failed\\n"\); if \(errsv == \.\.\.\) \{ \.\.\. \} \}
That is clear\, no?
No. It is very surprising that a call to open a buffered file will set ENOTTY. A buffered file can never be a TTY\, so this information is clearly bogus\, even if POSIX permits nonsense. errno could at least give a hint\, even on success.
OpenBSD and then NetBSD recently went so far as to rewrite realloc() because of bogus corner cases and unhelpful errno behaviour in POSIX. See https://mail-index.netbsd.org/tech-userlevel/2015/02/05/msg008912.html It now handles EOVERFLOW and ENOMEM.
On Wed\, Apr 15\, 2015 at 11:57 AM\, Christian Millour \cm\.perl@​abtela\.com wrote:
Wouldn't it be nice if Perl offered explicitly the guarantee that its own builtins and operators (or a large subset thereof) do not touch $! and $^E on success? With careful coding\, that would allow inspection of $! and $^E somewhat later than immediately following the operator\, which I could use.
You're asking Perl to make a guarantee that C library maintainers and the C and POSIX standards committees have been steadily retreating from for a decade or two. C11 goes further along this road than C99 by saying that errno is only guaranteed to be zero at program start-up but not\, in a threaded context\, at thread start-up\, where the value of errno is indeterminate.[1] I take this to mean that if you are not in the main thread and have done nothing at all (much less made any syscalls\, successful or otherwise) you have no right to expect that errno is zero.
Thus your proposal to save and restore errno for every Perl op could\, in a threaded context\, mean saving and restoring an indeterminate value\, which puts you right back in the spot you're already in of only being able to depend on errno under the narrowly-defined and well-documented conditions that have already been discussed extensively in this thread.
[1] Section 7.5 of \<http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf>
On Fri\, Apr 17\, 2015 at 3:53 PM\, Craig A. Berry \craig\.a\.berry@​gmail\.com wrote:
Thus your proposal to save and restore errno for every Perl op could\, in a threaded context\, mean saving and restoring an indeterminate value\, which puts you right back in the spot you're already in of only being able to depend on errno under the narrowly-defined and well-documented conditions that have already been discussed extensively in this thread.
Sounds like a straw man to me. 1) $! doesn't have to start indeterminate\, and 2) you'd usually have to use C\<\< local $! = 0; >> in the scenario anyway.
    local $! = 0;
    open(...);
    print(...);
    close(...);
    die $! if $!;
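Spelled out, a runnable (hypothetical) version of that pattern might look like the sketch below. The directory name is made up, and the final check only works if every op in the batch leaves $! alone on success, which is precisely the guarantee under discussion here.

    {
        local $! = 0;                  # start the block with a clean errno
        mkdir '/tmp/example_dir';
        chmod 0755, '/tmp/example_dir';
        rmdir '/tmp/example_dir';
        die "one of the filesystem ops failed: $!" if $!;
    }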
On Fri\, Apr 17\, 2015 at 3:01 PM\, Eric Brine \ikegami@​adaelis\.com wrote:
On Fri\, Apr 17\, 2015 at 3:53 PM\, Craig A. Berry \craig\.a\.berry@​gmail\.com wrote:
Thus your proposal to save and restore errno for every Perl op could\, in a threaded context\, mean saving and restoring an indeterminate value\, which puts you right back in the spot you're already in of only being able to depend on errno under the narrowly-defined and well-documented conditions that have already been discussed extensively in this thread.
Sounds like a straw man to me. 1) $! doesn't have to start indeterminate\,
Sure\, we could assume the C standards committee doesn't know what it's doing\, take matters into our own hands\, and clear errno at thread start-up while we're initializing the interpreter context. What if it turns out there's a reason for the standard saying what it says? What if doing the opposite because we think we know better gets us painted into a corner where we've made a promise we can't keep long-term?
and 2) you'd usually have to use C\<\< local $! = 0; >> in the scenario anyway.
But this whole thread has been about the desire to avoid having to do that.
On Fri\, Apr 17\, 2015 at 4:55 PM\, Craig A. Berry \craig\.a\.berry@​gmail\.com wrote:
On Fri\, Apr 17\, 2015 at 3:01 PM\, Eric Brine \ikegami@​adaelis\.com wrote:
On Fri\, Apr 17\, 2015 at 3:53 PM\, Craig A. Berry \<craig.a.berry@gmail.com
wrote:
Thus your proposal to save and restore errno for every Perl op could\, in a threaded context\, mean saving and restoring an indeterminate value\, which puts you right back in the spot you're already in of only being able to depend on errno under the narrowly-defined and well-documented conditions that have already been discussed extensively in this thread.
Sounds like a straw man to me. 1) $! doesn't have to start indeterminate\,
Sure\, we could assume the C standards committee doesn't know what it's doing
I didn't suggest any such thing.
\, take matters into our own hands\, and clear errno at thread
start-up while we're initializing the interpreter context.
I didn't suggest any such thing.
and 2) you'd usually have to use C\<\< local $! = 0; >> in the scenario anyway.
But this whole thread has been about the desire to avoid having to do that.
Really? It's the only way preserving $! on success can be useful.
On Fri\, Apr 17\, 2015 at 4:55 PM\, Craig A. Berry \craig\.a\.berry@​gmail\.com wrote:
What if it turns out there's a reason for the standard saying what it says?
Are you really claiming that applications shouldn't change errno?
On Fri\, Apr 17\, 2015 at 4:08 PM\, Eric Brine \ikegami@​adaelis\.com wrote:
On Fri\, Apr 17\, 2015 at 4:55 PM\, Craig A. Berry \craig\.a\.berry@​gmail\.com wrote:
On Fri\, Apr 17\, 2015 at 3:01 PM\, Eric Brine \ikegami@​adaelis\.com wrote:
1) $! doesn't have to start indeterminate\,
Sure\, we could assume the C standards committee doesn't know what it's doing
I didn't suggest any such thing.
\, take matters into our own hands\, and clear errno at thread
start-up while we're initializing the interpreter context.
I didn't suggest any such thing.
In the context I outlined of errno being\, according to C11\, indeterminate at thread start-up you said\, "$! doesn't have to start indeterminate." The only way to make it determinate is to set it\, and setting it at thread start-up was the only meaning I could construe in context. If that's not what you meant\, I really have no idea what you did mean or why it's relevant to this discussion.
and 2) you'd usually have to use C\<\< local $! = 0; >> in the scenario anyway.
But this whole thread has been about the desire to avoid having to do that.
Really? It's the only way preserving $! on success can be useful.
You've really lost me now. Either there is some sort of communication breakdown going on or you are determined to pick a fight for reasons I can't fathom. As has been said many times in this thread\, including by you\, $! really isn't dependable on success.
On Fri\, Apr 17\, 2015 at 4:11 PM\, Eric Brine \ikegami@​adaelis\.com wrote:
On Fri\, Apr 17\, 2015 at 4:55 PM\, Craig A. Berry \craig\.a\.berry@​gmail\.com wrote:
What if it turns out there's a reason for the standard saying what it says?
Are you really claiming that applications shouldn't change errno?
Uh\, what? I actually never said anything about applications. The ticket is about trying to move Perl internals\, where the ops and built-ins make up a library\, in the opposite direction of the C library implementations and standards\, i.e.\, to make more guarantees about errno than anyone else is making. I think that puts us on the wrong side of the inevitable.
On Fri\, Apr 17\, 2015 at 5:57 PM\, Craig A. Berry \craig\.a\.berry@​gmail\.com wrote:
On Fri\, Apr 17\, 2015 at 4:08 PM\, Eric Brine \ikegami@​adaelis\.com wrote:
On Fri\, Apr 17\, 2015 at 4:55 PM\, Craig A. Berry \<craig.a.berry@gmail.com
wrote:
On Fri\, Apr 17\, 2015 at 3:01 PM\, Eric Brine \ikegami@​adaelis\.com wrote:
1) $! doesn't have to start indeterminate\,
Sure\, we could assume the C standards committee doesn't know what it's doing
I didn't suggest any such thing.
\, take matters into our own hands\, and clear errno at thread
start-up while we're initializing the interpreter context.
I didn't suggest any such thing.
In the context I outlined of errno being\, according to C11\, indeterminate at thread start-up you said\, "$! doesn't have to start indeterminate." The only way to make it determinate is to set it\, and setting it at thread start-up was the only meaning I could construe in context. If that's not what you meant\, I really have no idea what you did mean or why it's relevant to this discussion.
The conversation is about changing the behaviour of $!. I never mentioned errno.
and 2) you'd usually have to use C\<\< local $! = 0; >> in the scenario anyway.
But this whole thread has been about the desire to avoid having to do that.
Really? It's the only way preserving $! on success can be useful.
You've really lost me now. Either there is some sort of communication breakdown going on or you are determined to pick a fight for reasons I can't fathom.
Feel free to provide a counter example.
On 16/04/2015 13:17\, Paul Johnson wrote:
On Wed\, Apr 15\, 2015 at 06:57:55PM +0200\, Christian Millour wrote:
Wouldn't it be nice if Perl offered explicitly the guarantee that its own builtins and operators (or a large subset thereof) do not touch $! and $^E on success? With careful coding\, that would allow inspection of $! and $^E somewhat later than immediately following the operator\, which I could use.
s/the operator/a failed builtin or operator/
$! is linked to errno.
Yes\, but it is an implementation detail (a pretty major one\, I agree). Similarly\, on the platforms that have it\, $^E is linked to some form of system errno. Again a major implementation detail\, but an implementation detail nonetheless.
errno behaviour is fixed which means that $! behaviour is fixed.
Nope. Absolutely not. We can make $! (and $^E) dance to pretty much any tune we choose. Indeed already do in a number of places.
The crux of the matter is maybe that\, at this point\, there is no stated policy as to which tune is best.
TL;DR: There are good reasons why Perl should be polite with respect to $! and $^E. Please help make it so.
This is a valid argument\, but it is a different argument. Is this the crux of the matter?
Yes.
Do you really care about errno?
No.
Or do you want a
way to determine in Perl code whether the previous operation failed without checking the return value? Or do you want to see the error associated with the most recent failed operation? Or something else? What does "polite" mean here?
Let's try.
It is difficult to come up with a terminology completely devoid of emotional impact. Please do not take it as offensive.
- Polite : guaranteed not to alter $! (and $^E) on success.
- Impolite : may alter -- whether predictably or not -- $! (and/or $^E) on success.
So\, the concept of Politeness\, and the Polite and Impolite qualifiers\, apply to Perl operators and builtins. Currently for instance Perl require() is impolite (by design for $!\, and by carelessness for $^E\, see #119555).
For the sake of precision and disambiguation let us use different qualifiers for library calls (system call wrappers and library functions).
- Courteous : guaranteed not to alter errno (and syserrno) on success.
- Discourteous : may alter -- whether predictably or not -- errno (and/or syserrno) on success.
So\, the concept of Courteousness\, and the Courteous and Discourteous qualifiers\, apply to library calls.
Some of what follows is an exploration of the relationships between Politeness and Courteousness.
I'd like $! and $^E to be usable at some distance from the point of failure. In a nutshell\, I'd like the following to be true:
eval {
    ...
    die ...;
    1;
} or do {
    # $! and $^E are at that point identical to what they were
    # at the moment die was invoked in the eval block, ASSUMING
    # SUCCESSFUL *POLITE* DESTRUCTORS WHEN THE STACK GOT UNWOUND
    # (both $! and $^E may have been altered/set/reset/restored
    # countless times during unwinding, but this is irrelevant).
};
[ Why ? Because\, for programmatic error handling\, it is much safer and cheaper to deal with the numerical value of the dualvars $! and $^E than with their string counterparts\, or with whatever remains in string form in $@. And yes\, structured exceptions are an alternative. ]
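As a small illustration of that point (a sketch only; the path is hypothetical, and ENOENT is the standard constant exported by POSIX):

    use POSIX qw(ENOENT);

    open(my $fh, '<', '/no/such/file') or do {
        if ($! == ENOENT) {                  # robust: compares the numeric side of the dualvar
            # handle "file not found"
        }
        elsif ("$!" =~ /No such file/) {     # fragile: depends on locale and exact wording
            # same intent, but easily broken
        }
    };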
Before anyone starts\, the problem of *unsuccessful* destructors is interesting but explicitly *off-topic* here.
And before you turn away in disgust yelling about the unreasonableness of the above\, please realize that we are already there. Well\, almost. The above is true with a bare die. But not with an autoloaded croak\, nor with autodie (#119555). Guess why ? Because require() is impolite.
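A minimal sketch of that impoliteness (File::Spec is just a stand-in for any module not yet loaded; exactly what $! ends up holding after a successful require is platform- and @INC-dependent):

    open(my $fh, '<', '/no/such/file') or do {
        my $saved = 0 + $!;        # numeric errno from the failed open (ENOENT)
        require File::Spec;        # loads fine, yet probing @INC may alter $!
        warn "open set errno $saved, \$! is now ", 0 + $!, "\n";
    };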
There is no denying though that the assumption stated in CAPITALS above is potentially a *huge* one. Ensuring polite destructors may be a daunting task\, so the above should be considered only by those 1) who know what they are doing and 2) are ready to pay the cost. TIMTOWTDI.
That said\, turning any Perl code fragment into a guaranteed polite one is as easy as variations on the following :
{
    my @saved_errs = ($!, $^E);   # *not* local ($!, $^E)
    doit;                         # may die
    ($!, $^E) = @saved_errs;      # restored only if doit did not die
}
# at this point, $! and $^E will be as they were on entry if doit
# did not die. That is polite. If doit did die, $! and $^E will be
# what they were at the point of death, which is probably correct
# in most cases (exceptions left as an exercise to the reader).
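As a concrete (hypothetical) instance of 'doit', here is the same pattern wrapped around a block whose lexical filehandle is closed implicitly at the closing brace -- exactly the kind of destructor discussed below; the log path is made up:

    {
        my @saved_errs = ($!, $^E);            # *not* local ($!, $^E)
        {
            open(my $log, '>>', '/tmp/example.log') or die "open: $!";
            print {$log} "done\n"               or die "print: $!";
        }   # $log goes out of scope here; the implicit close may touch $! and $^E
        ($!, $^E) = @saved_errs;               # restored only because nothing died
    }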
This of course may be overkill if the code fragment was already polite to begin with. The problem is\, how do we know ?
And what about destructors that are not under direct control of the programmer\, such as the closing of a lexical filehandle when leaving a block ?
If Perl was polite\, or possibly mostly polite\, with documented exceptions\, most of these problems would disappear.
Now\, who manipulates $! and $^E ?
- the underlying library calls (system call wrappers and library functions) that are used to implement Perl operators and builtins. When those set errno or syserrno\, these alterations are visible through $! and $^E.
- perl and C/XS code. In most cases\, those manipulations are done carefully to ensure the quality/sanity of $! and $^E (e.g. mapping of syserrno to errno\, filtering out of irrelevant failures\, etc.)
Clearly the politeness of Perl depends a lot on the courteousness of library calls.
A completely courteous underlying library would probably lead automatically to a mostly polite Perl (with the notable but unrelated exception of those operators specifically (mis?)designed as impolite\, such as require).
As pointed out expertly by Craig and others\, the relevant standards and manual pages go to some length to avoid making any courteousness commitment. In practice however\, many library functions seem to be courteous (which is fortunate since Perl currently depends on some critical functions -- such as malloc -- being so). As a consequence\, most Perl operators are already polite\, but not documented so.
In addition\, and quoting perlclib :
One thing Perl porters should note is that perl doesn't tend to use that much of the C standard library internally; you'll see very little use of\, for example\, the ctype.h functions in there. This is because Perl tends to reimplement or abstract standard library functions\, so that we know exactly how they're going to operate.
So Perl has some easy means to insulate itself from some potentially discourteous library functions. For instance\, if malloc and family (which *appear* to be courteous but are *not* documented or guaranteed to be so) suddenly became discourteous in some newer and better libc\, courteous versions could be developed by wrapping the offending ones as described above\, and substituted for the latter in iperlsys.h or a platform specific version : problem solved at some unavoidable but minimum runtime cost\, and with a single point of intervention.
These same mechanisms could be used\, with similar costs\, for non-critical discourteous library functions\, should they prove obnoxious.
Not all library functions are abstracted or reimplemented though\, as was seen with atoi. Some audit work would probably be in order there.
It might be guesswork and wishful thinking\, but I believe that politeness could be achieved at no or negligible runtime cost\, and at very low compatibility risk with future standards (meaning that there is no need to be as cautious as the C library maintainers and the C and POSIX standards committees). The development and documentation costs\, on the other hand\, are not negligible.
For the reasons outlined previously\, I obviously believe that Politeness is desirable. Knowing how Perl operators behave on success with respect to $! and $^E is precious to me for programmatic error handling. Since I am apparently alone in thinking so\, that might not win the day... But\, at the risk of boring the reader\, I cannot overemphasize how much I *loathe* the current impoliteness of require. I *have* done my homework and explored alternatives\, and got bitten each and every time by this very impoliteness of require. I can hack myself a $^E-aware autodie\, but I cannot get around this behavior of require. I have tried to implement an external module/pragma that would wrap the existing require in a polite one\, with no success so far (at this point I do not believe that it can be done\, but if I'm wrong please tell me). So maybe there is something in Politeness worth investigating.
At the very least I believe that it is useful to put a name to the concepts\, and to explicitly separate Politeness from Courteousness. It builds awareness that they are much less tightly linked than some seem to believe or fear.
Interest in Politeness provides a fresh outlook on Perl core code\, which is valuable in and of itself. That might in turn foster interest in means to assess the courteousness of some library functions\, helping preempt potential future atoi-like problems.
At present the politeness of perl operators and builtins is unspecified.
A Politeness policy might start by introducing the concepts of Politeness and Courteousness in perldoc\, by stating that Politeness is a worthy objective\, and by fixing require (whose current behavior -- IMO due more to oversight and disinterest than to wilful design -- would negate any progress made otherwise wrt politeness).
From there we might gradually turn other Perl operators and builtins into documented polite ones (or\, if justified\, documented impolite ones\, with explicit rationale).
--Christian
On 17/04/2015 21:53\, Craig A. Berry wrote:
On Wed\, Apr 15\, 2015 at 11:57 AM\, Christian Millour \cm\.perl@​abtela\.com wrote:
Wouldn't it be nice if Perl offered explicitly the guarantee that its own builtins and operators (or a large subset thereof) do not touch $! and $^E on success? With careful coding\, that would allow inspection of $! and $^E somewhat later than immediately following the operator\, which I could use.
I am deeply sorry about this erroneous and misleading wording. What I intended was 'following a *failed* operator'.
You're asking Perl to make a guarantee that C library maintainers and the C and POSIX standards committees have been steadily retreating from for a decade or two.
I am slowly realizing that and thank you for your time and expertise. I do not believe the situation is quite the same for Perl. Please see my other answer (to Paul Johnson) today on this thread (http://www.nntp.perl.org/group/perl.perl5.porters/2015/04/msg227447.html).
C11 goes further along this road than C99 by
saying that errno is only guaranteed to be zero at program start-up but not\, in a threaded context\, at thread start-up\, where the value of errno is indeterminate.[1] I take this to mean that if you are not in the main thread and have done nothing at all (much less made any syscalls\, successful or otherwise) you have no right to expect that errno is zero.
I do not expect errno to ever be zero because the only time I will ever look it up (through $!) would be to handle a reported error.
Thus your proposal to save and restore errno for every Perl op could\, in a threaded context\, mean saving and restoring an indeterminate value\,
True
which puts you right back in the spot you're already in of only
being able to depend on errno under the narrowly-defined and well-documented conditions that have already been discussed extensively in this thread.
Not really\, because I am not using the value of errno itself as the indicator that an error has occurred.
[1] Section 7.5 of \<http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf>
On Tue Apr 07 11:27:57 2015\, davem wrote:
But those are all for a completely different situation\, where perl "unexpectedly" makes system calls\, for example
unless (open my $fh, "<", $file) {
    my $y = lc $x;   # if $x is utf8, may load utf8.pm and trash $!
    croak "open: ($y): $!\n";
}
In those cases it is of course correct for perl to use dSAVE_ERRNO etc to preserve the current value of $!.
But in the case in this thread\, we *know* we're making a system call\, and so expect $! to be meaningless on success. In fact $! is explicitly documented as such in perlvar: it even includes an example with open():
This means C<errno>, hence C<$!>, is meaningful only I<immediately> after a B<failure>:

    if (open my $fh, "<", $filename) {
        # Here $! is meaningless.
    }
As Dave and others have said\, we explicitly document that $! is meaningless after a successful operation.
Rejecting this patch.
Tony
@tonycoz - Status changed from 'open' to 'rejected'
Migrated from rt.perl.org#124232 (status was 'rejected')
Searchable as RT124232$