Closed p5pRT closed 12 years ago
When running into out-of-memory situations, I regularly get segfaults. The backtrace always looks the same:
#9026 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
#9027 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
etc.
I never actually see the out of memory message because even printing it to the screen requires memory allocation.
There are likely ways around this, but I think it would be more worthwhile to just write the error message to stderr or "something like that". It is an emergency, and going through all the PerlIO layers makes little sense if, in the end, the kernel kills the process anyway due to stack growth.
Note that this isn't a bug, it's just a wish for saner error reporting :)
On Wed Oct 25 02:26:58 2006, schmorp@schmorp.de wrote:
> This is a bug report for perl from schmorp@schmorp.de, generated with
> the help of perlbug 1.35 running under perl v5.8.8.
>
> -----------------------------------------------------------------
> [Please enter your report here]
>
> When running into out-of-memory situations, I regularly get segfaults.
> The backtrace always looks the same:
>
> #9026 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
> #9027 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
> #9028 0x0000000000515b3a in PerlIOBuf_write (f=0x41180c8, vbuf=0x546e28, count=33) at perlio.c:3755
> #9029 0x0000000000517c9e in PerlIO_puts (f=0x412b828, s=0x546e28 "Out of memory during request for ") at perlio.c:1593
> #9030 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
> #9031 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
> #9032 0x0000000000515b3a in PerlIOBuf_write (f=0x41180c8, vbuf=0x546e28, count=33) at perlio.c:3755
> #9033 0x0000000000517c9e in PerlIO_puts (f=0x412b828, s=0x546e28 "Out of memory during request for ") at perlio.c:1593
> #9034 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
> #9035 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
> #9036 0x0000000000515b3a in PerlIOBuf_write (f=0x41180c8, vbuf=0x546e28, count=33) at perlio.c:3755
> #9037 0x0000000000517c9e in PerlIO_puts (f=0x412b828, s=0x546e28 "Out of memory during request for ") at perlio.c:1593
> #9038 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
> #9039 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
>
> etc.
>
> I never actually see the out of memory message because even printing it
> to the screen requires memory allocation.
>
> There are likely ways around this, but I think it would be more
> worthwhile to just write the error message to stderr or "something like
> that". It is an emergency, and going through all the PerlIO layers
> makes little sense if, in the end, the kernel kills the process anyway
> due to stack growth.
>
> Note that this isn't a bug, it's just a wish for saner error reporting :)
Is the "Out of memory" message coming from the kernel or from Perl? If the former, is there any way for Perl to detect that situation first so that Perl can cut through all those layers itself?
Thank you very much. Jim Keenan
The RT System itself - Status changed from 'new' to 'open'
On Sat, Sep 01, 2012 at 04:32:53PM -0700, James E Keenan via RT wrote:
> On Wed Oct 25 02:26:58 2006, schmorp@schmorp.de wrote:
> > This is a bug report for perl from schmorp@schmorp.de, generated with
> > the help of perlbug 1.35 running under perl v5.8.8.
> >
> > -----------------------------------------------------------------
> > [Please enter your report here]
> >
> > When running into out-of-memory situations, I regularly get
> > segfaults. The backtrace always looks the same:
> >
> > #9026 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
> > #9027 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
> >
> > etc.
> >
> > I never actually see the out of memory message because even printing
> > it to the screen requires memory allocation.
> >
> > There are likely ways around this, but I think it would be more
> > worthwhile to just write the error message to stderr or "something
> > like that". It is an emergency, and going through all the PerlIO
> > layers makes little sense if, in the end, the kernel kills the
> > process anyway due to stack growth.
> >
> > Note that this isn't a bug, it's just a wish for saner error
> > reporting :)
>
> Is the "Out of memory" message coming from the kernel or from Perl? If
> the former, is there any way for Perl to detect that situation first so
> that Perl can cut through all those layers itself?
It's Perl_malloc() trying to report an error by calling into PerlIO, which attempts to allocate memory with Perl_malloc(), which fails and attempts to report an error, and so on - hence the 9000 stack frames.

This is a wishlist bug, which I think could be left open.
Tony
On Sat Sep 01 16:40:15 2012, tonyc wrote:
> It's Perl_malloc() trying to report an error by calling into PerlIO,
> which attempts to allocate memory with Perl_malloc(), which fails and
> attempts to report an error, and so on - hence the 9000 stack frames.
>
> This is a wishlist bug, which I think could be left open.
I wouldn't call that a wishlist bug.

Shouldn't Perl_malloc just bypass PerlIO?
--
Father Chrysostomos
On Sat, Sep 01, 2012 at 05:26:03PM -0700, Father Chrysostomos via RT wrote:
> On Sat Sep 01 16:40:15 2012, tonyc wrote:
> > It's Perl_malloc() trying to report an error by calling into PerlIO,
> > which attempts to allocate memory with Perl_malloc(), which fails and
> > attempts to report an error, and so on - hence the 9000 stack frames.
> >
> > This is a wishlist bug, which I think could be left open.
>
> I wouldn't call that a wishlist bug.

The original report was severity wishlist.

> Shouldn't Perl_malloc just bypass PerlIO?
Yes.
Or catch the recursion and then bypass PerlIO. Even if a 2GB allocation fails, the 16KB for the PerlIO buffer will probably succeed.
Tony
On Sat, Sep 1, 2012 at 7:47 PM, Tony Cook <tony@develop-help.com> wrote:
> On Sat, Sep 01, 2012 at 05:26:03PM -0700, Father Chrysostomos via RT wrote:
> > On Sat Sep 01 16:40:15 2012, tonyc wrote:
> > > It's Perl_malloc() trying to report an error by calling into
> > > PerlIO, which attempts to allocate memory with Perl_malloc(), which
> > > fails and attempts to report an error, and so on - hence the 9000
> > > stack frames.
> > >
> > > This is a wishlist bug, which I think could be left open.
> >
> > I wouldn't call that a wishlist bug.
>
> The original report was severity wishlist.
>
> > Shouldn't Perl_malloc just bypass PerlIO?
>
> Yes.
>
> Or catch the recursion and then bypass PerlIO. Even if a 2GB allocation
> fails, the 16KB for the PerlIO buffer will probably succeed.
When the code in malloc.c was written, PerlIO as such was still a few years in the future; I think everything with the PerlIO prefix would have just been macros pointing to stdio functions, and thus PerlIO_puts() would not have been allocating memory. That's partially a hunch that I haven't done all the spelunking to prove, but I think it's true. TODO: audit all uses of PerlIO_puts() in the core to see if there are any cases where memory allocation would be problematic.
The attached will probably do the trick for malloc.c, inspired by S_write_no_mem in util.c (which we can't use because it calls exit()). I've run this through a quick run of the test suite after configuring with -Dusemymalloc=y and that looked fine, but I haven't tested explicitly for out-of-memory conditions. And I'm not sure if MALLOC_WRITE_NOMEM is really the best name for the macro or whether this is the best place to put it; should we have something general-purpose in the API that does this?
From d75910164cbad239293e94ddac3bcde61f5b14c6 Mon Sep 17 00:00:00 2001
From: "Craig A. Berry" <craigberry@mac.com>
Date: Sun, 2 Sep 2012 21:30:55 -0500
Subject: [PATCH] Out of memory message should not allocate memory.

This fixes [perl #40595]. When Perl_malloc reports an out of memory
error, it should not make calls to PerlIO functions that may turn
around and allocate memory using Perl_malloc. A simple write() should
be ok, though. Inspired by S_write_no_mem() from util.c
---
 malloc.c | 15 +++++++++------
 1 files changed, 9 insertions(+), 6 deletions(-)

+#define MALLOC_WRITE_NOMEM(s) \
+    PerlLIO_write(PerlIO_fileno(PerlIO_stderr()),s,strlen(s))
+
 Malloc_t
 Perl_malloc(size_t nbytes)
 {
@@ -1290,14 +1293,14 @@ Perl_malloc(size_t nbytes)
 	    dTHX;
 	    if (!PL_nomemok) {
 #if defined(PLAIN_MALLOC) && defined(NO_FANCY_MALLOC)
-		PerlIO_puts(PerlIO_stderr(),"Out of memory!\n");
+		MALLOC_WRITE_NOMEM("Out of memory!\n");
 #else
 		char buff[80];
 		char *eb = buff + sizeof(buff) - 1;
 		char *s = eb;
 		size_t n = nbytes;

-		PerlIO_puts(PerlIO_stderr(),"Out of memory during request for ");
+		MALLOC_WRITE_NOMEM("Out of memory during request for ");
 #if defined(DEBUGGING) || defined(RCHECK)
 		n = size;
 #endif
@@ -1305,15 +1308,15 @@ Perl_malloc(size_t nbytes)
 		do {
 		    *--s = '0' + (n % 10);
 		} while (n /= 10);
-		PerlIO_puts(PerlIO_stderr(),s);
-		PerlIO_puts(PerlIO_stderr()," bytes, total sbrk() is ");
+		MALLOC_WRITE_NOMEM(s);
+		MALLOC_WRITE_NOMEM(" bytes, total sbrk() is ");
 		s = eb;
 		n = goodsbrk + sbrk_slack;
 		do {
 		    *--s = '0' + (n % 10);
 		} while (n /= 10);
-		PerlIO_puts(PerlIO_stderr(),s);
-		PerlIO_puts(PerlIO_stderr()," bytes!\n");
+		MALLOC_WRITE_NOMEM(s);
+		MALLOC_WRITE_NOMEM(" bytes!\n");
 #endif /* defined(PLAIN_MALLOC) && defined(NO_FANCY_MALLOC) */
 		my_exit(1);
 	    }
--
1.7.7.GIT
On Sun, Sep 2, 2012 at 10:05 PM, Craig A. Berry <craig.a.berry@gmail.com> wrote:
> On Sat, Sep 1, 2012 at 7:47 PM, Tony Cook <tony@develop-help.com> wrote:
> > On Sat, Sep 01, 2012 at 05:26:03PM -0700, Father Chrysostomos via RT wrote:
> > > On Sat Sep 01 16:40:15 2012, tonyc wrote:
> > > > It's Perl_malloc() trying to report an error by calling into
> > > > PerlIO, which attempts to allocate memory with Perl_malloc(),
> > > > which fails and attempts to report an error, and so on - hence
> > > > the 9000 stack frames.
> > > >
> > > > This is a wishlist bug, which I think could be left open.
> > >
> > > I wouldn't call that a wishlist bug.
> >
> > The original report was severity wishlist.
> >
> > > Shouldn't Perl_malloc just bypass PerlIO?
> >
> > Yes.
> >
> > Or catch the recursion and then bypass PerlIO. Even if a 2GB
> > allocation fails, the 16KB for the PerlIO buffer will probably
> > succeed.
>
> When the code in malloc.c was written, PerlIO as such was still a few
> years in the future; I think everything with the PerlIO prefix would
> have just been macros pointing to stdio functions, and thus
> PerlIO_puts() would not have been allocating memory. That's partially a
> hunch that I haven't done all the spelunking to prove, but I think it's
> true. TODO: audit all uses of PerlIO_puts() in the core to see if there
> are any cases where memory allocation would be problematic.
>
> The attached will probably do the trick for malloc.c, inspired by
> S_write_no_mem in util.c (which we can't use because it calls exit()).
> I've run this through a quick run of the test suite after configuring
> with -Dusemymalloc=y and that looked fine, but I haven't tested
> explicitly for out-of-memory conditions. And I'm not sure if
> MALLOC_WRITE_NOMEM is really the best name for the macro or whether
> this is the best place to put it; should we have something
> general-purpose in the API that does this?
I pushed a slightly different version as <http://perl5.git.perl.org/perl.git/commitdiff/7cd83f6573da7fd1101fc83cb13867d52fea3d41>.
I failed in my attempt to put together a reproducer on OS X Lion. Apparently ulimit, limit, and launchctl limit have no effect and any process can grab as much virtual memory as it wants, which means you make the whole machine grind to a halt before Perl ever runs out of memory.
Luckily VMS has better controls, and I was easily able to reproduce the problem with:
$ perl -e "for (0..100_000) {unshift @a, 'X' x (1024 * 10);}"
%SYSTEM-F-ACCVIO, access violation, reason mask=04, virtual address=000007FDD74D6000, PC=FFFFFFFF800DC170, PS=0000001B

Improperly handled condition, image exit forced by last chance handler.
Signal arguments:
  Number = 0000000000000005
  Name   = 000000000000000C
           0000000000040004
           000007FDD74D6000
           FFFFFFFF800DC170
           000000000000001B

Register dump:
  R0  = 0000000000000000  R1  = FFFFFFFF883C6600  R2  = 0000000001000061
  R3  = 000000000051201C  R4  = 000000007FFCF818  R5  = 000000007FFCF8B0
  R6  = 0000000000000001  R7  = 0000000000000000  R8  = 0000000000000004
  R9  = FFFFFFFFFFFFFFFF  R10 = 0000000000000404  R11 = 0000000000000000
  SP  = 000000007AD0A000  TP  = 0000000001331200  R14 = C000000000000614
  R15 = 0000000000000003  R16 = FFFFFFFF88299568  R17 = 0000000000000080
  R18 = 0000000000000000  R19 = FFFFF802888023A0  R20 = 000000000000000C
  R21 = FFFFFFFF881C9360  R22 = FFFFFFFF880040B0  R23 = 0000000000000003
  R24 = 000000007693F550  R25 = 0000000000000007  R26 = 000000007B88E1B0
  R27 = 000000000000006E  R28 = 000000000000000F  R29 = 000007FDD74D61F0
  R30 = 0000000000000072  R31 = 0000000000000065  PC  = FFFFFFFF800DC170
  BSP/STORE = 000007FDD74D61F0 / 000007FDD74D6000
  PSR = 000010130802E030  IIPA = FFFFFFFF800DC160
  B0  = FFFFFFFF8019D0E0  B6  = FFFFFFFF800DC0C0  B7  = FFFFF802888023A0

Interrupted Frame RSE Backing Store, Size = 8 registers

No access to RSE backing store
and after the patch I get:
$ perl -e "for (0..100_000) {unshift @a, 'X' x (1024 * 10);}"
Out of memory during request for 10252 bytes, total sbrk() is 1066305536 bytes!
%SYSTEM-F-ABORT, abort
which I believe is the desired behavior. I think the ticket can be closed.
@tonycoz - Status changed from 'open' to 'resolved'
Migrated from rt.perl.org#40595 (status was 'resolved')
Searchable as RT40595$