Perl / perl5

đŸȘ The Perl programming language
https://dev.perl.org/perl5/

out of memory message should not require memory allocation #8647

Closed p5pRT closed 12 years ago

p5pRT commented 17 years ago

Migrated from rt.perl.org#40595 (status was 'resolved')

Searchable as RT40595$

p5pRT commented 17 years ago

From schmorp@schmorp.de

Created by schmorp@schmorp.de

When running into out-of-memory situations, I regularly get segfaults. The backtrace always looks the same:

```
#9026 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
#9027 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
#9028 0x0000000000515b3a in PerlIOBuf_write (f=0x41180c8, vbuf=0x546e28, count=33) at perlio.c:3755
#9029 0x0000000000517c9e in PerlIO_puts (f=0x412b828, s=0x546e28 "Out of memory during request for ") at perlio.c:1593
#9030 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
#9031 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
#9032 0x0000000000515b3a in PerlIOBuf_write (f=0x41180c8, vbuf=0x546e28, count=33) at perlio.c:3755
#9033 0x0000000000517c9e in PerlIO_puts (f=0x412b828, s=0x546e28 "Out of memory during request for ") at perlio.c:1593
#9034 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
#9035 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
#9036 0x0000000000515b3a in PerlIOBuf_write (f=0x41180c8, vbuf=0x546e28, count=33) at perlio.c:3755
#9037 0x0000000000517c9e in PerlIO_puts (f=0x412b828, s=0x546e28 "Out of memory during request for ") at perlio.c:1593
#9038 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
#9039 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
```

etc.

I never actually see the out of memory message because even printing it to the screen requires memory allocation.

There are likely ways around this, but I think it would be more worthwhile to just write the error message to stderr or something like that. It is an emergency, and going through all the PerlIO layers makes little sense if, in the end, the kernel kills the process anyway due to stack growth.

Note that this isn't a bug, it's just a wish for saner error reporting :)

Perl Info

```
Flags:
    category=core
    severity=wishlist

Site configuration information for perl v5.8.8:

Configured by Marc Lehmann at Mon Aug 21 07:39:41 CEST 2006.

Summary of my perl5 (revision 5 version 8 subversion 8 patch 28443) configuration:
  Platform:
    osname=linux, osvers=2.6.17.6, archname=amd64-linux
    uname='linux cerebro 2.6.17.6 #9 smp thu aug 3 01:04:29 cest 2006 x86_64 gnulinux '
    config_args='-Duselargefiles -Dxxxxuse64bitint -Uuse64bitall -Dusemymalloc=y -Dcc=gcc -Dccflags=-DPERL_DONT_CREATE_GVSV -ggdb -Dcppflags=-DPERL_DONT_CREATE_GVSV -D_GNU_SOURCE -I/opt/include -Doptimize=-O4 -march=opteron -mtune=opteron -funroll-loops -fno-strict-aliasing -Dcccdlflags=-fPIC -Dldflags=-L/opt/perl/lib -L/opt/lib -Dlibs=-ldl -lm -lcrypt -Darchname=amd64-linux -Dprefix=/opt/perl -Dprivlib=/opt/perl/lib/perl5 -Darchlib=/opt/perl/lib/perl5 -Dvendorprefix=/opt/perl -Dvendorlib=/opt/perl/lib/perl5 -Dvendorarch=/opt/perl/lib/perl5 -Dsiteprefix=/opt/perl -Dsitelib=/opt/perl/lib/perl5 -Dsitearch=/opt/perl/lib/perl5 -Dsitebin=/opt/perl/bin -Dman1dir=/opt/perl/man/man1 -Dman3dir=/opt/perl/man/man3 -Dsiteman1dir=/opt/perl/man/man1 -Dsiteman3dir=/opt/perl/man/man3 -Dman1ext=1 -Dman3ext=3 -Dpager=/usr/bin/less -Uafs -Uusesfio -Uusenm -Uuseshrplib -Dd_dosuid -Dusethreads=undef -Duse5005threads=undef -Duseithreads=undef -Dusemultiplicity=undef -Demail=perl-binary@plan9.de -Dcf_email=perl-binary@plan9.de -Dcf_by=Marc Lehmann -Dlocincpth=/opt/perl/include /opt/include -Dmyhostname=localhost -Dmultiarch=undef -Dbin=/opt/perl/bin -des'
    hint=recommended, useposix=true, d_sigaction=define
    usethreads=undef use5005threads=undef useithreads=undef usemultiplicity=undef
    useperlio=define d_sfio=undef uselargefiles=define usesocks=undef
    use64bitint=define use64bitall=undef uselongdouble=undef
    usemymalloc=y, bincompat5005=undef
  Compiler:
    cc='gcc', ccflags ='-ggdb -fno-strict-aliasing -pipe -I/opt/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64',
    optimize='-O4 -march=opteron -mtune=opteron -funroll-loops -fno-strict-aliasing',
    cppflags='-DPERL_DONT_CREATE_GVSV -D_GNU_SOURCE -I/opt/include -DPERL_DONT_CREATE_GVSV -ggdb -fno-strict-aliasing -pipe -Wdeclaration-after-statement -I/opt/include'
    ccversion='', gccversion='4.1.2 20060729 (prerelease) (Debian 4.1.1-10)', gccosandvers=''
    intsize=4, longsize=8, ptrsize=8, doublesize=8, byteorder=12345678
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=16
    ivtype='long', ivsize=8, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
    alignbytes=8, prototype=define
  Linker and Libraries:
    ld='gcc', ldflags ='-L/opt/perl/lib -L/opt/lib -L/usr/local/lib'
    libpth=/usr/local/lib /lib /usr/lib
    libs=-ldl -lm -lcrypt
    perllibs=-ldl -lm -lcrypt
    libc=/lib/libc-2.3.6.so, so=so, useshrplib=false, libperl=libperl.a
    gnulibc_version='2.3.6'
  Dynamic Linking:
    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags='-Wl,-E'
    cccdlflags='-fPIC', lddlflags='-shared -L/opt/perl/lib -L/opt/lib -L/usr/local/lib'

Locally applied patches:
    MAINT28213

@INC for perl v5.8.8:
    /root/src/sex
    /opt/perl/lib/perl5
    /opt/perl/lib/perl5
    /opt/perl/lib/perl5
    /opt/perl/lib/perl5
    /opt/perl/lib/perl5
    .

Environment for perl v5.8.8:
    HOME=/root
    LANG (unset)
    LANGUAGE (unset)
    LC_CTYPE=de_DE.UTF-8
    LD_LIBRARY_PATH (unset)
    LOGDIR (unset)
    PATH=/root/s2:/root/s:/opt/bin:/opt/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/X11/bin:/usr/games:/root/src/uunet:.
    PERL5LIB=/root/src/sex
    PERL5_CPANPLUS_CONFIG=/root/.cpanplus/config
    PERLDB_OPTS=ornaments=0
    PERL_BADLANG (unset)
    PERL_UNICODE=EAL
    SHELL=/bin/bash
```
p5pRT commented 12 years ago

From @jkeenan

On Wed Oct 25 02:26:58 2006, schmorp@schmorp.de wrote:

> This is a bug report for perl from schmorp@schmorp.de, generated with
> the help of perlbug 1.35 running under perl v5.8.8.
>
> -----------------------------------------------------------------
> [Please enter your report here]
>
> When running into out of memory situations, I regularly get segfaults.
> The backtrace always looks the same:
>
>     #9026 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
>     #9027 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
>     #9028 0x0000000000515b3a in PerlIOBuf_write (f=0x41180c8, vbuf=0x546e28, count=33) at perlio.c:3755
>     #9029 0x0000000000517c9e in PerlIO_puts (f=0x412b828, s=0x546e28 "Out of memory during request for ") at perlio.c:1593
>     #9030 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
>     #9031 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
>     #9032 0x0000000000515b3a in PerlIOBuf_write (f=0x41180c8, vbuf=0x546e28, count=33) at perlio.c:3755
>     #9033 0x0000000000517c9e in PerlIO_puts (f=0x412b828, s=0x546e28 "Out of memory during request for ") at perlio.c:1593
>     #9034 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
>     #9035 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
>     #9036 0x0000000000515b3a in PerlIOBuf_write (f=0x41180c8, vbuf=0x546e28, count=33) at perlio.c:3755
>     #9037 0x0000000000517c9e in PerlIO_puts (f=0x412b828, s=0x546e28 "Out of memory during request for ") at perlio.c:1593
>     #9038 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
>     #9039 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
>
> etc.
>
> I never actually see the out of memory message because even printing it
> to the screen requires memory allocation.
>
> There are likely ways around this, but I think it would be more
> worthwhile to just write the error message to stderr or "something like
> that". It is an emergency, and going through all perlio layers makes
> little sense, if, in the end, the kernel kills it anyways due to stack
> growth.
>
> Note that this isn't a bug, it's just a wish for saner error reporting :)

Is the "Out of memory" message coming from the kernel or from Perl? If the former, is there any way for Perl to detect that situation first so that Perl can cut through all those layers itself?

Thank you very much.

Jim Keenan

p5pRT commented 12 years ago

The RT System itself - Status changed from 'new' to 'open'

p5pRT commented 12 years ago

From @tonycoz

On Sat, Sep 01, 2012 at 04:32:53PM -0700, James E Keenan via RT wrote:

> On Wed Oct 25 02:26:58 2006, schmorp@schmorp.de wrote:
>
> > This is a bug report for perl from schmorp@schmorp.de, generated with
> > the help of perlbug 1.35 running under perl v5.8.8.
> >
> > -----------------------------------------------------------------
> > [Please enter your report here]
> >
> > When running into out of memory situations, I regularly get
> > segfaults. The backtrace always looks the same:
> >
> >     #9026 0x000000000042500e in Perl_malloc (nbytes=4104) at malloc.c:1479
> >     #9027 0x0000000000514b55 in PerlIOBuf_get_base (f=<value optimized out>) at perlio.c:3890
> >
> > etc.
> >
> > I never actually see the out of memory message because even printing
> > it to the screen requires memory allocation.
> >
> > There are likely ways around this, but I think it would be more
> > worthwhile to just write the error message to stderr or "something
> > like that". It is an emergency, and going through all perlio layers
> > makes little sense, if, in the end, the kernel kills it anyways due
> > to stack growth.
> >
> > Note that this isn't a bug, it's just a wish for saner error reporting :)
>
> Is the "Out of memory" message coming from the kernel or from Perl? If
> the former, is there any way for Perl to detect that situation first so
> that Perl can cut through all those layers itself?

It's Perl_malloc() trying to report an error by calling into PerlIO, which attempts to allocate memory with Perl_malloc(), which fails and attempts to report an error, and so on - hence 9000 stack frames.

This is a wishlist bug, which I think could be left open.

Tony

p5pRT commented 12 years ago

From @cpansprout

On Sat Sep 01 16:40:15 2012, tonyc wrote:

> It's Perl_malloc() trying to report an error by calling into PerlIO,
> which attempts to allocate memory with Perl_malloc() which fails and
> attempts to report an error and so on - hence 9000 stack frames.
>
> This is a wishlist bug, which I think could be left open.

I wouldn’t call that a wishlist bug.

Shouldn’t Perl_malloc just bypass PerlIO?

--

Father Chrysostomos

p5pRT commented 12 years ago

From @tonycoz

On Sat, Sep 01, 2012 at 05:26:03PM -0700, Father Chrysostomos via RT wrote:

> On Sat Sep 01 16:40:15 2012, tonyc wrote:
>
> > It's Perl_malloc() trying to report an error by calling into PerlIO,
> > which attempts to allocate memory with Perl_malloc() which fails and
> > attempts to report an error and so on - hence 9000 stack frames.
> >
> > This is a wishlist bug, which I think could be left open.
>
> I wouldn’t call that a wishlist bug.

The original report was severity wishlist.

> Shouldn’t Perl_malloc just bypass PerlIO?

Yes.

Or catch the recursion and then bypass PerlIO. Even if a 2GB allocation fails, the 16KB for the PerlIO buffer will probably succeed.

Tony

p5pRT commented 12 years ago

From @craigberry

On Sat, Sep 1, 2012 at 7:47 PM, Tony Cook <tony@develop-help.com> wrote:

> On Sat, Sep 01, 2012 at 05:26:03PM -0700, Father Chrysostomos via RT wrote:
>
> > On Sat Sep 01 16:40:15 2012, tonyc wrote:
> >
> > > It's Perl_malloc() trying to report an error by calling into
> > > PerlIO, which attempts to allocate memory with Perl_malloc() which
> > > fails and attempts to report an error and so on - hence 9000 stack
> > > frames.
> > >
> > > This is a wishlist bug, which I think could be left open.
> >
> > I wouldn’t call that a wishlist bug.
>
> The original report was severity wishlist.
>
> > Shouldn’t Perl_malloc just bypass PerlIO?
>
> Yes.
>
> Or catch the recursion and then bypass PerlIO. Even if a 2GB allocation
> fails, the 16KB for the PerlIO buffer will probably succeed.

When the code in malloc.c was written, PerlIO as such was still a few years in the future; I think everything with the PerlIO prefix would have just been macros pointing to stdio functions, and thus PerlIO_puts() would not have been allocating memory. That's partially a hunch that I haven't done all the spelunking to prove, but I think it's true. TODO: audit all uses of PerlIO_puts() in the core to see if there are any cases where memory allocation would be problematic.

The attached will probably do the trick for malloc.c, inspired by S_write_no_mem in util.c (which we can't use because it calls exit()). I've run this through a quick run of the test suite after configuring with -Dusemymalloc=y and that looked fine, but I haven't tested explicitly for out-of-memory conditions. And I'm not sure if MALLOC_WRITE_NOMEM is really the best name for the macro, or whether this is the best place to put it; should we have something general-purpose in the API that does this?

Inline Patch:

```diff
From d75910164cbad239293e94ddac3bcde61f5b14c6 Mon Sep 17 00:00:00 2001
From: "Craig A. Berry" <craigberry@mac.com>
Date: Sun, 2 Sep 2012 21:30:55 -0500
Subject: [PATCH] Out of memory message should not allocate memory.

This fixes [perl #40595].  When Perl_malloc reports an out of memory
error, it should not make calls to PerlIO functions that may turn
around and allocate memory using Perl_malloc.  A simple write() should
be ok, though.  Inspired by S_write_no_mem() from util.c.
---
 malloc.c | 15 +++++++++------
 1 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/malloc.c b/malloc.c
index f658489..24a298c 100644
--- a/malloc.c
+++ b/malloc.c
@@ -1259,6 +1259,9 @@ S_ajust_size_and_find_bucket(size_t *nbytes_p)
     return bucket;
 }
 
+/* Don't use PerlIO buffered writes as they allocate memory. */
+#define MALLOC_WRITE_NOMEM(s) PerlLIO_write(PerlIO_fileno(PerlIO_stderr()),s,strlen(s))
+
 Malloc_t
 Perl_malloc(size_t nbytes)
 {
@@ -1290,14 +1293,14 @@ Perl_malloc(size_t nbytes)
             dTHX;
             if (!PL_nomemok) {
 #if defined(PLAIN_MALLOC) && defined(NO_FANCY_MALLOC)
-                PerlIO_puts(PerlIO_stderr(),"Out of memory!\n");
+                MALLOC_WRITE_NOMEM("Out of memory!\n");
 #else
                 char buff[80];
                 char *eb = buff + sizeof(buff) - 1;
                 char *s = eb;
                 size_t n = nbytes;
 
-                PerlIO_puts(PerlIO_stderr(),"Out of memory during request for ");
+                MALLOC_WRITE_NOMEM("Out of memory during request for ");
 #if defined(DEBUGGING) || defined(RCHECK)
                 n = size;
 #endif
@@ -1305,15 +1308,15 @@ Perl_malloc(size_t nbytes)
                 do {
                     *--s = '0' + (n % 10);
                 } while (n /= 10);
-                PerlIO_puts(PerlIO_stderr(),s);
-                PerlIO_puts(PerlIO_stderr()," bytes, total sbrk() is ");
+                MALLOC_WRITE_NOMEM(s);
+                MALLOC_WRITE_NOMEM(" bytes, total sbrk() is ");
                 s = eb;
                 n = goodsbrk + sbrk_slack;
                 do {
                     *--s = '0' + (n % 10);
                 } while (n /= 10);
-                PerlIO_puts(PerlIO_stderr(),s);
-                PerlIO_puts(PerlIO_stderr()," bytes!\n");
+                MALLOC_WRITE_NOMEM(s);
+                MALLOC_WRITE_NOMEM(" bytes!\n");
 #endif /* defined(PLAIN_MALLOC) && defined(NO_FANCY_MALLOC) */
                 my_exit(1);
             }
-- 
1.7.7.GIT
```

p5pRT commented 12 years ago

From @craigberry

0001-Out-of-memory-message-should-not-allocate-memory.patch

```diff
From d75910164cbad239293e94ddac3bcde61f5b14c6 Mon Sep 17 00:00:00 2001
From: "Craig A. Berry" <craigberry@mac.com>
Date: Sun, 2 Sep 2012 21:30:55 -0500
Subject: [PATCH] Out of memory message should not allocate memory.

This fixes [perl #40595].  When Perl_malloc reports an out of memory
error, it should not make calls to PerlIO functions that may turn
around and allocate memory using Perl_malloc.  A simple write() should
be ok, though.  Inspired by S_write_no_mem() from util.c.
---
 malloc.c | 15 +++++++++------
 1 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/malloc.c b/malloc.c
index f658489..24a298c 100644
--- a/malloc.c
+++ b/malloc.c
@@ -1259,6 +1259,9 @@ S_ajust_size_and_find_bucket(size_t *nbytes_p)
     return bucket;
 }
 
+/* Don't use PerlIO buffered writes as they allocate memory. */
+#define MALLOC_WRITE_NOMEM(s) PerlLIO_write(PerlIO_fileno(PerlIO_stderr()),s,strlen(s))
+
 Malloc_t
 Perl_malloc(size_t nbytes)
 {
@@ -1290,14 +1293,14 @@ Perl_malloc(size_t nbytes)
             dTHX;
             if (!PL_nomemok) {
 #if defined(PLAIN_MALLOC) && defined(NO_FANCY_MALLOC)
-                PerlIO_puts(PerlIO_stderr(),"Out of memory!\n");
+                MALLOC_WRITE_NOMEM("Out of memory!\n");
 #else
                 char buff[80];
                 char *eb = buff + sizeof(buff) - 1;
                 char *s = eb;
                 size_t n = nbytes;
 
-                PerlIO_puts(PerlIO_stderr(),"Out of memory during request for ");
+                MALLOC_WRITE_NOMEM("Out of memory during request for ");
 #if defined(DEBUGGING) || defined(RCHECK)
                 n = size;
 #endif
@@ -1305,15 +1308,15 @@ Perl_malloc(size_t nbytes)
                 do {
                     *--s = '0' + (n % 10);
                 } while (n /= 10);
-                PerlIO_puts(PerlIO_stderr(),s);
-                PerlIO_puts(PerlIO_stderr()," bytes, total sbrk() is ");
+                MALLOC_WRITE_NOMEM(s);
+                MALLOC_WRITE_NOMEM(" bytes, total sbrk() is ");
                 s = eb;
                 n = goodsbrk + sbrk_slack;
                 do {
                     *--s = '0' + (n % 10);
                 } while (n /= 10);
-                PerlIO_puts(PerlIO_stderr(),s);
-                PerlIO_puts(PerlIO_stderr()," bytes!\n");
+                MALLOC_WRITE_NOMEM(s);
+                MALLOC_WRITE_NOMEM(" bytes!\n");
 #endif /* defined(PLAIN_MALLOC) && defined(NO_FANCY_MALLOC) */
                 my_exit(1);
             }
-- 
1.7.7.GIT
```
p5pRT commented 12 years ago

From @craigberry

On Sun, Sep 2, 2012 at 10:05 PM, Craig A. Berry <craig.a.berry@gmail.com> wrote:

> On Sat, Sep 1, 2012 at 7:47 PM, Tony Cook <tony@develop-help.com> wrote:
>
> > On Sat, Sep 01, 2012 at 05:26:03PM -0700, Father Chrysostomos via RT wrote:
> >
> > > On Sat Sep 01 16:40:15 2012, tonyc wrote:
> > >
> > > > It's Perl_malloc() trying to report an error by calling into
> > > > PerlIO, which attempts to allocate memory with Perl_malloc()
> > > > which fails and attempts to report an error and so on - hence
> > > > 9000 stack frames.
> > > >
> > > > This is a wishlist bug, which I think could be left open.
> > >
> > > I wouldn’t call that a wishlist bug.
> >
> > The original report was severity wishlist.
> >
> > > Shouldn’t Perl_malloc just bypass PerlIO?
> >
> > Yes.
> >
> > Or catch the recursion and then bypass PerlIO. Even if a 2GB
> > allocation fails, the 16KB for the PerlIO buffer will probably
> > succeed.
>
> When the code in malloc.c was written, PerlIO as such was still a few
> years in the future; I think everything with the PerlIO prefix would
> have just been macros pointing to stdio functions, and thus
> PerlIO_puts() would not have been allocating memory. That's partially a
> hunch that I haven't done all the spelunking to prove, but I think it's
> true. TODO: audit all uses of PerlIO_puts() in the core to see if there
> are any cases where memory allocation would be problematic.
>
> The attached will probably do the trick for malloc.c, inspired by
> S_write_no_mem in util.c (which we can't use because it calls exit()).
> I've run this through a quick run of the test suite after configuring
> with -Dusemymalloc=y and that looked fine, but I haven't tested
> explicitly for out-of-memory conditions. And I'm not sure if
> MALLOC_WRITE_NOMEM is really the best name for the macro, or whether
> this is the best place to put it; should we have something
> general-purpose in the API that does this?

I pushed a slightly different version as <http://perl5.git.perl.org/perl.git/commitdiff/7cd83f6573da7fd1101fc83cb13867d52fea3d41>.

I failed in my attempt to put together a reproducer on OS X Lion. Apparently ulimit, limit, and launchctl limit have no effect and any process can grab as much virtual memory as it wants, which means you make the whole machine grind to a halt before Perl ever runs out of memory.
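On Linux, by contrast, capping the address space with `ulimit -v` in a subshell typically suffices to provoke the failure. This is a sketch; the 100 MB cap and the loop bounds are arbitrary, and exact behavior depends on how the perl was built:

```shell
# Cap virtual memory for a subshell (here ~100 MB), then let perl grow an
# array until allocation fails; the limit applies only inside the ( ... ).
( ulimit -v 102400
  perl -e 'my @a; push @a, "X" x (1024 * 10) for 0 .. 100_000' )
echo "perl exited with status $?"
```

A perl built with -Dusemymalloc=y should print the "Out of memory during request for ..." diagnostic; other builds may die differently, but either way the process exits nonzero.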

Luckily VMS has better controls, and I was easily able to reproduce the problem with:

```
$ perl -e "for (0..100_000) {unshift @a, 'X' x (1024 * 10);}"
%SYSTEM-F-ACCVIO, access violation, reason mask=04, virtual address=000007FDD74D6000, PC=FFFFFFFF800DC170, PS=0000001B

  Improperly handled condition, image exit forced by last chance handler.
  Signal arguments:   Number = 0000000000000005
                      Name   = 000000000000000C
                               0000000000040004
                               000007FDD74D6000
                               FFFFFFFF800DC170
                               000000000000001B

  Register dump:
  R0  = 0000000000000000  R1  = FFFFFFFF883C6600  R2  = 0000000001000061
  R3  = 000000000051201C  R4  = 000000007FFCF818  R5  = 000000007FFCF8B0
  R6  = 0000000000000001  R7  = 0000000000000000  R8  = 0000000000000004
  R9  = FFFFFFFFFFFFFFFF  R10 = 0000000000000404  R11 = 0000000000000000
  SP  = 000000007AD0A000  TP  = 0000000001331200  R14 = C000000000000614
  R15 = 0000000000000003  R16 = FFFFFFFF88299568  R17 = 0000000000000080
  R18 = 0000000000000000  R19 = FFFFF802888023A0  R20 = 000000000000000C
  R21 = FFFFFFFF881C9360  R22 = FFFFFFFF880040B0  R23 = 0000000000000003
  R24 = 000000007693F550  R25 = 0000000000000007  R26 = 000000007B88E1B0
  R27 = 000000000000006E  R28 = 000000000000000F  R29 = 000007FDD74D61F0
  R30 = 0000000000000072  R31 = 0000000000000065  PC  = FFFFFFFF800DC170
  BSP/STORE = 000007FDD74D61F0 / 000007FDD74D6000  PSR = 000010130802E030
  IIPA = FFFFFFFF800DC160
  B0 = FFFFFFFF8019D0E0  B6 = FFFFFFFF800DC0C0  B7 = FFFFF802888023A0

  Interrupted Frame RSE Backing Store, Size = 8 registers

No access to RSE backing store
```

and after the patch I get:

```
$ perl -e "for (0..100_000) {unshift @a, 'X' x (1024 * 10);}"
Out of memory during request for 10252 bytes, total sbrk() is 1066305536 bytes!
%SYSTEM-F-ABORT, abort
```

which I believe is the desired behavior. I think the ticket can be closed.

p5pRT commented 12 years ago

@tonycoz - Status changed from 'open' to 'resolved'