burner / bugzilla_migration_test

memory allocated for arrays in CTFE functions during compilation is not released #16

Open · burner opened 17 years ago

burner commented 17 years ago

kamm-removethis reported this on 2007-07-27T11:08:28Z

Transferred from https://issues.dlang.org/show_bug.cgi?id=1382

Description

This problem is encountered when large arrays are manipulated during CTFE. Symptoms are dmd allocating a lot of memory, eating up all swap, and finally terminating with an out-of-memory error.

The core of the problem can be seen in this code:

char[] make_empty_string(int n) {
    char[] result;
    for(int i = 0; i < n; ++i)
        result ~= " ";
    return result;
}

const char[] largestring = make_empty_string(100000);

void main() {}

This snippet will require 5 GB of memory to compile instead of the mere 100 KB the final string requires. It is caused by the intermediate strings stored in result during the iteration never being discarded. The problem is not limited to concatenation: modifying a single element of the array also causes the whole array to be duplicated.
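
A rough back-of-the-envelope check (an estimate, assuming one discarded intermediate per append): iteration i leaves behind a copy of about i bytes, so the total allocated is roughly 1 + 2 + ... + 100000 ≈ 100000^2 / 2 = 5·10^9 bytes, consistent with the 5 GB observed.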

While this particular piece of code can be rewritten to consume less memory, that's not generally possible. An example where reduction is not possible is splitting a string into substrings.
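
For illustration, a minimal sketch of such a splitter (hypothetical code, not from the original report). The result stays small, yet every append to current and to parts leaves behind an intermediate copy that CTFE never reclaims:

string[] split(string s, char sep) {
    string[] parts;
    string current;
    foreach (c; s) {
        if (c == sep) {
            parts ~= current;   // each append duplicates the array in CTFE
            current = null;
        } else {
            current ~= c;       // each append leaves an unreclaimed copy
        }
    }
    parts ~= current;
    return parts;
}

enum words = split("a compile time example", ' ');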

It does come up in practice: someone wanted to generate a function to get the Unicode general category for a dchar from the text files of the Unicode Character Database and ran into this issue. I wanted to parse the D BNF to generate a parser at compile time and had dmd exit with an out-of-memory error.

It seems CTFE needs a compile-time garbage collector.

burner commented 16 years ago

clugdbug commented on 2008-04-29T05:30:31Z

I'm raising the severity of this to blocker, since it makes CTFE metaprogramming libraries unusable in practice. Fortunately development of such libraries is still possible, although both BCS and I are experiencing some horrific compilation times, even after inserting workarounds where possible. We are having to reduce the complexity of our test code.

burner commented 16 years ago

kamm-removethis commented on 2008-07-22T00:52:43Z

Has DMD stopped using boehm-gc? If I try this very example on LLVMDC (which does use boehm-gc), memory usage never exceeds a certain level (< 1 MB).

So maybe re-enabling the garbage collector for DMD will fix all CTFE related memory issues?

burner commented 15 years ago

kamm-removethis commented on 2008-12-02T12:31:38Z

We've had some success with reenabling boehm-gc: http://www.dsource.org/projects/ldc/ticket/49 .

"Another test with USE_BOEHM_GC=0, REDIRECT_MALLOC=GC_malloc and IGNORE_FREE seemed to yield good results, with no segfaults and collecting CTFE memory properly."

burner commented 15 years ago

clugdbug commented on 2009-08-03T05:18:25Z

I don't think Boehm GC is the answer. Note that this is very closely related to bug #1330. I think the CTFE implementation of arrays needs (a) reference semantics and (b) reference counting. Here's an example of a terrible case, which allocates several GB of RAM:

int junk(int n) {
    int[] result = new int[10000];
    for(int i = 0; i < n; ++i) {
        result[0] = i;
    }
    return 0;
}

const int bad = junk(100000);

void main() {}

This particular case could be solved by adding a reference-based system for storing array values, instead of doing copy-on-write -- and that's required for bug #1330 anyway. Once that's in place, the array values could be allocated in a controlled manner (e.g., retain a list of all allocated CTFE arrays). A dedicated precise GC can then be simple and fast, since it only needs to check for array references in the current function, and those can only be in the local variables which are arrays or structs.
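
A minimal sketch of what such a reference-counted array value might look like inside the interpreter (hypothetical types and names, not from any DMD source):

struct CtfeArrayValue {
    int[] payload;      // one shared allocation
    int refCount = 1;   // number of CTFE variables referencing it
}

// Writes mutate in place when this is the only reference; otherwise the
// payload is copied once, guided by the count rather than on every write.
void writeElement(ref CtfeArrayValue* v, size_t i, int x) {
    if (v.refCount > 1) {
        --v.refCount;
        v = new CtfeArrayValue(v.payload.dup, 1);
    }
    v.payload[i] = x;
}

With reference semantics in place, the precise collector only has to walk the current function's array and struct locals to find live payloads.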

burner commented 15 years ago

clugdbug commented on 2009-08-31T23:59:27Z

Reducing severity back to critical, since the voting system takes care of the importance.

burner commented 14 years ago

bearophile_hugs commented on 2010-06-04T15:19:11Z

A partially artificial test case. Faster and better versions are quite possible, but I expect dmd to be able to run this quickly.

import std.stdio: writeln;

ubyte[1 << NPOW] setBits(int NPOW)() {
    nothrow pure uint setBits8(uint n) {
        uint result;
        foreach (i; 0 .. 8)
            if (n & (1 << i))
                result++;
        return result;
    }

    nothrow pure uint setBits16(uint n) {
        enum uint FIRST_UBYTE =  0b0000_0000_1111_1111;
        enum uint SECOND_UBYTE = 0b1111_1111_0000_0000;
        return setBits8(n & FIRST_UBYTE) + setBits8((n & SECOND_UBYTE) >> 8);
    }

    typeof(return) result;
    foreach (i; 1 .. result.length)
        result[i] = cast(typeof(result[0]))setBits16(i);
    return result;
}

enum nbits = setBits!16(); // currently 12 is about the max

void main() { writeln(nbits); }

burner commented 13 years ago

sandford commented on 2010-12-06T12:09:04Z

I just came across this bug while working on improving std.variant: the combination of templates + CTFE + unittests resulted in out-of-memory errors. I've also traced down another issue (I don't know if it should be filed separately or not):

It appears that any access of an array variable allocates RAM, resulting in drastically slower compile times (+55 seconds) and excess memory usage (30+ MB in this case, using DMD 2.050):

string ctfeTest() {
    char[] result;
    result.length = ushort.max;
    char c;
    for(size_t i = 0; i < result.length; i++) {}    // Allocates
    for(size_t i = 0; i < ushort.max; i++) {}       // Doesn't allocate

    for(size_t i = 0; i < ushort.max; i++) {        // Allocates
        c = result[i];
    }
    for(size_t i = 0; i < ushort.max; i++) {        // Doesn't allocate
        c = cast(ubyte)('A' + i%26);
    }
    return cast(string)result;
}

burner commented 13 years ago

clugdbug commented on 2011-09-02T00:21:14Z

(In reply to comment #7)

It appears that any access of an array variable allocates RAM, resulting in drastically slower compile times (+55 seconds) and excess memory usage (30+ MB in this case using DMD 2.050)

This was fixed in 2.054. There were several cases where reading or writing a single array element could cause the entire array to be copied! These cases have now been fixed, giving an order of magnitude improvement in memory use and compilation time. (The original test case (concatenation) hasn't changed; it's simply caused by the absence of a compile-time GC.) This bug is now a far less serious problem than bug 6498.
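
To make the distinction concrete, a small illustrative snippet (hypothetical code, not from this thread): element writes now happen in place, while each append still leaves behind an intermediate that no compile-time GC reclaims.

int elementWrites(int n) {
    int[] a = new int[1000];
    foreach (i; 0 .. n)
        a[0] = i;       // since 2.054: in-place write, no array copy
    return a[0];
}

int[] appends(int n) {
    int[] a;
    foreach (i; 0 .. n)
        a ~= i;         // still allocates a fresh intermediate every time
    return a;
}

enum x = elementWrites(100_000);  // now cheap
enum y = appends(10_000);         // memory still grows with every append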

burner commented 12 years ago

clugdbug commented on 2012-01-21T01:26:54Z

Please don't set milestones without consultation (unless you plan to fix the bug yourself). This bug is still open because it is HARD. I've been slowly making progress on it for the last year. It's not going to be fixed soon -- the remaining work to be done is still about the equivalent of 30 average bugs. However, 90% of the symptoms were fixed in 2.049.

burner commented 12 years ago

leandro.lucarella (@leandro-lucarella-sociomantic) commented on 2012-01-23T02:41:48Z

Is there any technical reason not to use the Boehm GC as a temporary workaround until this can get properly fixed? I'm just curious.

burner commented 12 years ago

bugzilla (@WalterBright) commented on 2012-01-23T11:23:06Z

I made an experimental build of dmd that uses a gc. The compiler slowed down quite a bit.

burner commented 12 years ago

leandro.lucarella (@leandro-lucarella-sociomantic) commented on 2012-01-24T02:19:04Z

(In reply to comment #11)

I made an experimental build of dmd that uses a gc. The compiler slowed down quite a bit.

In which cases did you try it? For files that allocate a lot of "CTFE memory" it should be the other way around, as memory consumption is so high that the system spends most of its time moving things between memory and swap.

Do you have a patch that I can try (for D1)? Thanks.

As bad as it sounds, maybe a good tradeoff would be to add a command-line option (as obscure and undocumented as you want) to activate the GC for cases where not using it is not really an option. Given that this bug seems really hard, it might deserve a short-term workaround so the compiler remains usable in these extreme cases.

burner commented 12 years ago

bugzilla (@WalterBright) commented on 2012-01-24T02:29:36Z

I tried it by building the library and running its unittests, and running the test suite. It was considerably slower.

The GC used was the old C++ version of the D runtime GC.

You can build it by switching the GCOBJS macro in win32.mak.

burner commented 12 years ago

leandro.lucarella (@leandro-lucarella-sociomantic) commented on 2012-01-24T02:37:46Z

(In reply to comment #13)

I tried it by building the library and running its unittests, and running the test suite. It was considerably slower.

The GC used was the old C++ version of the D runtime GC.

You can build it by switching the GCOBJS macro in win32.mak.

Oh, I was talking about the Boehm GC, the one tried by Christian Kamm, which is a pretty good state-of-the-art collector AFAIK. I think LDC used it (I don't know if it still does) with pretty good results (see comment 3).

Maybe Christian can give us some more information about it :)

burner commented 12 years ago

clugdbug commented on 2012-01-24T06:21:47Z

An important thing to realize about this bug is that it is not the primary cause of slow performance and high memory consumption in CTFE. Fixing this bug would make very little difference, except in cases involving concatenation.

I think it's had a lot of votes because people think it's the key CTFE performance issue, but actually the bad guy is bug 6498, which is easier to fix.

burner commented 5 years ago

bugzilla (@WalterBright) commented on 2019-08-28T21:40:15Z

Some progress:

https://github.com/dlang/dmd/pull/10343