I suspect this is the same issue as #2915
I have tried to mitigate the issue by adding a check inside the loop:

```julia
if i % 1000 == 0
    gc()
end
```

which effectively keeps the used RAM down to a few hundred MB when calling LLtest( BigInt( 216091 ) ). The whole thing, however, then becomes very ( very ( very ) ) slow. And explicitly calling gc() on every 1000th iteration is, IMHO, a dreadful hack.
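For context, a hedged sketch of where such a check would sit; the function name LLtest_gc and the loop body are assumptions that mirror the Lucas-Lehmer recurrence described in the original report below:

```julia
# Hedged sketch of the workaround in context; LLtest_gc and the recurrence
# are assumptions mirroring the Lucas-Lehmer loop described in this issue.
function LLtest_gc(p::Integer)
    mp = big(2)^p - 1            # Mersenne candidate 2^p - 1
    s  = big(4)
    for i in 1:(p - 2)
        s = mod(s * s - 2, mp)   # allocates fresh BigInt temporaries each step
        if i % 1000 == 0
            gc()                 # Julia 0.2-era API; GC.gc() on current versions
        end
    end
    return s == 0
end
```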
This doesn't fix the issue, but GMP has a special operation for `a^b mod c`; see the `powermod` function. It should save a temporary and make the operation faster.
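For illustration, a hedged sketch of what the suggestion would look like in the squaring step; `mp` and `L` here are placeholder values, not taken from the issue:

```julia
# Hedged illustration of the powermod suggestion; mp and L are placeholders.
mp = big(2)^127 - 1                    # a small Mersenne prime, for illustration
L  = big(4)
s  = mod(powermod(L, 2, mp) - 2, mp)   # one GMP modular-exponentiation call
                                       # instead of a full square plus reduction
```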
I think we should merge this code fragment into #2915 and close this one.
Got it. I don't mind, as long as you guys fix this. Please. I want to press ahead with using Julia to search for Mersenne primes; as a matter of fact, I am looking to build something faster than GIMPS. I know, I know, quite the challenge - their code is blindingly fast - but then again, without challenges, life is boring :-)
closing as dup. fix on the way.
@exercitussolus Could you try now? Looking forward to seeing what you do with the Mersenne prime search.
Given that GIMPS seems to use AVX, do you think you will need intrinsics (#2299)?
@ViralBShah Thanks for asking :-) Yes, I would need intrinsics, particularly for a fast modulo operation. Multiplication can be parallelized ( Karatsuba, Toom-Cook, Schönhage-Strassen, Montgomery... ), and I am going to implement these myself over the next few days - but a modulo operation is almost impossible to parallelize. I know, in the other thread kk49 scared the bejeezus out of @StefanKarpinski by asking for multithreading with SIMD :-) Yet, when we are talking high-performance computing, such things are exactly what we need.
@ViralBShah The issue is fixed now, after your patch, insofar as LLtest( BigInt( 216091 ) ) only consumes about 200 MB of RAM.
@mlubin Using `powermod( L, 2, mp )` instead of `( L * L ) % mp - 2` results in execution times 3.25× worse... Probably a nice gimmick for Int64 / Int128, but unusable for BigInt :-(
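For reference, a hedged micro-benchmark sketch, not the original measurement; the function name, exponent, and iteration count are assumptions chosen to mirror LLtest( BigInt( 216091 ) ):

```julia
# Hedged micro-benchmark sketch, not the original measurement.
# Both loops compute the same recurrence; only the squaring step differs.
function compare_step(p::Integer, iters::Integer)
    mp = big(2)^p - 1
    a = big(4); b = big(4)
    @time for _ in 1:iters
        a = mod(a * a - 2, mp)                # plain square, subtract, reduce
    end
    @time for _ in 1:iters
        b = mod(powermod(b, 2, mp) - 2, mp)   # via GMP's modular exponentiation
    end
    @assert a == b                            # both variants agree
end

compare_step(216091, 1000)
```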
Credit for the fix goes to @JeffBezanson.
I guess using `2` as the power might be a special case where `L*L` is so much faster than a general power that `powermod` can't help. But I would expect GMP's `powermod` to be really good since that's such an important function in number theory. Oh well.
Although I am new to Julia ( just 3 days ago I built Julia from source and started playing around with it ), I am already very enthusiastic.
Yet, when running a "classical" Lucas-Lehmer test on Mersenne prime candidates, I run into a ginormous memory leak. When running the code below with an argument of, let's say, 216091, Julia on my system ( Ubuntu 12 server, 1x 8-Core Xeon E5620, 8 GB RAM ) uses up several tens of gigabytes of RAM ! With an argument > 500,000, even running this function with 32 GB of swap space ( !!! ) will lead Linux to terminate the Julia process. Am I doing something wrong here ? The goal is, of course, to test prime candidates with an exponent > 57,000,000 ;-)
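The snippet referred to above is not reproduced in this thread. Below is a minimal reconstruction of a textbook Lucas-Lehmer test along the lines described, without the gc() workaround discussed earlier; the name LLtest and the variable names are assumptions:

```julia
# Hedged reconstruction, not the original snippet: a textbook Lucas-Lehmer test.
function LLtest(p::Integer)
    mp = big(2)^p - 1            # Mersenne candidate 2^p - 1
    s  = big(4)                  # Lucas-Lehmer seed
    for i in 1:(p - 2)
        s = mod(s * s - 2, mp)   # each step allocates fresh BigInt temporaries
    end
    return s == 0                # 2^p - 1 is prime iff the final residue is zero
end

LLtest(big(127))                 # true: 2^127 - 1 is a known Mersenne prime
# LLtest(BigInt(216091))         # the call from this report; long-running
```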
[pao: formatting] [Viral: Changed title in line with #2915]