What size of message were you pushing through? What are the hardware specs of
your RedHat box?
You can increase the max heap size using appropriate JVM options in the
phoenix.sh file.
Original comment by abarea...@gmail.com
on 11 Jan 2012 at 1:37
If possible, I would also recommend upgrading to the latest versions of the
agent and gateway.
Original comment by abarea...@gmail.com
on 11 Jan 2012 at 1:38
UPDATE:
Problem is related to CRL processing and can be recreated by sending a message
to a destination with a certificate that has a large CRL (such as a VA or DOD
issued certificate). The VA PKI certificate CRL is currently 20MB.
Since the agent caches the CRL in memory (a HashMap) and the Java security
classes also load it into memory for parsing serial numbers, any available
Java heap is quickly exhausted.
To compound the problem, if a certificate has more than one HTTP distribution
point, the agent does not stop after loading the first CRL and continues to
load it again from the next CRL distribution point. Refer to:
CRLRevocationManager.loadCRLS(...)
Also, it can take anywhere from 2-4 minutes to cache a 20M CRL!
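For illustration only, the pattern described above looks roughly like this (hypothetical names, not the actual CRLRevocationManager code): each HTTP distribution point is fetched in turn and the parsed CRL is pinned in a plain HashMap by a strong reference.

    import java.io.InputStream;
    import java.net.URL;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509CRL;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch of the problem described above: every distribution
    // point is downloaded and parsed, and each parsed CRL stays strongly
    // referenced in the cache, so a 20MB CRL can exhaust the heap quickly.
    public class NaiveCrlCache {

        private final Map<String, X509CRL> cache = new HashMap<>();

        public void loadCrls(List<String> distributionPointUris) throws Exception {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            for (String uri : distributionPointUris) {
                // Note: no break after the first successful load, so the same
                // large CRL is fetched and parsed once per distribution point.
                try (InputStream in = new URL(uri).openStream()) {
                    X509CRL crl = (X509CRL) cf.generateCRL(in);
                    cache.put(uri, crl); // strong reference; never evicted
                }
            }
        }
    }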
The workaround is to increase the heap size to a minimum of 1280M (-Xmx1280m).
Anything less and the agent/James crashes with an "out of memory" error.
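For reference, the flag would typically be added to the JVM options in the phoenix.sh startup script; the exact variable name differs between James versions, so the line below is only an illustrative sketch:

    # Hypothetical excerpt from phoenix.sh; adjust to whatever options variable your version defines.
    JVM_OPTS="$JVM_OPTS -Xmx1280m"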
Original comment by aperf...@gmail.com
on 20 Jan 2012 at 3:43
New CRL caching is implemented in the agent. Very large CRLs may still require
larger heap sizes, but the agent now uses offline caching and a weak-reference
hash to protect against out-of-memory errors.
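As a rough sketch of that approach (illustrative only, not the agent's actual classes): CRLs are persisted to local cache files and held in memory through a map of weak references, so the garbage collector can reclaim them under memory pressure and the agent can reload from disk instead of re-downloading.

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.lang.ref.WeakReference;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509CRL;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative sketch only. CRLs are assumed to have been written to files
    // in cacheDir when first downloaded; the in-memory map holds them through
    // weak references so the GC can reclaim them instead of exhausting the heap.
    public class WeakCrlCache {

        private final File cacheDir;
        private final Map<String, WeakReference<X509CRL>> cache = new ConcurrentHashMap<>();

        public WeakCrlCache(File cacheDir) {
            this.cacheDir = cacheDir;
        }

        public X509CRL get(String uri) throws Exception {
            WeakReference<X509CRL> ref = cache.get(uri);
            X509CRL crl = (ref != null) ? ref.get() : null;
            if (crl == null) {
                // Reference was cleared (or never cached): reload from the
                // on-disk copy rather than fetching the large CRL over HTTP again.
                File f = new File(cacheDir, Integer.toHexString(uri.hashCode()) + ".crl");
                if (f.exists()) {
                    try (InputStream in = new FileInputStream(f)) {
                        crl = (X509CRL) CertificateFactory.getInstance("X.509").generateCRL(in);
                    }
                    cache.put(uri, new WeakReference<>(crl));
                }
            }
            return crl;
        }
    }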
Original comment by gm2...@cerner.com
on 17 Apr 2012 at 5:18
Updating status to Fixed.
Original comment by gm2...@cerner.com
on 17 Apr 2012 at 5:18
Original issue reported on code.google.com by
aperf...@gmail.com
on 5 Jan 2012 at 8:58