github / codeql


CPP SimpleRangeAnalysis::getTruncatedUpperBounds NegativeArraySizeException #16437

Open ropwareJB opened 1 week ago

ropwareJB commented 1 week ago

Description of the issue

When executing the cpp query Security\CWE\CWE-120\OverrunWrite.ql against a 1.2GB compressed snapshot, the CodeQL CLI throws the following exception:

Starting evaluation of ...\Security\CWE\CWE-120\OverrunWrite.ql.
Oops! A fatal internal error occurred. Details:
com.semmle.util.exception.CatastrophicError: An error occurred while evaluating _SimpleRangeAnalysis::getTruncatedUpperBounds/1#0cf8e137_SimpleRangeAnalysis::getTruncatedUpperBound__#shared/2@9cada6je
java.lang.NegativeArraySizeException: -2147483648
The RA to evaluate was:

    {2} r1 = AGGREGATE `SimpleRangeAnalysis::getTruncatedUpperBounds/1#0cf8e137`, `SimpleRangeAnalysis::getTruncatedUpperBounds/1#0cf8e137_011#max_term` ON In.2 WITH MAX<0 ASC> OUTPUT In.0, Agg.0
    return r1

(eventual cause: NegativeArraySizeException "-2147483648")
        at com.semmle.inmemory.pipeline.PipelineInstance.wrapWithRaDump(PipelineInstance.java:168)
        at com.semmle.inmemory.pipeline.PipelineInstance.exceptionCaught(PipelineInstance.java:152)
        at com.semmle.inmemory.scheduler.execution.ThreadableWork.handleAndLog(ThreadableWork.java:549)
        at com.semmle.inmemory.scheduler.execution.ThreadableWork.doSomeWork(ThreadableWork.java:373)
        at com.semmle.inmemory.scheduler.IntensionalLayer$IntensionalWork.evaluate(IntensionalLayer.java:70)
        at com.semmle.inmemory.scheduler.SimpleLayerTask$SimpleLayerWork.doWork(SimpleLayerTask.java:69)
        at com.semmle.inmemory.scheduler.execution.ThreadableWork.doSomeWork(ThreadableWork.java:359)
        at com.semmle.inmemory.scheduler.execution.ExecutionScheduler.runnerMain(ExecutionScheduler.java:601)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.base/java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NegativeArraySizeException: -2147483648
        at java.base/java.util.Arrays.copyOf(Unknown Source)
        at com.semmle.inmemory.eval.aggregate.TupleListList.prepareForAdd(TupleListList.java:28)
        at com.semmle.inmemory.eval.aggregate.TupleListList.addList(TupleListList.java:43)
        at com.semmle.inmemory.eval.aggregate.AggregateEvaluator.commitCurrentRun(AggregateEvaluator.java:463)
        at com.semmle.inmemory.eval.aggregate.AggregateEvaluator$GroupAndJoin.addTuple(AggregateEvaluator.java:512)
        at com.semmle.inmemory.eval.CancelCheckingSink.addTuple(CancelCheckingSink.java:18)
        at com.semmle.inmemory.relations.BaseGeneralIntArrayRelation.map(BaseGeneralIntArrayRelation.java:84)
        at com.semmle.inmemory.caching.PagedRelation.map(PagedRelation.java:156)
        at com.semmle.inmemory.relations.AbstractRelation.deduplicateMap(AbstractRelation.java:130)
        at com.semmle.inmemory.eval.aggregate.AggregateEvaluator.evaluate(AggregateEvaluator.java:256)
        at com.semmle.inmemory.pipeline.AggregateStep.generateTuples(AggregateStep.java:36)
        at com.semmle.inmemory.pipeline.SimpleHeadStep.lambda$forwardInitialize$0(SimpleHeadStep.java:29)
        at com.semmle.inmemory.pipeline.HeadEndDispatcher.headEndWork(HeadEndDispatcher.java:75)
        at com.semmle.inmemory.pipeline.PipelineState.doSomeWork(PipelineState.java:78)
        at com.semmle.inmemory.pipeline.PipelineInstance.doWork(PipelineInstance.java:117)
        at com.semmle.inmemory.scheduler.execution.ThreadableWork.doSomeWork(ThreadableWork.java:359)
        ... 7 more
aibaars commented 1 week ago

Thanks for reporting! I'll ask the team to have a look.

aibaars commented 6 days ago

The team has confirmed the problem; we need to improve overflow handling in the TupleListList class. I'm afraid there isn't any short-term workaround; it looks like some intermediate result simply gets too large.
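
For context, the -2147483648 in the stack trace is Integer.MIN_VALUE, which is what a capacity-doubling growth strategy produces once it overflows. The sketch below is illustrative only (it is not the actual TupleListList code; the class and method names are made up) and shows the general failure mode plus one common overflow guard:

```java
import java.util.Arrays;

// Illustrative only -- not the real TupleListList. Shows how doubling the
// capacity of a large backing array overflows to Integer.MIN_VALUE
// (-2147483648), which Arrays.copyOf rejects with NegativeArraySizeException.
class GrowableIntArray {
    private int[] data = new int[16];
    private int size = 0;

    void add(int value) {
        if (size == data.length) {
            // Once data.length reaches 2^30, doubling wraps around to
            // Integer.MIN_VALUE and the copy below throws.
            int newCapacity = data.length * 2;
            data = Arrays.copyOf(data, newCapacity);
        }
        data[size++] = value;
    }

    // One common style of overflow guard (the kind of "overflow handling"
    // mentioned above): clamp the new capacity instead of doubling blindly.
    static int grownCapacity(int oldCapacity) {
        int doubled = oldCapacity << 1;
        // A negative result means the shift overflowed; fall back to a
        // near-maximal size (leaving headroom for VM array size limits).
        return doubled < 0 ? Integer.MAX_VALUE - 8 : doubled;
    }
}
```

Again, this is only a sketch of the general pattern; the actual fix in the evaluator may look quite different.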

ropwareJB commented 5 days ago

Thank you for the clarification and update, @aibaars.