Merging #7 into RMI-fy-service-success will decrease coverage by 0.11%. The diff coverage is n/a.
```diff
@@                  Coverage Diff                   @@
##           RMI-fy-service-success      #7      +/-   ##
==========================================================
- Coverage                   21.68%   21.56%   -0.12%
  Complexity                     103      103
==========================================================
  Files                           25       25
  Lines                         1674     1674
  Branches                       202      202
==========================================================
- Hits                           363      361       -2
- Misses                        1265     1267       +2
  Partials                        46       46
```
| Impacted Files | Coverage Δ | Complexity Δ | |
|---|---|---|---|
| ...uccessModeling/MonitoringDataProvisionService.java | 41.59% <0%> (-0.41%) | 62% <0%> (ø) | |
Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update 820ec94...82b88aa. Read the comment docs.
I think it might be connected to the way RMI executions are invoked in las2peer core. Investigating Context.get().invoke(...) led me to ExecutionContext.java, which does have an unclosed ObjectInputStream... I'll continue investigating whether that's actually the issue.
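For illustration, this is the kind of pattern I mean (a minimal sketch, not the actual ExecutionContext.java code; the class and method names here are made up):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;

// Hypothetical example of deserializing an RMI result; only the leak pattern matters.
public class DeserializationSketch {

    // Suspected pattern: the ObjectInputStream is never closed, so its buffers
    // (and whatever the underlying stream holds on to) are only reclaimed by GC.
    static Serializable deserializeLeaky(byte[] bytes) throws IOException, ClassNotFoundException {
        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes));
        return (Serializable) ois.readObject(); // ois is never closed
    }

    // Fix: try-with-resources closes the stream on every code path, including exceptions.
    static Serializable deserializeSafely(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (Serializable) ois.readObject();
        }
    }
}
```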
After ~3 days of running, the success-modeling container runs out of memory (2 GB provided).
[pool-10-thread-1] ERROR org.web3j.protocol.core.filters.Filter - Error sending request java.lang.OutOfMemoryError: Java heap space
Possibly related; this was nearby in the logs:
WARNING: A connection to http://las2peer-ethnet:8545/ was leaked. Did you forget to close a response body? To see where this was allocated, set the OkHttpClient logger level to FINE: Logger.getLogger(OkHttpClient.class.getName()).setLevel(Level.FINE);
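That OkHttp warning usually means a Response was obtained but its body was never closed. The general rule it refers to looks like this (a generic OkHttp sketch, not las2peer/web3j code; the URL is only taken from the log line above):

```java
import java.io.IOException;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class ResponseCloseSketch {
    public static void main(String[] args) throws IOException {
        OkHttpClient client = new OkHttpClient();
        Request request = new Request.Builder()
                .url("http://las2peer-ethnet:8545/")
                .build();

        // Response is Closeable; try-with-resources releases the connection
        // back to the pool even if reading the body throws.
        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.body().string());
        }
    }
}
```

If responses are dropped without being closed somewhere in this call path, each leaked response keeps a pooled connection and its buffers alive, which would fit the slow growth over ~3 days.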
See the following for details:
Without running some sort of init process, we cannot create a heap dump of the JVM to see where the memory leak occurs.
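One possible workaround that does not need an exec/init process: let the JVM write the dump itself when it hits the OutOfMemoryError. The flags below are standard HotSpot options; how we pass them into the container (JAVA_TOOL_OPTIONS, the service start script, a docker-compose environment entry) and the dump path are assumptions that depend on our setup:

```
# Standard JVM flags; JAVA_TOOL_OPTIONS is one possible way to inject them into the container.
# The .hprof path is only an example and should point at a mounted volume.
JAVA_TOOL_OPTIONS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/success-modeling.hprof"
```

That way the heap dump survives the crash and can be pulled out and inspected (e.g. with Eclipse MAT) after the container dies.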