GlebMaltsev opened 4 years ago
During debugging I was able to reproduce the issue. It occurs only when I use the following method to download raw bitmap images from a URL:
val uri = Uri.parse("url_to_png_image_resource in the WWW")
val imageRequest = ImageRequest.fromUri(uri)
Fresco
    .getImagePipeline()
    .fetchDecodedImage(imageRequest, null)
    .subscribe(object : BaseBitmapDataSubscriber() {
        override fun onNewResultImpl(bitmap: Bitmap?) {
            // Bitmap size = 87 KB
            // Send item to create a Texture in current thread
            onBitmapReady(bitmap)
        }

        override fun onFailureImpl(dataSource: DataSource<CloseableReference<CloseableImage>>) =
            onBitmapReady(null)
    }, CallerThreadExecutor.getInstance())
A single call of this method with a small (< 100 KB) bitmap is enough to reproduce the crash. After the bitmap is downloaded and successfully displayed in the UI, I move the app to the background and then restore it. After that the crash appears (with an attempt to allocate ~400 MB of memory during memory trimming).
Please note that adding .setResizeOptions(ResizeOptions(50, 50)) makes no difference (other than decreasing the bitmap size to 9 KB): after the app returns to the foreground, the crash still appears.
After commenting out the onBitmapReady(bitmap) call inside the onNewResultImpl() callback, the crash is gone. It seems the real problem is incorrect handling of bitmaps inside onBitmapReady(bitmap).
After moving from BaseBitmapDataSubscriber to BaseDataSubscriber the problem is gone. I was able to rework the logic in my app, and I now use CloseableReferences for passing bitmaps across the app.
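For anyone hitting the same thing: BaseBitmapDataSubscriber closes the underlying CloseableReference as soon as onNewResultImpl returns, so the Bitmap must not escape the callback; with BaseDataSubscriber you clone the reference and control its lifetime yourself. Here is a minimal stand-alone model of that reference-counting contract (plain Java with illustrative names, not Fresco's actual classes):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal model of a shared, ref-counted resource (hypothetical names;
// Fresco's CloseableReference follows the same contract).
class Ref<T> {
    private final T value;
    private final AtomicInteger refCount;

    Ref(T value) { this.value = value; this.refCount = new AtomicInteger(1); }
    private Ref(T value, AtomicInteger refCount) { this.value = value; this.refCount = refCount; }

    // Callers that want to keep the resource must clone the reference...
    Ref<T> cloneRef() {
        refCount.incrementAndGet();
        return new Ref<>(value, refCount);
    }

    // ...and the payload stays valid only while at least one ref is open.
    T get() {
        if (refCount.get() <= 0) throw new IllegalStateException("already released");
        return value;
    }

    // Returns true when the last reference was closed and the resource
    // can be reclaimed (analogous to the bitmap's pixel memory).
    boolean close() { return refCount.decrementAndGet() == 0; }
}

public class RefDemo {
    public static void main(String[] args) {
        Ref<int[]> pipelineRef = new Ref<>(new int[]{1, 2, 3}); // "bitmap" owned by the pipeline
        Ref<int[]> appRef = pipelineRef.cloneRef();             // app keeps its own reference

        // The pipeline closes its ref when the callback returns; the data
        // survives because the app still holds a live reference.
        System.out.println(pipelineRef.close()); // false: not reclaimable yet
        System.out.println(appRef.get().length); // 3: still safely accessible
        System.out.println(appRef.close());      // true: now the memory may be reused
    }
}
```

The point is that what crosses component boundaries is a cloned reference, never the raw payload, which is exactly what the BaseDataSubscriber rework enables.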
However, I still have one last question: how is it possible that passing a single 100 KB Bitmap out of BaseBitmapDataSubscriber.onNewResultImpl(Bitmap) to the app reliably causes an OOM, via an attempt to allocate hundreds of megabytes during Fresco cache trimming, when the user moves the app to the background and then restores it?
Thanks for the report and investigation. This indeed looks strange. Are you sure there are no huge bitmaps getting cached? Could you please check with the Flipper plugin (or by debugging CountingMemoryCache#mCachedEntries) that the cached bitmaps are of a reasonable size?
Here's the result of executing the following code right before the call to CountingMemoryCache.trim() that leads to the crash:

Log.e(TAG, "memory cache amount: " + trimmable.count)
Log.e(TAG, "memory cache size: " + trimmable.sizeInBytes)
Log.e(TAG, "memory cache evictionQueueCount: " + trimmable.evictionQueueCount)
Log.e(TAG, "memory cache evictionQueueSizeInBytes: " + trimmable.evictionQueueSizeInBytes)
memory cache amount: 73
memory cache size: 25025040
memory cache evictionQueueCount: 73
memory cache evictionQueueSizeInBytes: 25025040
And the crash message is: Failed to allocate a 420546432 byte allocation with 8388608 free bytes and 232MB until OOM, target footprint 300947288, growth limit 536870912
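For scale, plain arithmetic on the numbers above shows the cached bitmaps themselves are unremarkable, while the failed allocation dwarfs the entire cache:

```java
public class CacheStats {
    public static void main(String[] args) {
        long cacheBytes = 25_025_040L;   // total cache size from the log above
        int entryCount = 73;             // number of cached bitmaps
        long failedAlloc = 420_546_432L; // size of the allocation that OOMs

        System.out.println(cacheBytes / entryCount);  // ~343 KB per bitmap on average
        System.out.println(failedAlloc / cacheBytes); // failed allocation is ~16x the whole cache
    }
}
```

So whatever is being allocated during trim(), it cannot be any single cached bitmap, or even all of them together.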
I also debugged CountingMemoryCache#mCachedEntries and didn't find any suspicious bitmaps, just the regular bitmaps we use in our app. To be more precise, I reproduced the crash again and printed out the dimensions of each bitmap in the cache before the crash appeared.
Hey there, it looks like there has been no activity on this issue recently. Has the issue been fixed, or does it still require the community's attention? This issue may be closed if no further activity occurs. You may also label this issue as "bug" or "enhancement" and I will leave it open. Thank you for your contributions.
I guess we can mark this as "bug"
I have encountered this bug. How can it be resolved? @GlebMaltsev
This can fix the bug, but it requires modifying the source code. @oprisnik @GlebMaltsev
Maybe it's caused by an infinite loop. In such a loop, data is ceaselessly added to an ArrayList, which causes the OOM by allocating > 100 MB of memory. In this code, if mExclusiveEntries.getSizeInBytes() > size but mExclusiveEntries is empty, it enters an infinite loop that keeps adding null entries to the ArrayList. Why or when mExclusiveEntries.getSizeInBytes() ends up with a wrong value, I don't know.
If CloseableImage.getSizeInBytes() returns the real value when the entry is added to the cache but returns 0 by the time it is removed, then mExclusiveEntries can end up empty while mExclusiveEntries.mSizeInBytes is still a big value.
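The suspected loop can be modeled in isolation. This is a simplified stand-in, assuming an eviction loop whose exit condition depends on a byte counter that can go stale; it is not Fresco's actual code, and the iteration cap exists only so the model terminates:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Simplified model of the suspected bug (hypothetical classes, not Fresco's).
class ExclusiveEntries {
    final ArrayDeque<Object> queue = new ArrayDeque<>();
    long sizeInBytes; // can get out of sync with the queue's real contents

    Object pop() { return queue.poll(); } // returns null when the queue is empty
}

public class TrimLoopModel {
    // Keeps popping until the byte counter drops below targetSize. If the
    // counter is stale (queue empty but counter still large), the loop
    // condition never becomes false and the list grows without bound.
    static List<Object> trim(ExclusiveEntries entries, long targetSize, int maxIterations) {
        List<Object> evicted = new ArrayList<>();
        while (entries.sizeInBytes > targetSize && maxIterations-- > 0) {
            Object entry = entries.pop();
            evicted.add(entry); // null entries pile up here
            // Modeled bug: sizeInBytes is never decremented for a null pop,
            // so the loop condition stays true forever.
        }
        return evicted;
    }

    public static void main(String[] args) {
        ExclusiveEntries entries = new ExclusiveEntries();
        entries.sizeInBytes = 25_025_040L; // stale counter, but the queue is empty
        List<Object> evicted = trim(entries, 0, 1_000_000);
        System.out.println(evicted.size());         // 1000000: only the cap stopped it
        System.out.println(evicted.get(0) == null); // true: nothing real was evicted
    }
}
```

Without the cap, the ArrayList's backing array would keep growing until an allocation of hundreds of megabytes fails, matching the observed crash.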
Why not use the setRegisterLruBitmapPoolAsMemoryTrimmable method to register the LRU bitmap pool?
Description
We are receiving OOM crashes from our production app. They appear while we're trying to trim memory. Our implementation is the same as described here: https://github.com/facebook/fresco/issues/2136#issuecomment-397608264
Stack trace:
MyApp.java is our Application class, with onTrimMemory() overridden in the same way as in the comment I posted above. FrescoMemoryTrimmableRegistry.java:
So the crash appears sometimes when the app is in the background (46%) and sometimes in the foreground (54%). According to the crash message, the system needs an insane amount of memory to... trim itself. For example:
Samsung Galaxy J7 Prime (3 GB RAM total):
Fatal Exception: java.lang.OutOfMemoryError: Failed to allocate a 420546432 byte allocation with 8388608 free bytes and 231MB until OOM, max allowed footprint 302209776, growth limit 536870912
Huawei Y7 Prime (2019) (3 GB RAM total):
Failed to allocate a 420546432 byte allocation with 8339456 free bytes and 236MB until OOM, max allowed footprint 296859888, growth limit 536870912
HUAWEI Nova 2 Lite (3 GB RAM total):
Failed to allocate a 420546432 byte allocation with 25165824 free bytes and 234MB until OOM, max allowed footprint 316307200, growth limit 536870912
Motorola Moto G (5th gen) (2 GB RAM total):
Failed to allocate a 280364296 byte allocation with 8388608 free bytes and 197MB until OOM, max allowed footprint 204217504, growth limit 402653184
Motorola Moto E5 Play (1 GB RAM total):
Failed to allocate a 186909536 byte allocation with 25149440 free bytes and 125MB until OOM, max allowed footprint 162280960, growth limit 268435456
One more thing I found: as you can see above, the amount of required memory is the same across devices with the same RAM: ~178 MB for 1 GB devices, ~267 MB for 2 GB devices and ~401 MB for 3+ GB devices. This is probably not a per-device limit at all: the app attempts to grow an array, and due to the growth factor the size increases in such big steps. Low-memory devices give up at step N, 2 GB devices at step N+1, and so on.
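In fact the three reported allocation sizes fall exactly on consecutive steps of ArrayList-style 1.5x growth (newCapacity = oldCapacity + (oldCapacity >> 1)), under an assumed layout of 8 bytes per slot plus a 16-byte array header (a hypothetical layout, chosen only because it reproduces the reported numbers exactly):

```java
public class GrowthSteps {
    // ArrayList-style growth: newCapacity = oldCapacity + (oldCapacity >> 1)
    static long grow(long capacity) {
        return capacity + (capacity >> 1);
    }

    // Assumed layout: 8 bytes per slot + 16-byte array header (hypothetical,
    // chosen because it reproduces the reported allocation sizes exactly).
    static long bytes(long capacity) {
        return capacity * 8 + 16;
    }

    public static void main(String[] args) {
        long cap = 23_363_690L;           // assumed capacity behind the 1 GB figure
        System.out.println(bytes(cap));   // 186909536 (Moto E5 Play, 1 GB)
        cap = grow(cap);
        System.out.println(bytes(cap));   // 280364296 (Moto G 5th gen, 2 GB)
        cap = grow(cap);
        System.out.println(bytes(cap));   // 420546432 (3 GB devices)
    }
}
```

Each device class simply dies one growth step earlier than the next, which is consistent with a single runaway array being grown until allocation fails.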
Reproduction
We're not able to reproduce this bug on our own devices. However, in production about 5% of our users face this issue. Update: now able to reproduce in 100% of cases; please check the first comment below for more details.
Additional Information
Could be related: the issues started occurring in our latest app release. We didn't change anything related to Fresco (like the lib version or the way we use the Fresco API) except adding one more line: Fresco.getImagePipeline().evictFromMemoryCache(uri). We call this in very rare cases (at the moment we have only 92 calls of it). (Not related, since I was able to reproduce the crash without this line.)