Closed T8RIN closed 1 year ago
How can I add these libs to the buildscript so that they can be built from Gradle?
You may want to create a custom Gradle task that executes the scripts with the required working directory and environment variables.

I can't recommend trying to build the dependencies directly in Gradle, due to the rather complex configuration and Gradle's lack of direct support for Meson, which the dav1d build requires too.
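A minimal sketch of such a task, assuming a Groovy build script; the script name, working directory, and env var below are placeholders, not actual paths from this project:

```gradle
// Hypothetical Exec task that runs a prebuilt-dependency build script
// with a fixed working dir and the env vars the script expects.
tasks.register('buildDav1d', Exec) {
    workingDir = file("$rootDir/avif-coder/dav1d")            // placeholder path
    environment 'ANDROID_NDK_HOME', System.getenv('ANDROID_NDK_HOME')
    commandLine 'bash', 'build_dav1d.sh'                      // placeholder script
}

// Hook it into the normal build, e.g.:
// preBuild.dependsOn 'buildDav1d'
```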
Got it. Sadly, because of that, my app can't be published to F-Droid anymore :(
Also, I added some enhancements to your code, including the quality option; you can see it in ImageToolbox, and I created PR #5 to add this into that repo too 👀

Also, I changed the Coil decoder to be chained with the other ones.
I had a look at your ImageToolbox F-Droid pipeline failure, and I don't see any issues with avif. It failed due to the GPUImage `libyuv-decoder.so`.
Hm, then what can I do with that?
Did you try to build your production APK with the key locally?
As I see it, the problem is that your signed APK contains a different `libyuv-decoder.so` after the APK is unpacked. I've never run into this error myself.
I may suggest a few approaches:

- add the linker flag `-Wl,--build-id=none` for `libyuv-decoder.so` in your CMake file
- keep the library from being stripped via `packagingOptions { doNotStrip '**/libyuv-decoder.so' }`

Hope it will be a good starting point for fixing the problem.
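Put together, the two suggestions might land in the build files roughly like this; the CMake target name `yuv-decoder` is an assumption for illustration:

```gradle
// app/build.gradle — stop the Android Gradle Plugin from stripping
// the library, so the packaged .so matches the one CMake produced.
android {
    packagingOptions {
        doNotStrip '**/libyuv-decoder.so'
    }
}
```

```cmake
# CMakeLists.txt — omit the build-id note section so the .so is
# byte-identical across rebuilds (helps reproducible/F-Droid builds).
# "yuv-decoder" is a hypothetical target name; use your actual one.
target_link_options(yuv-decoder PRIVATE "-Wl,--build-id=none")
```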
Thanks, I'll try and send an answer later :)
Setting that question aside: it seems to me that you are well versed in Bitmaps and JNI, so how can I, for example, save a scaled 20k×20k bitmap to a file while avoiding an OOM error?

I create the enlarged bitmap via createScaledBitmap and then apply other options to it, and of course Android can't load such a big image into memory, so the app crashes. What can I do about this? Is there any workaround to save the scaled version directly to the file, avoiding storing it in the heap?
Actually, you can't save a 20k×20k image in any case because of codec limitations. Most image codecs simply don't support image dimensions that exceed 16k, 8k, 6k, etc.
Straight to your question:

Create a class in C++/C that holds a reference to the allocated original extra-large image buffer, display a downsampled version of it (via libyuv or another image-buffer library), apply all the operations that were applied to the sample, properly scaled, to the original buffer, and, obviously, save that buffer to disk.

Native buffers don't have Android's memory limits, so you may scale a native buffer to any size, as long as the device has enough memory.

Recent Android versions strictly enforce the Storage Access Framework, so saving from JNI code to a file external to your application (user gallery, etc.) may become a challenge, but you can always stream your native buffer back into Java buffers.

So the main algorithm is: always display the sampled version in Android, build the proper transform from all your actions on the Android sample, and propagate all the actions to the original image buffer stored in JNI.
Another possible way is to use an OpenGL Surface; this approach gives the benefits of OpenGL and of operating only in native code and shaders, but it requires knowledge of GPU shader graphics.
I fear no man, but this thing... it scares me.

Btw, that sounds like a good deal, but I don't even know how to work with the NDK :(

Also, with your approach, can I open a 16k image without creating a Bitmap (because OOM happens) and then apply transformations to it? How does this even work? My entire application is chained through the Bitmap class, so JNI is hard for a button-painter man :)

Subsampling is not a problem at the moment; the issue is only in saving, because I don't know how to convert a URI reference into a JNI buffer and then operate on it like a usual bitmap.
Yes, you may just store a projection matrix and, at the end, apply the scale transform to the matrix, load the image in JNI, apply all the transformations, and save.

There is no way to open a URI in JNI. You may open an `InputStream` and pass a reference to the input stream, reading the whole Java input stream into a vector. Another way is to open a `FileDescriptor`: since Android is still a Linux system, pass the file descriptor and read the data via Linux `pread`. But I think the way it's done in avif/jxl, reading into a Java `ByteArray` and just sending the byte array, is the simplest of them.

All the methods, of course, bring some limitations and benefits.
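A sketch of the `FileDescriptor` route, assuming the Java side has already obtained a `ParcelFileDescriptor` from the `ContentResolver` and handed the raw fd down through JNI (the JNI glue itself is omitted; only the native read is shown):

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cassert>
#include <cstdint>
#include <vector>

// Read the whole file behind an already-open descriptor into a native
// vector using pread, so the data never has to pass through the Java heap.
std::vector<uint8_t> readAllFromFd(int fd) {
    off_t size = lseek(fd, 0, SEEK_END); // pread ignores the file position
    std::vector<uint8_t> buf(size > 0 ? size_t(size) : 0);
    size_t done = 0;
    while (done < buf.size()) {
        ssize_t n = pread(fd, buf.data() + done, buf.size() - done, off_t(done));
        if (n <= 0) break; // real code would report the error back to Java
        done += size_t(n);
    }
    buf.resize(done);
    return buf;
}
```

The resulting vector can then be fed straight to the decoder, exactly like the `ByteArray` route, just without copying the bytes through Java first.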
Thanks... How did you learn NDK?
Try to do something. It's actually not as hard as it looks. Practice alone is enough to get skilled.
Also, another option for manipulating large images is using pre-built libraries like OpenCV for Android, which has awesome Java bindings. Of course, you'll lose the possibility to save in avif etc. this way, and some skill in matrix math is required for that.