Closed BounharAbdelaziz closed 3 years ago

Hi,
Thank you for sharing your code with the community! I wanted to ask whether it would be possible for you to share the implementation of the mobile application. Also, which model is used in the app?
Many thanks in advance! With best,

Hi, thanks for your interest. However, I am sorry that we have no plans to release the code yet. Note that the process in the app is very similar to Step 3, but with parallel execution for speedup. The Ours-S model is implemented in the app. Thanks.
Thank you for your quick response!
In fact, I tried Model_S_x4_3bit_int8.npy on a server CPU and it takes around 7 seconds to upscale x4 (from 128² to 512²), but it was really fast on a Samsung S10 using the provided APK: I get around 70 ms, which is really quick!
Do you get the same inference time when you run it on a CPU?
Parallel execution surely explains why it's faster; could you please give some hints on how the parallel execution is done in the mobile app?
Thanks again for your valuable time! With best,
Hi, A naive implementation (e.g., the code of Step 3) shows a slow runtime, but it can be made faster with parallel execution. In the Android app, the Stream API is used to iterate over the input pixels, which gives a faster runtime. Hope this helps. Best, Younghyun.
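For concreteness, here is a minimal sketch of what pixel-parallel LUT upscaling with the Java Stream API could look like. It is not the app's actual code: the single-channel 8-bit input, the dense 4D `lut` array, and the nearest-neighbor lookup with no interpolation or rotation handling are all assumptions made for illustration.

```java
import java.util.stream.IntStream;

// Sketch of pixel-parallel LUT upscaling with the Java Stream API.
// Assumptions for illustration only: single-channel 8-bit input, a dense
// 4D LUT indexed by the 3-bit-quantized 2x2 neighborhood, and a plain
// nearest-neighbor lookup.
public class LutUpscaleSketch {
    static final int SCALE = 4;      // x4 upscaling
    static final int SHIFT = 8 - 3;  // 8-bit pixel -> 3-bit LUT index (0..7)

    // lut[i0][i1][i2][i3] holds one SCALE*SCALE block of output samples,
    // i.e. its shape is [8][8][8][8][SCALE * SCALE].
    static byte[] upscale(byte[] in, int w, int h, byte[][][][][] lut) {
        byte[] out = new byte[w * SCALE * h * SCALE];
        // One stream element per input pixel; parallel() spreads the
        // lookups across the common ForkJoinPool. The writes are race-free
        // because every pixel owns a disjoint SCALE x SCALE output block.
        IntStream.range(0, w * h).parallel().forEach(p -> {
            int x = p % w, y = p / w;
            int xr = Math.min(x + 1, w - 1);  // clamp the 2x2 window
            int yb = Math.min(y + 1, h - 1);  // at the image border
            byte[] block = lut[(in[y * w + x] & 0xFF) >> SHIFT]
                              [(in[y * w + xr] & 0xFF) >> SHIFT]
                              [(in[yb * w + x] & 0xFF) >> SHIFT]
                              [(in[yb * w + xr] & 0xFF) >> SHIFT];
            for (int dy = 0; dy < SCALE; dy++)
                for (int dx = 0; dx < SCALE; dx++)
                    out[(y * SCALE + dy) * w * SCALE + x * SCALE + dx]
                        = block[dy * SCALE + dx];
        });
        return out;
    }
}
```

Dropping `.parallel()` gives the naive single-threaded runtime, which is one way to reproduce the CPU-vs-app gap discussed above.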
Surely! That's the way I thought of it too: you parallelize the computation by taking multiple 2x2 patches at once. Thanks again for your valuable time!
With best, Abdelaziz
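For the patch-wise view of the same idea, here is a hypothetical coarser-grained split: each parallel task handles a horizontal band of rows (many 2x2 neighborhoods) rather than a single pixel, which lowers per-task scheduling overhead on small images. `RowWorker`, `processRow`, and `bandHeight` are illustrative names, not taken from the app.

```java
import java.util.stream.IntStream;

// Hypothetical coarser-grained split matching the "multiple patches" idea:
// each parallel task processes a horizontal band of rows rather than one
// pixel. RowWorker/processRow stand in for the per-pixel LUT loop
// sketched above.
public class BandParallelSketch {
    interface RowWorker { void processRow(int y); }

    static void run(int h, int bandHeight, RowWorker worker) {
        int bands = (h + bandHeight - 1) / bandHeight;  // ceil(h / bandHeight)
        IntStream.range(0, bands).parallel().forEach(b -> {
            int end = Math.min((b + 1) * bandHeight, h);
            for (int y = b * bandHeight; y < end; y++)
                worker.processRow(y);
        });
    }
}
```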