Open · xiaoxiongli opened this issue 3 years ago
Hi, our code runs on the CPU. I expect it could be faster with a GPU implementation where the operation on each pixel is parallelized (not yet validated). Thanks!
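If anyone wants to experiment with this, here is a minimal PyTorch sketch of the idea: move the table to the GPU once and replace the per-pixel loop with a single batched gather. Everything here is illustrative, not the repo's actual code. The `lut_upscale` helper, the dummy table shape, the 16-level quantization, and the 2x2 neighborhood are placeholder assumptions; the real SR-LUT table layout and its interpolation step differ.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# Dummy stand-in for the repo's .npy table (in practice you would load it
# with torch.from_numpy(np.load(...)) and move it to the GPU once).
# Assumed layout: keyed by a quantized 2x2 input patch (each pixel cut to
# 16 levels), mapping to a 4x4 output patch for 4x upscaling. The real
# SR-LUT layout and its interpolation step differ; this only demonstrates
# the parallel gather.
L, R = 16, 4
lut = torch.randn(L, L, L, L, R, R, device=device)

def lut_upscale(img_u8):
    """Look up every pixel's 2x2 neighborhood in the LUT in one gather."""
    h, w = img_u8.shape
    q = (img_u8.long() * L) // 256  # quantize 0..255 -> 0..L-1
    # Replicate-pad right/bottom so edge pixels also get a 2x2 neighborhood.
    p = torch.nn.functional.pad(
        q.float()[None, None], (0, 1, 0, 1), mode="replicate"
    )[0, 0].long()
    i0, i1 = p[:h, :w], p[:h, 1:w + 1]
    i2, i3 = p[1:h + 1, :w], p[1:h + 1, 1:w + 1]
    out = lut[i0, i1, i2, i3]  # (h, w, R, R): all pixels gathered in parallel
    return out.permute(0, 2, 1, 3).reshape(h * R, w * R)

img = torch.randint(0, 256, (64, 64), dtype=torch.uint8, device=device)
print(lut_upscale(img).shape)  # torch.Size([256, 256])
```

Since the lookup is a pure gather with no data dependence between pixels, it maps naturally onto the GPU; whether it beats the CPU in practice would still need to be measured, as the author notes.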
Hi @yhjo09, I want to know whether this LUT model (.npy) runs on the CPU or the GPU. If it runs on the CPU, is it possible to run it on the GPU for faster inference?
Thank you very much! ^_^