android / codelab-mlkit-android


Use an LRU cache for translators #28

Closed sheepmaster closed 4 years ago

sheepmaster commented 4 years ago

Translator instances are cached internally by the ML Kit SDK, but using an LRU cache makes evictions observable, so translators can be closed when they're no longer needed.
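The idea above can be sketched as follows. This is a minimal plain-JVM illustration, not the PR's actual code: it uses `java.util.LinkedHashMap` in access order with `removeEldestEntry` in place of Android's `LruCache`/`entryRemoved`, and the `Translator` interface here is a hypothetical stand-in for ML Kit's translator class.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical stand-in for an ML Kit translator, which holds
// native model resources and must be closed when discarded.
interface Translator extends AutoCloseable {
    String translate(String text);
    @Override void close(); // no checked exception, unlike AutoCloseable
}

// An LRU cache that closes translators as they are evicted.
// Keyed by language pair, e.g. "en-de".
class TranslatorCache extends LinkedHashMap<String, Translator> {
    private final int maxSize;

    TranslatorCache(int maxSize) {
        // accessOrder = true gives least-recently-used iteration order.
        super(16, 0.75f, true);
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Translator> eldest) {
        if (size() > maxSize) {
            // This is the point the PR description is about: eviction is
            // observable, so the translator's resources can be released.
            eldest.getValue().close();
            return true;
        }
        return false;
    }
}
```

On Android itself the same shape is usually expressed with `android.util.LruCache` by overriding its `entryRemoved` callback; the capacity constant plays the same role either way.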

sheepmaster commented 4 years ago

@ulukaya @gkaldev Another one. Thanks :)

sheepmaster commented 4 years ago

@gkaldev @ulukaya Friendly ping :)

gkaldev commented 4 years ago

I've actually had a CL in flight for a while now that adds an LruCache and some other tweaks, but I haven't had a chance to finish it, as I'm still refining the hysteresis to make the real-time translations less finicky. In the meantime we can just submit this and I'll merge later :)

Btw just curious, why only cache 1 translator? From my testing, it seems most devices should be able to cache around 3 to 6 translators without exceeding app memory allocations.

sheepmaster commented 4 years ago

> I've actually had a CL in flight for a while now that adds an LruCache and some other tweaks, but I haven't had a chance to finish it, as I'm still refining the hysteresis to make the real-time translations less finicky. In the meantime we can just submit this and I'll merge later :)

Awesome, thanks for helping to make this better :)

> Btw just curious, why only cache 1 translator? From my testing, it seems most devices should be able to cache around 3 to 6 translators without exceeding app memory allocations.

No particular reason -- I didn't find translation particularly slow when caching only one translator, so I went with that. But ultimately that's what the constant is for: so developers can play with it. That said, I'm also happy to increase the value.

gkaldev commented 4 years ago

Sounds good, let's go with this for now. Thanks!