larq / compute-engine

Highly optimized inference engine for Binarized Neural Networks
https://docs.larq.dev/compute-engine
Apache License 2.0

:arrow_up: tensorflow@2.6.2 #687

Closed lgeiger closed 2 years ago

lgeiger commented 3 years ago

What do these changes do?

This PR updates LCE to TensorFlow 2.6.1.

How Has This Been Tested?

This has not been tested yet, let's see what CI thinks about it.

Related issue number

Fixes #680

lgeiger commented 3 years ago

> Looks good to me. Have you run some basic benchmarks to make sure there are no speed regressions?

Not yet. I am also seeing some test failures locally which do not seem to reproduce on CI. Though I haven't had time to investigate yet.

lgeiger commented 2 years ago

> Not yet. I am also seeing some test failures locally which do not seem to reproduce on CI. Though I haven't had time to investigate yet.

The conversion crash I was experiencing locally occurred in this test: https://github.com/larq/compute-engine/blob/06d6ecefae07b294ab459949906dbe226bbd018b/larq_compute_engine/tests/end2end_test.py#L231-L257 with the toy_model, convert_keras_model, and experimental_default_int8_ranges. In the past we've seen that the experimental_default_int8_ranges flag can be buggy. Given that this crash only happens locally, and only in the fallback convert_keras_model path rather than the saved-model based converter, I am happy to call this PR ready for review.

I will run some basic benchmarks at some point to ensure that we haven't introduced any regressions, at which point we should be able to merge this PR.

luciengaitskell commented 2 years ago

@lgeiger Might also want to rename the PR 😄

lgeiger commented 2 years ago

I benchmarked the QuickNet family of models on my Pixel 5 with 1 and 4 threads and saw no performance regression, so this PR is good to go from my side.
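For reference, an on-device run like the one described above can be done with TensorFlow Lite's benchmark_model tool over adb. This is a sketch only: it assumes an attached Android device and a prebuilt benchmark_model binary with the LCE ops linked in, and the file names and paths are illustrative, not taken from this PR.

```shell
# Push an assumed benchmark binary and model to the device.
adb push benchmark_model /data/local/tmp/
adb push quicknet.tflite /data/local/tmp/
adb shell chmod +x /data/local/tmp/benchmark_model

# Run with 1 and 4 threads, matching the comparison described above.
for threads in 1 4; do
  adb shell /data/local/tmp/benchmark_model \
    --graph=/data/local/tmp/quicknet.tflite \
    --num_threads="$threads"
done
```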