larq / compute-engine

Highly optimized inference engine for Binarized Neural Networks
https://docs.larq.dev/compute-engine
Apache License 2.0

Pin flatbuffers to <2.0 #648

Closed · Tombana closed this 3 years ago

Tombana commented 3 years ago

What do these changes do?

The converter is incompatible with flatbuffers==2.0: specifically, re-serialization of a flatbuffer breaks when we remove the Quantize/Dequantize ops in Python. This PR therefore pins flatbuffers<2.0, just as TensorFlow does.
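
For illustration, a minimal sketch of what such a pin might look like in a packaging file. The file name, package metadata, and lower bound are assumptions made for the example; only the `<2.0` constraint itself comes from this PR:

```python
# Hypothetical excerpt from setup.py; surrounding details are illustrative.
from setuptools import setup

setup(
    name="larq-compute-engine",
    install_requires=[
        # flatbuffers 2.0 breaks re-serialization of converted models,
        # so constrain to the 1.x series (mirroring TensorFlow's own pin).
        # The lower bound here is an assumption for the example.
        "flatbuffers>=1.12,<2.0",
    ],
)
```

The same constraint could equivalently be expressed in a requirements file as `flatbuffers<2.0`.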