Xilinx / finn-base

Open Source Compiler Framework using ONNX as Frontend and IR
https://finn-base.readthedocs.io/
BSD 3-Clause "New" or "Revised" License

Faster & smaller shape inference #51

Closed maltanar closed 2 years ago

maltanar commented 2 years ago

Use RandomNormal instead of Const for shape inference with custom ops, which removes the need to allocate a dummy tensor filled with values. Materializing those dummy tensors caused slow runtimes and ONNX out-of-memory errors for networks that use large activations.
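The memory argument can be sketched with a rough back-of-the-envelope comparison (a hedged illustration, not code from this PR; the byte sizes assume float32 dummy values and 64-bit shape attribute entries): a Const node must carry one value per tensor element, while a RandomNormal node only stores the shape attribute, so its cost scales with the tensor rank rather than the element count.

```python
from math import prod

def const_footprint(shape, bytes_per_elem=4):
    # Const-based dummy tensor: every element must be materialized,
    # so memory scales with the number of elements (float32 assumed).
    return prod(shape) * bytes_per_elem

def randomnormal_footprint(shape, bytes_per_dim=8):
    # RandomNormal node: only the shape attribute is stored,
    # so memory scales with the tensor rank (int64 dims assumed).
    return len(shape) * bytes_per_dim

# Hypothetical large activation tensor, e.g. from a segmentation network.
act = (1, 64, 512, 512)
print(const_footprint(act))         # 67108864 bytes (~64 MiB)
print(randomnormal_footprint(act))  # 32 bytes
```

For shape inference only the tensor's shape and dtype matter, not its values, which is why a value-free node like RandomNormal suffices.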