Xilinx / Vitis-AI

Vitis AI is Xilinx’s development stack for AI inference on Xilinx hardware platforms, including both edge devices and Alveo cards.
https://www.xilinx.com/ai
Apache License 2.0

DPU floating point output data transformation #326

Closed JerrySciangula closed 3 years ago

JerrySciangula commented 3 years ago

Hi everyone, I'm a master's student at Scuola Superiore Sant'Anna in Pisa, working on my thesis. I'm trying to accelerate a custom network model on a ZCU102 UltraScale+ board using the new Vitis-AI tools v1.3.

Using the VART resnet50 demo as a base template, I built my own application. Now I'm interested in using the hardware implementation of the Softmax, but the API to trigger it needs an input array of INT8_T values, whereas the DPU operations give me FloatingPoint32 values. I think the INT8_T input values need to be in fixed-point representation, but I can't figure out how to convert FP32 into INT8_T fixed-point data.

Is there a way to get just INT8_T values from the DPU? Alternatively, is there some function or method that I can use to do the transformation?

Thanks in advance.

guohaot-Xlnx commented 3 years ago

Hello @JerrySciangula, please look at this line of code in the link below: https://github.com/Xilinx/Vitis-AI/blob/01cc45caab5244932f0896998b087296f8c3c7d8/demo/VART/resnet50/src/main.cc#L237

    batchTensors.push_back(std::shared_ptr<xir::Tensor>(xir::Tensor::create(
        outputTensors[0]->get_name(), out_dims,
        xir::DataType{xir::DataType::FLOAT, sizeof(float) * 8u})));

You can specify the type in the xir::DataType. The type enum is defined here: https://github.com/Xilinx/Vitis-AI/blob/01cc45caab5244932f0896998b087296f8c3c7d8/tools/Vitis-AI-Runtime/VART/xir/include/xir/util/data_type.hpp#L25

    enum Type { INT, UINT, XINT, XUINT, FLOAT, UNKNOWN };

Regards
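Building on that reply, if the goal is 8-bit fixed-point output rather than float, a variant of the same `xir::Tensor::create` call could request the 8-bit XINT type instead. This is only a sketch based on the snippet above; whether the runner actually delivers 8-bit data for a given model is worth verifying:

```cpp
// Sketch only: request an 8-bit XINT output tensor instead of 32-bit FLOAT.
batchTensors.push_back(std::shared_ptr<xir::Tensor>(xir::Tensor::create(
    outputTensors[0]->get_name(), out_dims,
    xir::DataType{xir::DataType::XINT, 8u})));
```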

JerrySciangula commented 3 years ago

Hi @levent9402, thank you very much for your reply.