GPUOpen-LibrariesAndSDKs / RadeonML

https://gpuopen.com/radeon-prorender-suite/

RadeonML: AMD Inference Library

1. C/C++ API

2. Code examples

2.1. List of supported models for load_model sample

To run inference with a supported model, substitute 'path/model', 'path/input' and 'path/output' with the correct paths in the load_model sample. Additional information about the supported models is available at https://github.com/onnx/models

3. System requirements

3.1 Features supported

For more information, see documentation at this link https://radeon-pro.github.io/RadeonProRenderDocs/rml/about.html

3.2 Models supported by backend

Model            DirectML  MIOpen  MPS
Inception V1     Yes       Yes     No
Inception V2     Yes       Yes     No
Inception V3     Yes       Yes     No
Inception V4     Yes       Yes     No
MobileNet V1     Yes       Yes     No
MobileNet V2     Yes       Yes     No
ResNet V1 50     Yes       No      No
ResNet V2 50     Yes       No      No
VGG 16           Yes       No      No
VGG 19           Yes       No      No
UNet (denoiser)  Yes       Yes     Yes
ESRGAN           Yes       Yes     Yes
RTUnet           Yes       Yes     Yes

Other models may also work, since they use similar operators, but we have not verified them.

3.3 DirectML and DirectX 12 interop

4. Building and running the samples

You will need at least CMake 3.10 to build the samples.
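For reference, a minimal CMakeLists.txt for building one sample against the library might look like the sketch below; the target name, source file name, and RadeonML library name here are assumptions for illustration, not the actual build files shipped in the repository:

```cmake
cmake_minimum_required(VERSION 3.10)   # matches the stated minimum
project(load_model_sample CXX)

# Hypothetical sample source and library name -- adjust to the repository layout.
add_executable(load_model load_model.cpp)
target_link_libraries(load_model PRIVATE RadeonML)
```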

The input must contain the contiguous data of a tensor with the specified dimensions. The input .bin files in the repository don't necessarily represent real data at the moment; they just show how to format the data.

5. Future

We aim to provide the same level of features for every backend, and will release monthly updates toward that goal.