MicrosoftDocs / azure-docs

Open source documentation of Microsoft Azure
https://docs.microsoft.com/azure
Creative Commons Attribution 4.0 International

Deploy to FPGA #51328

Closed 7xuanlu closed 4 years ago

7xuanlu commented 4 years ago

I've been trying to deploy an Azure Custom Vision model, exported in TensorFlow format from the portal, to the FPGA on Azure Stack Edge. Is it necessary to use the default pretrained models in order to convert to ONNX format and deploy successfully to FPGA? Or could I use my own custom models, like the ones I exported from Custom Vision?

By "pretrained models", I mean:



ram-msft commented 4 years ago

@h164654156465 Thanks for the question. We are investigating the issue and will update you shortly.

ram-msft commented 4 years ago

@h164654156465 Please follow the guide below for bringing vision models to IoT Edge; this guide is for you!

It happens to be on GitHub, so you can grab the samples and check the guide itself!

Could you please add more details about the use case you are trying to build?

@jpe316 can you please check on this.

7xuanlu commented 4 years ago

Hi @ram-msft, thank you for the GitHub link; I'll check it out. Here's what I was doing:

I'm wondering whether I can use my model.pb file with this Azure doc. In the "Load featurizer" section, it says I can replace QuantizedResnet50 with other deep neural networks:

So could I replace it with the model exported from the Azure Custom Vision portal?

If this approach is not feasible, is there another way to deploy an Azure Custom Vision model to the FPGA on Azure Stack Edge?

csteegz commented 4 years ago

You need to use the models from the SDK. Unfortunately, these services are not integrated, and I do not believe there will be a way to deploy them at this time.

If you're insistent on trying, there is one possible approach. I am not familiar with the structure of the model produced by Custom Vision, but if the Custom Vision service is using transfer learning, you could possibly split out the classifier it produces and graft it onto the quantized model. I don't have an example of this, or really the ability to provide more guidance on how to do it.
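To make the "graft" idea above concrete, here is a toy NumPy sketch of the general transfer-learning structure being described: a frozen featurizer maps an image to a fixed-length feature vector, and a separate classifier head is applied on top of those features. Everything here is hypothetical, stand-in code: the featurizer is a random projection (not a real quantized ResNet50), and the 5-class dense head is invented for illustration. It shows only the shape of the approach, not a working Custom Vision-to-FPGA pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the frozen, quantized featurizer: it maps an
# image to a 2048-d feature vector (2048 matches ResNet50's pooled output).
# A real deployment would run the accelerated model here instead.
FEATURE_DIM = 2048
IMAGE_SHAPE = (32, 32, 3)  # small toy size; real ResNet50 input is 224x224x3
_projection = rng.standard_normal((int(np.prod(IMAGE_SHAPE)), FEATURE_DIM))

def frozen_featurizer(image: np.ndarray) -> np.ndarray:
    """Flatten the image and apply a fixed random projection (toy featurizer)."""
    return image.reshape(-1) @ _projection

def custom_head(features: np.ndarray, weights: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """The classifier you would split out of the custom model: a dense
    layer followed by softmax over your own classes."""
    logits = features @ weights + bias
    exp = np.exp(logits - logits.max())  # shift for numerical stability
    return exp / exp.sum()

# Toy usage: featurize one random "image", then classify with a 5-class head.
image = rng.standard_normal(IMAGE_SHAPE)
features = frozen_featurizer(image)
w = rng.standard_normal((FEATURE_DIM, 5))  # 5 hypothetical custom classes
b = np.zeros(5)
probs = custom_head(features, w, b)
print(probs.shape)
```

The design point is the clean seam between the two halves: if the exported model really is a frozen featurizer plus a small head, only the head's weights would need to be extracted and reattached on top of the accelerated featurizer.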

PeterCLu commented 4 years ago

@h164654156465, unfortunately this isn't supported. From a member of the team: "FPGA on Azure only supports the accelerated models included in the SDK (i.e. ResNet50, ResNet152, etc.); see the documentation here: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-fpga-web-service

It does not support models exported from Azure Custom Vision today." You're more than welcome to open a feature request here. Since this is not a supported feature and cannot be addressed through this doc issue, I will proceed to #please-close this issue.