Closed — kenilaivid closed this issue 1 year ago.
Hi @kenilaivid, could you please share more details: which OpenVINO version you use, share the model if possible?
Hi @sshlyapn, I am using OpenVINO 2022.3.0.
Thanks @kenilaivid. As of now, the GPU plugin supports tensors of rank 6 or less, and that could be the reason. Could you please check your model's IR for layers with rank > 6? Otherwise it may be a bug in the GPU plugin and we will need to investigate it.
@sshlyapn How can I check the rank of a layer in the model's IR file?
@kenilaivid, you can open your model (/home/aivid12/Desktop/kenil/Shoplifting/model.xml) in any text editor and check whether any layers have more than 6 dimensions in their input or output shape descriptions. For example, this port has rank 3:
<port id="3" precision="FP32" names="223">
    <dim>1</dim>
    <dim>128</dim>
    <dim>768</dim>
</port>
@sshlyapn yes, I do have layers with more than 6 dimensions in the output shape description. What can be done about this?
<port id="2" precision="FP32" names="Reshape_output_0">
    <dim>1</dim>
    <dim>5</dim>
    <dim>16</dim>
    <dim>3</dim>
    <dim>16</dim>
    <dim>20</dim>
    <dim>16</dim>
    <dim>20</dim>
</port>
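To illustrate why this port is a problem: the shape above has rank 8, which exceeds the GPU plugin's rank-6 limit. A NumPy sketch (my own illustration, not from the thread) shows that merging adjacent axes keeps the same memory layout while lowering the rank; whether such a rewrite is valid for this model depends entirely on the ops that follow the Reshape, so treat this only as an illustration.

```python
import numpy as np

# The Reshape output above is rank 8: (1, 5, 16, 3, 16, 20, 16, 20).
x = np.zeros((1, 5, 16, 3, 16, 20, 16, 20), dtype=np.float32)

# Merging adjacent axes preserves the memory layout but lowers the rank to 6.
# A Transpose that interleaves these axes downstream would make this merge
# invalid, so this is not a general fix.
y = x.reshape(1, 5 * 16 * 3, 16, 20, 16, 20)
assert y.shape == (1, 240, 16, 20, 16, 20)
```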
@kenilaivid unfortunately, there is no simple solution. We plan to extend the range of supported ranks in future releases to align the behavior with the CPU plugin. As a workaround, you can try the HETERO plugin if that is acceptable for you (HETERO:GPU,CPU), which will try to identify operations unsupported by the GPU and execute them on the CPU. But even with this approach you may face similar problems, and then the only option will be to wait for wider rank support in the GPU plugin.
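The HETERO workaround above can be sketched as follows. The `hetero_device` helper is hypothetical (only the `"HETERO:GPU,CPU"` string itself comes from the thread); the commented usage assumes the OpenVINO 2022.x Python API and a placeholder `model.xml` path.

```python
def hetero_device(*devices):
    """Build a HETERO device string from a priority-ordered list of devices.

    Hypothetical helper: 'HETERO:GPU,CPU' means "run on GPU where possible,
    fall back to CPU for operations the GPU plugin does not support".
    """
    return "HETERO:" + ",".join(devices)

# Usage (requires an OpenVINO install with both devices; path is a placeholder):
# from openvino.runtime import Core
# core = Core()
# model = core.read_model("model.xml")
# compiled = core.compile_model(model, hetero_device("GPU", "CPU"))
```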
@kenilaivid Could you try running with the latest master? We've added support for 7D and 8D tensors for several primitives (eltwise, transpose, reshape, reduce) here: https://github.com/openvinotoolkit/openvino/pull/16810 Please let us know if that fixes your model or if any issues remain.
Ref. 101932
Closing this, as the referenced PR has been merged to the master branch. I hope the previous responses were sufficient to help you proceed; feel free to reopen and ask additional questions related to this topic.
I want to run an FP32 model on GPU. Here is the code I am using for inference:
But I am getting this error:

It works when I run it on CPU.
Here is the clinfo output: