yasenh / libtorch-yolov5

A LibTorch inference implementation of YOLOv5
MIT License

GTX 1660 SUPER: no detections #65

Open YangSangWan opened 2 years ago

YangSangWan commented 2 years ago

Hello! Thank you for your repository.

I tried image detection on a GeForce RTX 3060 with CUDA 11.1, and the results are good.

However...

On a GeForce GTX 1660 SUPER with CUDA 11.1 or 11.8, there are no detections and no errors. In PyTorch (cu113) on the same card, the results are good.

So I started debugging the source code.

On the GeForce RTX 3060:

auto det = torch::masked_select(detections[batch_i], conf_mask[batch_i]).view({-1, num_classes + item_attr_size});
qDebug() << "det.sizes().size() == " << det.sizes().size();
qDebug() << "det.size(0) == " << det.size(0);
qDebug() << "det.size(1) == " << det.size(1);

det.sizes().size() is 2, det.size(0) is 157, det.size(1) is 20

But on the GeForce GTX 1660 SUPER:

auto det = torch::masked_select(detections[batch_i], conf_mask[batch_i]).view({-1, num_classes + item_attr_size});
qDebug() << "det.sizes().size() == " << det.sizes().size();
qDebug() << "det.size(0) == " << det.size(0);
qDebug() << "det.size(1) == " << det.size(1);

det.sizes().size() is 2, det.size(0) is 0, det.size(1) is 20

Why is det.size(0) 0 on the GTX 1660 SUPER?

xmcchv commented 1 year ago

I also ran into this. In PyTorch I added "torch.backends.cudnn.enabled = False" to fix it, but I don't know how to do the same in LibTorch. What should I set in CMakeLists or in code? My environment: 1660 SUPER, torch 1.12.0+cu116, CUDA 11.6, cuDNN 8.6.