yasenh / libtorch-yolov5

A LibTorch inference implementation of the yolov5
MIT License

How to run multi-GPU inference #57

Closed nobody-cheng closed 2 years ago

nobody-cheng commented 2 years ago

Hello, how can I run multi-GPU inference? No matter which card I specify via devices, it always runs on card 0 by default.

nobody-cheng commented 2 years ago
#include <cuda_runtime_api.h>

for (int j = 0; j < MAX_CUDA_NUM; j++)
{
    torch::jit::script::Module module_;
    try
    {
        // Deserialize the ScriptModule from a file using torch::jit::load().
        cudaSetDevice(j);
        module_ = torch::jit::load(weights, torch::Device(torch::DeviceType::CUDA, j));
    }
    catch (const c10::Error& e)
    {
        std::cerr << "Error loading the model!\n";
        std::exit(EXIT_FAILURE);
    }
    torch::Device tempDevice = torch::Device(torch::kCUDA, j + INDEX_START);
    module_.to(tempDevice);
    module_.to(torch::kHalf);
    module_.eval();
    g_moduleVec.push_back(module_);
}
Ellohiye commented 1 year ago
#include <cuda_runtime_api.h>
for (int j = 0; j < MAX_CUDA_NUM; j++)
{
    torch::jit::script::Module module_;
    try 
    {// Deserialize the ScriptModule from a file using torch::jit::load().
        cudaSetDevice(j);
        module_ = torch::jit::load(weights, torch::Device(torch::DeviceType::CUDA, j));
    }
    catch (const c10::Error& e) 
    {
        std::cerr << "Error loading the model!\n";
        std::exit(EXIT_FAILURE);
    }
    torch::Device tempDevice = torch::Device(torch::kCUDA, j + INDEX_START);
    module_.to(tempDevice);
    module_.to(torch::kHalf);
    module_.eval();
    g_moduleVec.push_back(module_);
}

Hello, did you manage to solve this problem? Is multi-GPU inference possible?