ChuRuaNh0 / FastSam_Awsome_TensorRT


[TRT] [E] 1: [defaultAllocator.cpp::deallocate::35] Error Code 1: Cuda Runtime (invalid argument) & solution #9

Open fdap opened 1 year ago

fdap commented 1 year ago

Hi, thanks for your awesome code. I built the engine file fast_sam_1024.plan successfully using your code after modifying some parameters such as the image size. I can also get results from the inference_trt.py script, but with the error below:

[TRT] [E] 1: [defaultAllocator.cpp::deallocate::42] Error Code 1: Cuda Runtime (invalid argument)
Segmentation fault (core dumped)

Borrowing the solution from [1], I was able to address this error by moving the variables below outside of the function allocate_buffers_nms() in trt_loader.py:

inputs = []
outputs = []
bindings = []
stream = cuda.Stream()
out_shapes = []
input_shapes = []
out_names = []
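The underlying issue appears to be object lifetime: when these lists (and the PyCUDA device allocations and stream they hold) live only inside allocate_buffers_nms(), Python may garbage-collect them after the CUDA context owned by the engine has already been torn down, and the deferred cudaFree then fails with "invalid argument". Below is a minimal pure-Python model of that ordering problem; Context and DeviceBuffer are stand-ins for illustration, not the real pycuda/TensorRT classes:

```python
# Stand-ins modeling teardown order; NOT the real pycuda/TensorRT classes.
destroyed = []

class Context:
    """Models the CUDA context owned by the engine."""
    def close(self):
        destroyed.append("context")

class DeviceBuffer:
    """Models a device allocation whose free() needs a live context."""
    def __init__(self, ctx_alive):
        self.ctx_alive = ctx_alive
    def free(self):
        # Freeing after the context is gone -> "invalid argument"
        if not self.ctx_alive():
            raise RuntimeError("Cuda Runtime (invalid argument)")
        destroyed.append("buffer")

ctx = Context()
buf = DeviceBuffer(ctx_alive=lambda: "context" not in destroyed)

# Wrong order: context torn down first, buffer freed afterwards.
ctx.close()
try:
    buf.free()
except RuntimeError as e:
    print(e)  # the same error the issue reports

# Right order (what hoisting the lists out of the function achieves:
# the buffers outlive the call but die before interpreter teardown).
destroyed.clear()
ctx2 = Context()
buf2 = DeviceBuffer(ctx_alive=lambda: "context" not in destroyed)
buf2.free()   # freed while the context is still alive
ctx2.close()
```

Hoisting the lists to module scope changes when they are destroyed relative to the engine, which is why the crash disappears.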

According to [2], these variables may need to have the same lifetime as the engine, which could be the cause. I am quite new to TensorRT, so I wonder whether my environment induced the problem or whether there is something more about TensorRT I need to be aware of.

I would really appreciate your reply. 😃

reference: [1] https://github.com/NVIDIA/TensorRT/issues/2852 [2] https://github.com/NVIDIA/TensorRT/issues/2052

ChuRuaNh0 commented 1 year ago

I think you need to select the right CUDA version, maybe 11.0 or 10.0. You're the first person to hit this error.

tianjiahao commented 1 year ago

I met the same problem when running the inference_trt.py script: [TRT] [E] 1: [defaultAllocator.cpp::deallocate::43] Error Code 1: Cuda Runtime (invalid argument). CUDA version: 11.6. Do you have any suggestions on how to resolve this?

tianjiahao commented 1 year ago

Solved it by moving the variables below outside of the function:

inputs = []
outputs = []
bindings = []
stream = cuda.Stream()
out_shapes = []
input_shapes = []
out_names = []

CvBokchoy commented 7 months ago

> I think you need to select the right CUDA version, maybe 11.0 or 10.0. You're the first person to hit this error.

My error is this; do you have any suggestions? ERROR: 1: [genericReformat.cu::genericReformat::executeMemcpy::1583] Error Code 1: Cuda Runtime (invalid argument)

ChuRuaNh0 commented 7 months ago

You're using a TensorRT plan; did you check your CUDA version? And did you get any results? The allocation error sometimes doesn't affect the output.


ChuRuaNh0 commented 7 months ago

Did you convert to ONNX or TensorRT?


CvBokchoy commented 7 months ago


Now I get this error when running inference with TensorRT in C++, even though the shape of my input data matches what I see in the netron tool.

ChuRuaNh0 commented 7 months ago

Did you check your TensorRT version? The C++ API computes directly on raw buffers and is sensitive to these variables, so your CUDA version may be conflicting with TensorRT.


ChuRuaNh0 commented 7 months ago

Can I check your inference code?


CvBokchoy commented 7 months ago


This is my TensorRT inference code:

#include "FastSAM.h"

// Hyperparameters
const string engine_path = "fast256.engine";
const string image_file = "1.jpg";
const int BATCH_SIZE = 1;
const int INPUT_CHANNEL = 3;
const int INPUT_WIDTH = 256;
const int INPUT_HEIGHT = 256;

float* loadImage(string path, float* mean, float* std, int size)
{
    cv::Mat img = cv::imread(path);
    cv::resize(img, img, cv::Size(INPUT_WIDTH, INPUT_HEIGHT));
    cv::cvtColor(img, img, cv::COLOR_BGR2RGB);

    float* data = new float[size];
    int index = 0;
    for (int c = 0; c < INPUT_CHANNEL; ++c)
    {
        for (int h = 0; h < INPUT_HEIGHT; ++h)
        {
            for (int w = 0; w < INPUT_WIDTH; ++w)
            {
                data[index++] = (img.at<cv::Vec3b>(h, w)[c] / 255.f - mean[c]) / std[c];
            }
        }
    }

    return data;
}

int main()
{
    // Create inference runtime and deserialize the engine
    Logger gLogger;
    unique_ptr<nvinfer1::IRuntime> runtime(nvinfer1::createInferRuntime(gLogger));
    ifstream engine_stream(engine_path, ios::binary);
    if (!engine_stream)
    {
        cerr << "Failed to open engine file: " << engine_path << endl;
        return -1;
    }
    engine_stream.seekg(0, ios::end);
    const size_t size = engine_stream.tellg();
    engine_stream.seekg(0, ios::beg);
    vector<char> engine_data(size);
    engine_stream.read(engine_data.data(), size);
    unique_ptr<nvinfer1::ICudaEngine> engine(runtime->deserializeCudaEngine(engine_data.data(), size, nullptr));
    unique_ptr<nvinfer1::IExecutionContext> context(engine->createExecutionContext());

    // Load input image
    float* image_data = loadImage(image_file,
                                  new float[3] {0.485f, 0.456f, 0.406f},
                                  new float[3] {0.229f, 0.224f, 0.225f},
                                  BATCH_SIZE * INPUT_CHANNEL * INPUT_WIDTH * INPUT_HEIGHT);

    // Create input and output buffers on device
    void* buffers[7];
    cudaMalloc(&buffers[0], BATCH_SIZE * INPUT_CHANNEL * INPUT_WIDTH * INPUT_HEIGHT * sizeof(float)); // Input buffer
    cudaMalloc(&buffers[1], 49728 * sizeof(float));  // Output buffer
    cudaMalloc(&buffers[2], 107520 * sizeof(float)); // Output buffer
    cudaMalloc(&buffers[3], 26880 * sizeof(float));  // Output buffer
    cudaMalloc(&buffers[4], 6720 * sizeof(float));   // Output buffer
    cudaMalloc(&buffers[5], 43008 * sizeof(float));  // Output buffer
    cudaMalloc(&buffers[6], 131072 * sizeof(float)); // Output buffer

    // Copy input data to device buffer
    cudaMemcpy(buffers[0], image_data, BATCH_SIZE * INPUT_CHANNEL * INPUT_WIDTH * INPUT_HEIGHT * sizeof(float), cudaMemcpyHostToDevice);

    // Run inference
    context->executeV2(buffers);

    return 0;
}

This is the shape of the ONNX model when viewed with netron:

![Uploading 微信截图_20240322175654.png…]()
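As a side note, the hard-coded element counts in the cudaMalloc calls are consistent with a 256×256 input and YOLOv8-style heads at strides 8/16/32 (32² + 16² + 8² = 1344 grid cells) plus a 32×64×64 mask prototype. A quick sanity check of the arithmetic (the per-binding channel counts here are inferred by dividing each size by 1344, so treat them as an assumption):

```python
from math import prod

INPUT = 256
# Grid cells summed over the three detection strides.
cells = sum((INPUT // s) ** 2 for s in (8, 16, 32))  # 1024 + 256 + 64 = 1344

# Channel counts inferred from the hard-coded sizes in the C++ snippet.
sizes = {
    "buffers[1]": 37 * cells,          # 49728
    "buffers[2]": 80 * cells,          # 107520
    "buffers[3]": 20 * cells,          # 26880
    "buffers[4]": 5 * cells,           # 6720
    "buffers[5]": 32 * cells,          # 43008
    "buffers[6]": prod((32, 64, 64)),  # 131072 mask prototype elements
}
for name, count in sizes.items():
    print(name, count)
```

If any of these counts disagrees with the binding shapes netron (or the engine itself) reports, the corresponding cudaMalloc/cudaMemcpy size is wrong, which is one way to get "invalid argument".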

ChuRuaNh0 commented 7 months ago

Can you re-upload your image?


CvBokchoy commented 7 months ago


微信截图_20240322175654

CvBokchoy commented 7 months ago


Would it be convenient to add me on WeChat? Communication is easier there (18576742393).

ChuRuaNh0 commented 7 months ago

You are using CUDA functions such as cudaMalloc() and cudaMemcpy() to allocate and copy data to device memory. This error suggests that one of these functions is receiving an invalid argument. You can refer to my code; it's quite similar: https://github.com/ChuRuaNh0/TensorRT_CPP/blob/64b1192f37c1b296f9abe45342e2ea52ad17d6e7/TensorRT-CPP/YOLOv7/C%2B%2B/yolov7.cpp#L94
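Rather than hard-coding the seven byte counts, it is safer to derive each buffer size from the binding shapes the engine itself reports (in the TensorRT Python API, engine.get_binding_shape(i)). A self-contained sketch of that size computation, using stand-in shapes for a 256×256 FastSAM engine rather than a real deserialized engine:

```python
from math import prod

FLOAT_BYTES = 4  # assumes FP32 bindings

def buffer_bytes(shape):
    """Byte count for one binding; any -1 (dynamic dim) must be resolved first."""
    if any(d < 0 for d in shape):
        raise ValueError(f"dynamic dimension in {shape}; set it on the context first")
    return prod(shape) * FLOAT_BYTES

# Stand-in shapes (assumed for illustration, not read from a real engine).
bindings = {
    "images":  (1, 3, 256, 256),
    "output0": (1, 37, 1344),
    "proto":   (1, 32, 64, 64),
}
for name, shape in bindings.items():
    print(name, buffer_bytes(shape))

# A still-dynamic shape fails loudly here instead of passing a bogus
# size to cudaMalloc and surfacing later as "invalid argument":
try:
    buffer_bytes((1, 3, -1, -1))
except ValueError as e:
    print("error:", e)
```

The same pattern applies in C++ via engine->getBindingDimensions(i): compute each size from the engine's dimensions so the allocation can never silently disagree with the network.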


ChuRuaNh0 commented 7 months ago

My C++ code for this section is used in a product, so I'm going to release it soon.


CvBokchoy commented 7 months ago

> You are using CUDA functions (e.g. `cudaMalloc()` and `cudaMemcpy()`) to allocate and copy data to device memory. This error indicates that one of these CUDA functions received an invalid argument. You can refer to my code, it is a bit the same: https://github.com/ChuRuaNh0/TensorRT_CPP/blob/64b1192f37c1b296f9abe45342e2ea52ad17d6e7/TensorRT-CPP/YOLOv7/C%2B%2B/yolov7.cpp#L94

I previously deployed a classification model the same way and didn't see this error. I'll try it your way, thanks.

CvBokchoy commented 7 months ago

> My C++ code for this section is used in a product, so I'm going to release it soon.

Could you release it in the next few days?