Closed lixinye666 closed 11 months ago
Why are two models used?
The preprocessing here corresponds roughly to the image-embedding step in SAM. Segment Anything performs several preprocessing steps, like this:
```python
sam.to(device='cuda')
predictor = SamPredictor(sam)
predictor.set_image(image)
image_embedding = predictor.get_image_embedding().cpu().numpy()
```
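For illustration, the preprocessing that `set_image` performs can be sketched in plain NumPy: resize the longest side to 1024, normalize with SAM's pixel mean/std, and zero-pad to a square. This is a simplified sketch (nearest-neighbor resize instead of SAM's bilinear interpolation), not the library's actual implementation:

```python
import numpy as np

def sam_style_preprocess(image: np.ndarray, target: int = 1024) -> np.ndarray:
    """Sketch of SAM-style preprocessing: resize the longest side to
    `target` (nearest-neighbor here for simplicity), normalize with SAM's
    default pixel mean/std, then zero-pad to (target, target)."""
    h, w = image.shape[:2]
    scale = target / max(h, w)
    new_h, new_w = int(round(h * scale)), int(round(w * scale))

    # Nearest-neighbor resize via index lookup (SAM itself uses bilinear).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = image[rows[:, None], cols]

    # SAM's default normalization constants.
    mean = np.array([123.675, 116.28, 103.53])
    std = np.array([58.395, 57.12, 57.375])
    x = (resized.astype(np.float32) - mean) / std

    # Pad bottom/right with zeros to a square input.
    padded = np.zeros((target, target, 3), dtype=np.float32)
    padded[:new_h, :new_w] = x
    return padded.transpose(2, 0, 1)[None]  # (1, 3, target, target)

out = sam_style_preprocess(np.full((480, 640, 3), 128, dtype=np.uint8))
print(out.shape)  # (1, 3, 1024, 1024)
```

The normalized tensor is what gets fed to the ViT image encoder to produce the embedding.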
The export_pre_model script exports these operations as an ONNX model to enable execution independent of the Python environment.
```
Progress: 0%
2023-07-26 17:04:50.4252918 [E:onnxruntime:test, cuda_call.cc:119 onnxruntime::CudaCall] CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=MSI ; expr=cudnnFindConvolutionForwardAlgorithmEx( GetCudnnHandle(context), s.x_tensor, s.x_data, s.w_desc, s.w_data, s.conv_desc, s.y_tensor, s.y_data, 1, &algo_count, &perf, algo_search_workspace.get(), max_ws_size);
2023-07-26 17:04:50.4346841 [E:onnxruntime:, sequential_executor.cc:494 onnxruntime::ExecuteKernel] Non-zero status code returned while running Conv node. Name:'/mask_downscaling/mask_downscaling.0/Conv' Status Message: CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=MSI ; expr=cudnnFindConvolutionForwardAlgorithmEx( GetCudnnHandle(context), s.x_tensor, s.x_data, s.w_desc, s.w_data, s.conv_desc, s.y_tensor, s.y_data, 1, &algo_count, &perf, algo_search_workspace.get(), max_ws_size);
```
sam_preprocess.onnx