Closed Tristesse-stk closed 2 years ago
@Tristesse-stk Modify the power mode to 2-core 15W, and turbo the CPU.
Great! Your solution is helpful and the inference speed now matches yours. I just gave a star to your project.
Hi! Can you explain how to "modify the power mode to 2 core 15W, and turbo the cpu"? My environment: RTX 2080 Ti, TensorRT 8.0.6, CUDA 11.3, cuDNN 8.2.1.
Thank you!
In the upper right corner of the interface.
@denred0 But that works only on NVIDIA Jetson boards.
@Nuzhny007 Yes, I understood
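For anyone else landing here: on Jetson boards the "modify the power mode" step can also be done from the command line with NVIDIA's `nvpmodel` and `jetson_clocks` tools instead of the desktop widget. A minimal sketch is below; note that the mode ID for a "2-core 15W" profile differs between boards, so the `-m 2` value is only an assumption and should be checked against your board's mode list first:

```shell
# List the power modes defined for this board (IDs and names vary per device,
# see /etc/nvpmodel.conf)
sudo nvpmodel -p --verbose

# Query the currently active mode
sudo nvpmodel -q

# Switch to the desired mode, e.g. a 15W 2-core profile.
# NOTE: the mode ID "2" here is an assumption -- use the ID your board
# lists for the 2-core 15W profile.
sudo nvpmodel -m 2

# Lock CPU/GPU/EMC clocks to their maximum for the current power mode
# ("turbo the cpu")
sudo jetson_clocks
```

Because `jetson_clocks` pins clocks to the ceiling of the active `nvpmodel` mode, run it after selecting the mode, not before.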
Hi, thank you for sharing a great open source TensorRT project. I have a question after successfully running sample_detector with yolov4: the inference time I measured is longer than the time reported in your benchmark. My environment: Jetson Xavier NX, TensorRT 7.1.3, CUDA 10.2, cuDNN 8.0.0, OpenCV 4.1.1.
My results are below:

Precision | Image size | Speed
-- | -- | --
FP32 | 416×416 | 180 ms
FP16 | 416×416 | 90 ms

As you can see, it is slower than yours.

![Image](https://user-images.githubusercontent.com/55178320/153114220-b79db3d0-5e4d-4976-a8e2-2ff4e1bc12fd.png)
If you can suggest some possible solutions, I would be very grateful and will give you a star.