Closed 564192234 closed 1 year ago
On Jetson, system RAM and GPU memory are counted together (unified memory), right?
What does the memory usage look like if you drop the OpenCV display and only run inference?
```python
import cv2
import time
import numpy as np
from mmdeploy_python import Detector

detector = Detector("/path/to/model", "cuda", 0)
img = cv2.imread("/path/to/img")  # note: `cv.imread` in the original is a typo for `cv2.imread`
while True:
    result = detector(img)
```
I must have misread earlier — in my previous runs I had already commented out all of the cv2 display calls. Running the code you posted, CPU usage grew by 1.20 GB and GPU by 0.45 GB, about the same as before. The end2end.engine is only 13 MB — is this normal? Is there any way to reduce the memory footprint?
Isn't memory unified on Jetson devices? How did you measure separate CPU and GPU increases?
You can benchmark the model with trtexec and see how much memory it takes during the run. If it is about the same, then that is probably just how it is.
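For reference, such a trtexec run might look like the following (a sketch only: the engine path is a placeholder, and the location of the trtexec binary varies across JetPack installs):

```shell
# Benchmark the serialized engine directly with trtexec; the memory used
# during this run approximates the TensorRT backend's floor for this model.
/usr/src/tensorrt/bin/trtexec \
    --loadEngine=mmdet_hand/trt_hand/end2end.engine \
    --iterations=100
```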
I measured it with jtop. If this is normal, I'll leave it as-is for now. By the way, I also wanted to ask: with the Model Converter inference API, CPU usage grew by 3 GB and GPU by 1 GB, nearly double the SDK inference. Is that also normal? So to use the fewest resources I should run inference through the SDK, is that right?
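Since physical memory on Jetson is unified, the CPU/GPU split that jtop reports can be ambiguous. One way to cross-check the CPU-side number is to read the inference process's resident set size straight from procfs (a minimal sketch; `rss_mb` is a hypothetical helper, not part of mmdeploy, and assumes Linux):

```python
# Sketch (assumption: Linux/Jetson with procfs). Reads the calling process's
# resident set size (CPU-side memory) from /proc/<pid>/status. On Jetson the
# physical memory is unified, so jtop's "GPU" figure is carved out of the same
# pool; RSS here reflects only CPU-side allocations.
def rss_mb(pid: str = "self") -> float:
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                # line looks like "VmRSS:    123456 kB"
                return int(line.split()[1]) / 1024.0
    return 0.0

print(f"current process RSS: {rss_mb():.1f} MB")
```

Sampling this before and after constructing the `Detector` gives the CPU-side cost of loading the SDK and engine, independent of jtop's aggregation.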
This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.
This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now.
> from mmdeploy_python import Detector

@irexyc @564192234 I installed mmdeploy 1.3 following the official tutorial, but the mmdeploy_runtime package was not installed with it. How can I install that package? My device is a Jetson Nano on JetPack 4.6.1 — how did you install this package on your Jetson Nano?
Hello, did you ever solve this? I'm planning to run real-time segmentation inference on a Jetson and can't find anywhere to install mmdeploy_runtime. If you have a solution, I'd appreciate a reply.
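As far as I know there are no prebuilt aarch64 wheels of mmdeploy_runtime on PyPI, so on Jetson the SDK Python bindings are typically built from source. A sketch under that assumption (CMake option names follow the mmdeploy build docs; backend/device values and the TensorRT path may need adjusting for your setup):

```shell
git clone -b main --recursive https://github.com/open-mmlab/mmdeploy.git
cd mmdeploy && mkdir -p build && cd build
cmake .. \
    -DMMDEPLOY_BUILD_SDK=ON \
    -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
    -DMMDEPLOY_TARGET_DEVICES="cuda" \
    -DMMDEPLOY_TARGET_BACKENDS="trt" \
    -DTENSORRT_DIR=/usr/lib/aarch64-linux-gnu   # adjust to your TensorRT install
make -j"$(nproc)"
# the built Python module lands under build/lib; expose it via PYTHONPATH
export PYTHONPATH=$PWD/lib:$PYTHONPATH
```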
Checklist
Describe the bug
Using the official mmdet 2D hand detector (ssdlite_mobilenetv2_scratch_600e_onehand-4f9f8686_20220523.pth with ssdlite_mobilenetv2_scratch_600e_onehand.py), I converted the model to TensorRT on the Jetson using detection_tensorrt_static-300x300.py; the resulting end2end.engine file is only about 13 MB. I want to run real-time detection from a camera. For inference I first adapted inference_model from the Model Converter inference API; once running, GPU memory usage was about 1.2 GB. I then switched to an adaptation of the inference SDK, which used about 1 GB. The current inference speed keeps up with the camera's 30 FPS and accuracy is high. For a model under 15 MB, the CPU and GPU memory usage feels high. Did I go wrong somewhere, and is there a good way to reduce the memory and CPU usage?
Reproduction
```python
# ///////// Inference adapted from the Model Converter API (inference_model);
# once running, this uses roughly 1-2 GB of GPU memory.
import cv2
import mmcv
import numpy as np
import torch
from typing import Any, Sequence, Union
from mmdeploy.utils import get_input_shape, load_config
from mmdeploy.apis.utils import build_task_processor

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

deploy_cfg = "mmdeploy/configs/mmdet/detection/detection_tensorrt_static-300x300.py"
model_cfg = "mmdet_hand/ssdlite_mobilenetv2_scratch_600e_onehand.py"
backend_files = ["mmdet_hand/trt_hand/end2end.engine"]

deploy_cfg, model_cfg = load_config(deploy_cfg, model_cfg)
task_processor = build_task_processor(model_cfg, deploy_cfg, "cuda:0")
model = task_processor.init_backend_model(backend_files)
input_shape = get_input_shape(deploy_cfg)

# set up the display window
cv2.namedWindow("video")
cv2.resizeWindow("video", 640, 480)

while True:
    # read a frame from the camera (loop body elided in the original post)
    ...

cv2.destroyAllWindows()
```
```python
# /////////////////////// Inference adapted from the inference SDK
# (mmdeploy_python); once running, this uses roughly 1 GB of GPU memory.
import cv2
import time
import numpy as np
from mmdeploy_python import Detector

# set up the display window
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

detector = Detector("trt_hand", "cuda", 0)

while True:
    # read a frame from the camera (loop body elided in the original post)
    ...

cv2.destroyAllWindows()
```
Environment
Error traceback
No response