Closed — @lingzijian closed this issue 1 year ago.
Hi @lingzijian, what is the command line you are running?
Can the video-viewer utility view the video stream that you are using?
I am getting a similar error when launching the video source with flip method `none`; with the other flip methods it works fine (tried horizontal, vertical, rotate-180).
Using latest master (9bf5549), JetPack 4.6 on a Jetson Nano Devkit with a Raspberry Pi HQ camera.
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device csi://0
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, framerate=10/1, format=(string)NV12 ! nvvidconv flip-method=0 ! video/x-raw(memory:NVMM) ! appsink name=mysink
[gstreamer] gstCamera successfully created device csi://0
[video] created gstCamera from csi://0
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: csi://0
- protocol: csi
- location: 0
-- deviceType: csi
-- ioType: input
-- codec: raw
-- width: 3840
-- height: 2160
-- frameRate: 10.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: false
-- flipMethod: none
-- loop: 0
-- rtspLatency 2000
------------------------------------------------
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3840 x 2160 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 22.250000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 22.250000; Exposure Range min 13000, max 683709000;
GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 0
Output Stream W = 3840 H = 2160
seconds to Run = 0
Frame Rate = 29.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
nvbuf_utils: dmabuf_fd 1030 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...
NvBufferGetParams failed for dst_dmabuf_fd
nvbuffer_transform Failed
[gstreamer] gstDecoder -- failed to retrieve next image buffer
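For anyone debugging pipelines like the one in the log above, the sketch below rebuilds the gstCamera pipeline string in plain Python and maps flip-method names to the integers that `nvvidconv` expects. The name-to-integer table follows my reading of NVIDIA's accelerated GStreamer documentation, not this repo's source, so verify it against your JetPack version:

```python
# Hypothetical mapping of flip-method names to nvvidconv's integer enum
# (per NVIDIA's accelerated GStreamer guide; confirm on your JetPack).
FLIP_METHODS = {
    "none": 0,
    "counterclockwise": 1,
    "rotate-180": 2,
    "clockwise": 3,
    "horizontal": 4,
    "upper-right-diagonal": 5,
    "vertical": 6,
    "upper-left-diagonal": 7,
}

def build_csi_pipeline(sensor_id=0, width=3840, height=2160,
                       framerate=10, flip="none"):
    """Rebuild a gstCamera-style CSI pipeline string like the one logged above."""
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width=(int){width}, height=(int){height}, "
        f"framerate={framerate}/1, format=(string)NV12 ! "
        f"nvvidconv flip-method={FLIP_METHODS[flip]} ! "
        f"video/x-raw(memory:NVMM) ! appsink name=mysink"
    )

print(build_csi_pipeline())
```

Printing the result and diffing it against the `[gstreamer]` pipeline line in your own log is a quick way to see exactly which flip-method value is being requested.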
I have the same problem:
```
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device csi://0
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=2 ! video/x-raw(memory:NVMM) ! appsink name=mysink
[gstreamer] gstCamera successfully created device csi://0
[video] created gstCamera from csi://0
gstCamera video options:
-- URI: csi://0
[OpenGL] failed to open X11 server connection.
[OpenGL] failed to create X11 Window.
jetson.utils -- no output streams, creating fake null output
detectNet -- loading detection network model from:
          -- prototxt     NULL
          -- model        ./models/ssd-mobilenet.onnx
          -- input_blob   'input_0'
          -- output_cvg   'scores'
          -- output_bbox  'boxes'
          -- mean_pixel   0.000000
          -- mean_binary  NULL
          -- class_labels ./models/labels.txt
          -- threshold    0.500000
          -- batch_size   1
[TRT] TensorRT version 7.1.3
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - ONNX (extension '.onnx')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file ./models/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
[TRT] loading network plan from engine cache... ./models/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
[TRT] device GPU, loaded ./models/ssd-mobilenet.onnx
[TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[TRT] Deserialize required 5459341 microseconds.
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT]    -- layers       104
[TRT]    -- maxBatchSize 1
[TRT]    -- workspace    0
[TRT]    -- deviceMemory 20657152
[TRT]    -- bindings     3
[TRT] binding 0 -- index 0 -- name 'input_0' -- type FP32 -- in/out INPUT -- # dims 4 -- dim #0 1 (SPATIAL) -- dim #1 3 (SPATIAL) -- dim #2 300 (SPATIAL) -- dim #3 300 (SPATIAL)
[TRT] binding 1 -- index 1 -- name 'scores' -- type FP32 -- in/out OUTPUT -- # dims 3 -- dim #0 1 (SPATIAL) -- dim #1 3000 (SPATIAL) -- dim #2 6 (SPATIAL)
[TRT] binding 2 -- index 2 -- name 'boxes' -- type FP32 -- in/out OUTPUT -- # dims 3 -- dim #0 1 (SPATIAL) -- dim #1 3000 (SPATIAL) -- dim #2 4 (SPATIAL)
[TRT]
[TRT] binding to input 0 input_0  binding index: 0
[TRT] binding to input 0 input_0  dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 scores  binding index: 1
[TRT] binding to output 0 scores  dims (b=1 c=3000 h=6 w=1) size=72000
[TRT] binding to output 1 boxes  binding index: 2
[TRT] binding to output 1 boxes  dims (b=1 c=3000 h=4 w=1) size=48000
[TRT]
[TRT] device GPU, ./models/ssd-mobilenet.onnx initialized.
[TRT] detectNet -- number object classes: 6
[TRT] detectNet -- maximum bounding boxes: 3000
[TRT] detectNet -- loaded 6 class info entries
[TRT] detectNet -- number of object classes: 6
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0
[gstreamer] gstreamer message stream-start ==> pipeline0
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: Running with following settings:
   Camera index = 0
   Camera mode  = 5
   Output Stream W = 1280 H = 720
   seconds to Run    = 0
   Frame Rate = 120.000005
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstBufferManager recieve caps: video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1
[gstreamer] gstBufferManager -- recieved first frame, codec=raw format=nv12 width=1280 height=720 size=1008
[gstreamer] gstBufferManager -- recieved NVMM memory
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
[cuda]   resource already mapped (error 208) (hex 0xD0)
[cuda]   /home/cp/jetson-inference/utils/codec/gstBufferManager.cpp:288
[gstreamer] gstDecoder -- failed to retrieve next image buffer
Exception in thread Thread-3:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "init_new.py", line 58, in thread_recognition_function
    _recognition.start()
  File "/home/cp/detection/recognition.py", line 66, in start
    self.detection.start()
  File "/home/cp/detection/detection.py", line 356, in start
    img_a = self.input_a.Capture()
Exception: jetson.utils -- videoSource failed to capture image
[gstreamer] gstCamera -- stopping pipeline, transitioning to GST_STATE_NULL
GST_ARGUS: Cleaning up
CONSUMER: Done Success
GST_ARGUS: Done Success
[gstreamer] gstCamera -- pipeline stopped
```
For me the problem is the CSI connector; with a USB camera everything works.
Hi @SpsProjectNet, please see my reply on the forums about trying to disable NVMM memory in the build:
Hi @dusty-nv, when I run `cmake -DDISABLE_NVMM ../`, the following error occurred:
Hi @lingzijian, I've updated the repo in https://github.com/dusty-nv/jetson-utils/commit/0d3f59f5c0967a108ec1cd58a518ee5ad817d35b to support a proper CMake option for this instead. Can you try pulling the latest and using `cmake -DENABLE_NVMM=OFF ../` instead?
```shell
cd /path/to/your/jetson-inference/build
cmake -DENABLE_NVMM=OFF ../
make
sudo make install
```
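A side note on the earlier `cmake -DDISABLE_NVMM ../` attempt: CMake's `-D` flag generally expects a `NAME=VALUE` form, which is a likely reason that invocation errored while `-DENABLE_NVMM=OFF` works. A small, purely illustrative Python checker (not part of jetson-inference) for that argument shape:

```python
import re

# Hypothetical validator for CMake-style cache definitions. CMake accepts
# -D<NAME>=<VALUE> (optionally -D<NAME>:<TYPE>=<VALUE>); a bare -DNAME
# with no '=' is typically rejected with a command-line parse error.
_DEF_RE = re.compile(r"^-D[A-Za-z_][A-Za-z0-9_]*(?::[A-Za-z]+)?=.+$")

def is_valid_cmake_define(arg):
    return bool(_DEF_RE.match(arg))

print(is_valid_cmake_define("-DENABLE_NVMM=OFF"))  # prints True
print(is_valid_cmake_define("-DDISABLE_NVMM"))     # prints False
```

So the fix is not just the new option name: the `=OFF` value is part of the required syntax.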
https://github.com/dusty-nv/jetson-inference/issues/1255#issuecomment-973154399
Hi, I am having the same issue. I have even tried the solution mentioned above. The terminal still shows the detection result summary.
I changed the camera and am still getting this error, @dusty-nv:

```
[cuda] invalid OpenGL or DirectX context (error 219) (hex 0xDB)
[cuda] /home/xavier/jetson-inference/utils/display/glTexture.cpp:360
```
@faheemasi do you have a physical display attached to your Jetson?
What is the issue you are facing? The application runs and prints out detection results to the terminal, but doesn't show the window?
@dusty-nv I have the same problem on my Jetson Nano. I tried two cameras, a Raspberry Pi v2 and a CSI camera, but nothing happens when I run `video-viewer csi://0` or just `video-viewer` — I just get this error. I swapped the Jetson Nano, and on the other one both the CSI and Raspberry Pi cameras work. I also tried:

```shell
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
```

and a window appeared and closed instantly; that is when I got this error. Can you help me please?
Hi, it is not able to detect your camera - are you sure you have it plugged in correctly?
Hi, yes, I am sure I plugged it in correctly, because I plugged the camera into two different Jetson Nanos: on this one the camera does not work, on the other it works well.
Are you able to view the camera with the nvgstcapture-1.0 program? Have you tried shutting down and rebooting this Nano recently to reset the nvargus camera daemon?
I tried to reset the nvargus camera daemon with the command `sudo systemctl restart nvargus-daemon`, and nothing changed. nvgstcapture-1.0 does not work either. I have no idea how to fix this issue.
If you have confirmed that your camera still works and is indeed plugged in correctly, I would re-flash your Nano's SD card with a fresh JetPack image. Otherwise it starts to look like a hardware issue.
> Hi @lingzijian, I've updated the repo in dusty-nv/jetson-utils@0d3f59f to support a proper CMake option for this instead. Can you try pulling the latest and using `cmake -DENABLE_NVMM=OFF ../` instead?
>
> ```shell
> cd /path/to/your/jetson-inference/build
> cmake -DENABLE_NVMM=OFF ../
> make
> sudo make install
> ```
Hey @dusty-nv, I have recently had this issue as well using an RTSP source, and fixed it by building with NVMM disabled. Do you have any comments on why this is occurring? As I understand it, we are losing possible performance gains by not making use of NVMM.
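On the performance question, here is a rough back-of-envelope sketch (my own estimate, not a measurement from this thread) of the memory traffic added when NVMM zero-copy is disabled and each NV12 frame has to pass through CPU memory:

```python
def nv12_frame_bytes(width, height):
    # NV12 stores a full-resolution Y plane plus a half-resolution
    # interleaved UV plane: 12 bits per pixel overall.
    return width * height * 3 // 2

def copy_bandwidth_mb_s(width, height, fps):
    # Extra one-way bandwidth if every frame is copied once per capture.
    return nv12_frame_bytes(width, height) * fps / 1e6

# 1280x720 @ 30 fps, as in the log above:
print(copy_bandwidth_mb_s(1280, 720, 30))  # ~41.5 MB/s per copy
```

Tens of MB/s is small next to the Nano's memory bandwidth, so for a single 720p stream the practical cost of `-DENABLE_NVMM=OFF` is usually modest; it grows with resolution, frame rate, and the number of streams.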
In my case, I solved it by removing the camera rotate parameter.
I have the same problem; I can't rotate the camera. Any solution? Using JetPack 4.6.2.
> Hi @lingzijian, I've updated the repo in dusty-nv/jetson-utils@0d3f59f to support a proper CMake option for this instead. Can you try pulling the latest and using `cmake -DENABLE_NVMM=OFF ../` instead?
>
> ```shell
> cd /path/to/your/jetson-inference/build
> cmake -DENABLE_NVMM=OFF ../
> make
> sudo make install
> ```
#1255 (comment) — jetson-utils has the same problem when running the posenet demo. How can I solve it?