dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

Jetson Nano CSI Raspberry Pi Camera V2 upside down 180 degree #571

Closed: tanueihorng closed this issue 4 years ago

tanueihorng commented 4 years ago

I tried out the YouTube tutorial below for real-time object detection. Everything runs well except that the detector frame is upside down (rotated 180 degrees). How can I fix this?

https://www.youtube.com/watch?v=bcM5AQSAzUY

I have tried the solution below, but it didn't work: https://github.com/dusty-nv/jetson-inference/pull/336/commits/225d19b4514cb18801875b74515e3b72a5cc6f0e

felipevw commented 4 years ago

Well, what about declaring the frame object as a cv::Mat (from OpenCV) and flipping it?

Hope it helps
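
In case it helps, a minimal Python sketch of that OpenCV workaround (Python rather than C++ for brevity; the GStreamer pipeline string is only an example for a 1280x720 CSI camera, and it assumes OpenCV was built with GStreamer support):

import cv2

# Example GStreamer pipeline for a CSI camera; adjust width/height/framerate to your sensor.
pipeline = ("nvarguscamerasrc sensor-id=0 ! "
            "video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1, format=NV12 ! "
            "nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink")

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, -1)   # flipCode=-1 flips both axes, i.e. a 180-degree rotation
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) == 27:      # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()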

dusty-nv commented 4 years ago

You probably need to do a sudo make install also. Changing the flip-method argument in gstCamera.cpp should have worked.

tanueihorng commented 4 years ago

You probably need to do a sudo make install also. Changing the flip-method argument in gstCamera.cpp should have worked.

Thanks a lot ! @dusty-nv

ARLunan commented 4 years ago

Same upside-down problem with Collecting your own Datasets using camera-capture.cpp. The on-screen output log clearly shows a flip-method=2 option defined in the GStreamer command. Where is that option set? A command-line option "~/camera-capture flip-method=0" is ignored. Please provide the edit needed to change the code to flip-method=0.

dusty-nv commented 4 years ago

Hi @ARLunan, you need to change the flip-method in the source here:

https://github.com/dusty-nv/jetson-utils/blob/798c416c175d509571859c9290257bd5cce1fd63/camera/gstCamera.cpp#L416

In the next release, I have made flip-method modifiable from the command line.

ARLunan commented 4 years ago

Thanks for the response, but I would appreciate the specific location of the gstCamera.cpp file to edit and the installation process on my Nano. I note the source is in your jetson-utils GitHub repository, but not in the jetson-inference repository cloned to and compiled on my Nano. Was it downloaded and included during the jetson-inference compile? Do I clone the jetson-utils repository to my Nano, make the edit, and compile? Or what is the best action to take? Thanks.

dusty-nv commented 4 years ago

jetson-utils is a submodule; you can find it under jetson-inference/utils.

The gstCamera.cpp line is 416 (linked above).

After making the change, run make and sudo make install.


ARLunan commented 4 years ago

The edit described in your post works (sorry, I didn't realize the location of the camera folder on my Nano), and huge appreciation to you (and the other NVIDIA developers). I have 2 RPi CSI cameras and a USB C270 connected to my Nano, and it's great to easily use each one. I look forward to following the Inference tutorial and showing what I am doing to colleagues in my community who are developing DIY autonomous small robots and other vehicles. Any plans to add stereo/depth image tutorials? I'm working with available image-processing ROS code to generate a PointCloud from the stereo camera pair to use with my onboard Nano TurtleBot Create. May I also ask if you have read my post #38 on ros_deep_learning concerning an error compiling the repository clone?

samaujs commented 3 years ago

Hi, I have the inverted problem as well with the live CSI camera on the Jetson Nano B01 dev kit. How can I rebuild "imageNet" at "~/jetson-inference/build/aarch64/bin" or fix the "imagenet.py" file? I have commented out the following lines in the "utils/camera/gstCamera.cpp" file:

// if( mOptions.flipMethod == videoOptions::FLIP_NONE )
//     mOptions.flipMethod = videoOptions::FLIP_ROTATE_180;
// else if( mOptions.flipMethod == videoOptions::FLIP_ROTATE_180 )
//     mOptions.flipMethod = videoOptions::FLIP_NONE;

Is it correct?

dusty-nv commented 3 years ago

Can you run this command?

imagenet.py --input-flip=rotate-180 csi://0


dusty-nv commented 3 years ago

You also want to undo your changes to gstCamera.cpp; you should no longer need to modify that source file, because I added flip-method as a command-line option.

For more info, see here: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#command-line-arguments
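
As a rough illustration of how those command-line arguments reach the camera stream in the Python samples, here is a minimal sketch; forwarding sys.argv to videoSource is the pattern the bundled imagenet.py/detectnet.py scripts use, though details may vary between releases:

import sys
import jetson.utils

# forwarding sys.argv lets video options such as --input-flip=rotate-180
# be supplied on the command line instead of being hard-coded in the source
camera = jetson.utils.videoSource("csi://0", argv=sys.argv)

while camera.IsStreaming():
    img = camera.Capture()
    if img is None:     # capture timed out, try again
        continue
    # process img here (run inference, save it, etc.)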

samaujs commented 3 years ago

Thanks; I used to do "./imagenet --input-flip=horizontal --headless csi://0 video_output.mp4".

May I also ask: if I want to modify your files and re-compile imagenet/detectnet, which directory should I run "make" from, and is there anything I need to watch out for?

dusty-nv commented 3 years ago

May I also ask: if I want to modify your files and re-compile imagenet/detectnet, which directory should I run "make" from, and is there anything I need to watch out for?

You should be running make from your jetson-inference/build directory. You will also want to run sudo make install:

$ cd jetson-inference/build
$ make
$ sudo make install

samaujs commented 3 years ago

Thank you

bigwhitebird29 commented 2 years ago

You probably need to do a sudo make install also. Changing the flip-method argument in gstCamera.cpp should have worked.

OK. When I swapped FLIP_NONE with FLIP_ROTATE_180, my camera shows up upright but freezes instantly. If I swap both FLIP_NONE with FLIP_ROTATE_180 and FLIP_ROTATE_180 with FLIP_NONE, nothing happens; it's as if I never edited the file. So what am I missing here? I attempted to use "jetson.utils.videoSource("csi://0", argv=["--input-flip=rotate-180"])" in place of "jetson.utils.gstCamera(width,height,'0')", and while code-oss accepted that, "videoSource" doesn't recognize "CaptureRGBA()" for grabbing the frame. How do I show the frame using videoSource instead of gstCamera? I've been researching all day, trying all kinds of methods to get my camera into the upright position. Short of flipping the physical cameras over, which I'd rather not do, I'm fresh out of ideas.

dusty-nv commented 2 years ago

Hi @bigwhitebird29, can you try using the video-viewer / video-viewer.py tool first to confirm that your camera is working?

https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#input-options

video-viewer.py --input-flip=rotate-180 csi://0

If there is a problem there, please post the console log.

Here are examples of using the more recent videoSource interface:

https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md#python
https://github.com/dusty-nv/jetson-utils/blob/0d3f59f5c0967a108ec1cd58a518ee5ad817d35b/python/examples/video-viewer.py
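
For reference, a minimal sketch of the newer videoSource/videoOutput interface from the linked video-viewer.py example; Capture() takes the place of gstCamera's CaptureRGBA(), the flip value shown is just an example, and the timeout behavior of Capture() can differ between versions:

import jetson.utils

# videoSource replaces gstCamera, and Capture() replaces CaptureRGBA()
camera = jetson.utils.videoSource("csi://0", argv=["--input-flip=rotate-180"])
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()        # returns a cudaImage
    if img is None:               # capture timed out, try again
        continue
    display.Render(img)
    display.SetStatus("Video Viewer | {:d}x{:d}".format(img.width, img.height))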

Franckevicius commented 2 years ago

jetbot@jetbot:~/jetson-inference/build$ video-viewer --input-flip=rotate-180
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device csi://0
[gstreamer] gstCamera pipeline string:
[gstreamer] nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, framerate=30/1, format=(string)NV12 ! nvvidconv flip-method=0 ! video/x-raw(memory:NVMM) ! appsink name=mysink
[gstreamer] gstCamera successfully created device csi://0
[video]  created gstCamera from csi://0
------------------------------------------------
gstCamera video options:
------------------------------------------------
  -- URI: csi://0
     - protocol:  csi
     - location:  0
  -- deviceType: csi
  -- ioType:     input
  -- codec:      raw
  -- width:      1280
  -- height:     720
  -- frameRate:  30.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  1280x720
[OpenGL] glDisplay -- X window resolution:    1280x720
[OpenGL] glDisplay -- display device initialized (1280x720)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- codec:      raw
  -- width:      1280
  -- height:     720
  -- frameRate:  0.000000
  -- bitRate:    0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- rtspLatency 2000
------------------------------------------------
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvarguscamerasrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvarguscamerasrc0
[gstreamer] gstreamer message stream-start ==> pipeline0
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 5 
   Output Stream W = 1280 H = 720 
   seconds to Run    = 0 
   Frame Rate = 120.000005 
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstBufferManager recieve caps:  video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)30/1
[gstreamer] gstBufferManager -- recieved first frame, codec=raw format=nv12 width=1280 height=720 size=1008
[gstreamer] gstBufferManager -- recieved NVMM memory
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
RingBuffer -- allocated 4 buffers (2764800 bytes each, 11059200 bytes total)
video-viewer:  captured 1 frames (1280 x 720)
[OpenGL] creating 1280x720 texture (GL_RGB8 format, 2764800 bytes)
[cuda]   registered openGL texture for interop access (1280x720, GL_RGB8, 2764800 bytes)
video-viewer:  captured 2 frames (1280 x 720)
nvbuf_utils: dmabuf_fd 1052 mapped entry NOT found
nvbuf_utils: Can not get HW buffer from FD... Exiting...
NvBufferGetParams failed for dst_dmabuf_fd
nvbuffer_transform Failed
video-viewer:  captured 3 frames (1280 x 720)
[gstreamer] gstDecoder -- failed to retrieve next image buffer
video-viewer:  failed to capture video frame
[gstreamer] gstDecoder -- failed to retrieve next image buffer
video-viewer:  failed to capture video frame
[gstreamer] gstDecoder -- failed to retrieve next image buffer
video-viewer:  failed to capture video frame
Franckevicius commented 2 years ago

Forcing gstCamera.cpp to use NvBufferTransform_None, i.e. flip-method = 0 (replacing the flag with 0), also fails (JetPack 4.6).

Franckevicius commented 2 years ago

@dusty-nv After further inspection, it may be an issue with your custom mysink. Calling GStreamer directly with

gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1, format=NV12' ! nvvidconv flip-method=0 ! nvoverlaysink

shows the correct orientation without errors; it works with ximagesink as well.

CamRoni1339 commented 7 months ago

Hello @dusty-nv

I know this project is a bit old, but I was trying to build this object detection program and I ran into the same problem as the people above when testing. When running the detectnet program with my IMX219 CSI camera, the display is flipped upside down and I can't seem to get it right side up. I have tried many different methods, such as commenting out the flip methods in gstCamera.cpp, changing the values around, and running the program with added arguments such as --input-flip=rotate-180 csi://0. When I run the command you suggested (imagenet.py --input-flip=rotate-180 csi://0), it displays the right way up, but I can't get it to do the same with detectnet. I may have also done some things wrong when trying to solve the problem, since I am new to all of this.

Any help (preferably with detailed steps) would be much appreciated

Thank you so much
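
For reference, a minimal Python sketch of applying the same --input-flip option to detection from a script; it assumes the detectnet video input accepts the same video options as imagenet, and the network name is only an example:

import jetson.inference
import jetson.utils

# the flip is applied to the input stream, the same way imagenet.py does it
net = jetson.inference.detectNet("ssd-mobilenet-v2")
camera = jetson.utils.videoSource("csi://0", argv=["--input-flip=rotate-180"])
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    if img is None:               # capture timed out, try again
        continue
    detections = net.Detect(img)  # overlays boxes/labels on the image by default
    display.Render(img)
    display.SetStatus("detectNet | {:.0f} FPS".format(net.GetNetworkFPS()))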

CamRoni1339 commented 7 months ago

@dusty-nv

I have also noticed that I do not have the same gstCamera.cpp code as some of the people in the earlier part of this thread. They talk about changing "const int flipMethod = 2", but I do not have that line of code in mine. I'm not sure why this is the case, but if you could also shed some light on that, it would be great.

Thank you so much

CamRoni1339 commented 7 months ago

UPDATE + SOLUTION

To fix the problem of the video being upside down, go to the gstCamera.cpp file located in jetson-inference/utils/camera. Then go to line 158, where you will see an if and else if statement. There will be assignments of videoOptions::FLIP_NONE or FLIP_ROTATE_180. Change all of the assignments that say FLIP_ROTATE_180 to FLIP_NONE. Then cd to jetson-inference/build and run "make" and then "sudo make install". The camera should then be right side up.