jayben-2022 opened 1 year ago
Hi jayben-2022,
Thanks for opening this issue. It sounds like the cause of your problem is that the IMX390 requires thousands of I2C register writes to set the operating mode, which takes a little longer than Argus would like. The solution you linked suggests adding the set_mode_delay_ms property to the device tree. You could do so by adding a line like this in hardware/d3/templated-cameras/d3-cam-imx390-modes.dtsi:
#define IMX390_COMMON \
mclk_khz = "25000"; \
num_lanes = STR(CSI_LANES); \
tegra_sinterface = STR(TEGRA_SINTERFACE); \
discontinuous_clk = "no"; \
dpcm_enable = "false"; \
cil_settletime = "0"; \
csi_pixel_bit_depth = "12"; \
pixel_phase = "rggb"; \
active_w = "1936"; \
active_h = "1096"; \
readout_orientation = "0"; \
inherent_gain = "1"; \
serdes_pix_clk_hz = "500000000"; \
pix_clk_hz = "148500000"; \
embedded_metadata_height = "1"; \
min_hdr_ratio = "64.0"; \
max_hdr_ratio = "64.0"; \
+ set_mode_delay_ms = "500"; \
vc_id = STR(PORT_VCID)
I should also mention that there is a bug in JetPack 4.6.1 that affects starting multiple Argus streams simultaneously. About 1-5% of the time, streams will fail if started too soon after another stream, regardless of how long it takes to set the mode. NVIDIA recommended adding some delay between starting each stream, and we determined that delaying 0.125 seconds between each start-stream call will avoid the race condition. You may not be seeing this bug yet, but keep it in mind in case you see it in the future.
Thanks, Cody
Sorry if my question is too basic; I'm actually a newbie with GStreamer.
Thanks in advance, -Shams
Hello @jayben-2022,
That's a good question. As written, you are starting 7 pipelines with one gst-launch-1.0 command. There's nothing wrong with doing it this way, except that gst-launch-1.0 decides when each pipeline is constructed. I scanned through the manpage, and I didn't see any options that control when pipelines are launched.
You could break up that command so that gst-launch-1.0 is called once per pipeline. Then you could introduce delay by running sleep between each pipeline. Here is an example:
for i in {0..6}; do
  # run each pipeline in the background so all 7 stream at once;
  # set HOST_IP to the receiver's address before running
  gst-launch-1.0 -e nvarguscamerasrc sensor-id=${i} tnr-strength=1 tnr-mode=2 ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! queue ! nvv4l2vp9enc maxperf-enable=true bitrate=4000000 ! rtpvp9pay mtu=7200 ! udpsink host=${HOST_IP} port=500${i} sync=false async=false &
  sleep 0.125
done
wait
I hope this helps!
Hi @d3-cburrows,
I have tried the above-mentioned method of starting the pipelines manually (using a for loop as well as entering the commands in a terminal), but the issue is unresolved. I have found that on bootup the nvargus-daemon doesn't start/load up fully. So even if I run the argus_camera command and go to the multi-session tab, not all cameras are initialized properly. But when I restart the daemon using sudo service nvargus-daemon restart, all the cameras get initialized instantly. Is there any way to restart the daemon automatically on bootup, before initializing the DeepStream app? Thanks in advance, -Shams
@jayben-2022,
Restarting the Argus daemon at startup seems like it shouldn't be necessary because it gets started fresh by systemd during each boot. It may be possible for automated scripts or even users with good muscle memory to attempt to start Argus streams before the daemon has fully initialized. It sounds like some of the streams could be failing for this reason. A script to restart the Argus daemon could be automated with systemd, but I don't think that's going to be the most reliable solution.
If you wait an absurd amount of time (like a minute :smile:) after booting up without restarting the daemon, do more of your streams initialize the first time? Does a power cycle vs a warm reboot change the behavior?
How is your deepstream app started? If it's automated with systemd, make sure it requires the argus daemon to be started first.
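If the app is launched by a systemd unit you control, one way to enforce that ordering is a drop-in that declares a dependency on the Argus daemon. A minimal sketch (deepstream-app.service is a placeholder; substitute your actual unit name):

```ini
# /etc/systemd/system/deepstream-app.service.d/override.conf
# (hypothetical unit name; adjust to your actual service)
[Unit]
# Don't launch the app until the Argus daemon has been started
Requires=nvargus-daemon.service
After=nvargus-daemon.service
```

Note that After= only orders against the daemon process being started, not against it being fully ready, so a short ExecStartPre= delay or retry logic in the app may still be needed.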
Thanks, Cody
Hi team,
I am still facing issues with streaming 7 cameras using DeepStream. I opened a topic on the NVIDIA forum and concluded that the sensors are unstable when they are launched together. Details can be accessed here: https://forums.developer.nvidia.com/t/errors-starting-7-nvarguscamersrc-pipeline-in-r32-7-1-jetpack-4-6-1/262848/30
I flashed a Xavier with JetPack 4.6.1 and installed BSP 5.0.0 to try to reproduce the camera timeout in v4l2-ctl, but I was unable to.
Here's my setup (active_overlays=imx390rcm_0,imx390rcm_1,imx390rcm_2,imx390rcm_3,imx390rcm_4,imx390rcm_5,imx390rcm_6):
I ran v4l2-ctl --stream-mmap -d [video0 - video7] simultaneously for 30 minutes and they kept streaming at 30.03 fps. They did have a few seconds between streams starting, since I had to open a new window and enter each command.
I'm wondering if the camera is timing out from a hardware issue. When the camera times out, is it always the same camera? If so, can you isolate the issue to a specific camera module/FAKRA cable/port by swapping things around and trying other ports?
Some other things to check are the amp draw/voltage on the power supply to the 16x card; it may be browning out after a few minutes of constant load or exceeding its current rating. Also check the power mode for the Xavier; setting it to MAXN will enable all CPU cores, among other things, and rule out any throughput issues.
Hi @d3-jshaffer, Thanks for your quick response. I noticed that the fps reading in your case is exactly constant at 30.03, whereas in my case it keeps changing at each step for every camera. Does it have anything to do with the device tree configuration you mentioned (active_overlays=imx390rcm_0,imx390rcm_1,imx390rcm_2,imx390rcm_3,imx390rcm_4,imx390rcm_5,imx390rcm_6)? In my case, I'm unaware of how to do this configuration, so I haven't done it on my end.
The camera configuration is stored in /boot/extlinux/extlinux.conf and is specified through the active_overlays=XXX parameter. You have likely already done it by using a separate tool like d3-select-cameras-boot; otherwise the cameras wouldn't be loaded or available for v4l2-ctl or Argus. I would verify it is imx390rcm_X instead of imx390_X; they load the same driver, but there are minor differences in how the driver configures the sensors.
Typically, when I've seen fluctuating FPS readings, it was due to a bottleneck somewhere on the system. Try setting the power mode to MAXN (it's on the top bar). You can also do it via the command line with nvpmodel (on the AGX Xavier, mode 0 is MAXN, so sudo nvpmodel -m 0).
This will boost the clocks on the SoM and enable all 8 CPU cores; typically, only 4 are enabled with the default mode. After setting the power mode and rebooting, you can also run jetson_clocks, which will boost the clocks further to the absolute maximum of the power mode.
For further debugging, I would look at the output of top (or install htop for more user-friendly output) while starting the streams. It'll show you CPU usage, memory usage, and other performance indicators.
Hi @d3-jshaffer,
Following your guidelines, I launched all cameras.
After about 40 minutes, one camera (usually sensor_id=4) got stuck.
I tried to measure the current consumption of the D3 board while launching each camera, and the readings are as under:

no. of cameras | Current (A)
-- | --
0 (idle) | 0.22
1 | 0.27
2 | 0.33
3 | 0.39
4 | 0.44
5 | 0.50
6 | 0.55
7 | 0.61
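As a quick sanity check on those readings (a sketch with the numbers hard-coded from the table above), the draw rises almost perfectly linearly, so no single camera looks anomalous:

```shell
# Current readings (A) from the table above, indexed by camera count 0-7;
# print how much each additional camera adds, and the total camera load
echo "0.22 0.27 0.33 0.39 0.44 0.50 0.55 0.61" | awk '{
  for (i = 2; i <= NF; i++)
    printf "camera %d adds %.2f A\n", i - 1, $i - $(i - 1)
  printf "total camera load: %.2f A\n", $NF - $1
}'
```

Each camera adds roughly 0.05-0.06 A, which argues against a brown-out caused by one misbehaving module, though it doesn't rule out a supply that is marginal under the full ~0.61 A load.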
Hi,
Hardware setup:
- IMX390RCM Sunex DSL239 192 HFOV (1000843-R) with DesignCore® NVIDIA® Jetson AGX Xavier FPD-Link™ III Interface Card.
- BSP 5.0 on JetPack 4.6.1.
- Jetson AGX Xavier Development Kit.
Problem statement: I am using DeepStream to live stream 7 cameras and successfully received them on the receiving end. But whenever I launch the pipeline on a freshly booted system, it throws an error. Upon consulting with NVIDIA (https://forums.developer.nvidia.com/t/nvarguscamerasrc-timeout-error/241339), they suggested a solution (https://forums.developer.nvidia.com/t/jetson-nano-nvarguscamerasrc-capture-can-not-get-hw-buffer-error/190884/7?u=vnawani) which requires your guidance to implement, as I couldn't find the streaming initialization time taken by the sensor or how to set the property in the camera sensor device tree. Any help will be highly appreciated. Best Regards, -Shams