jackshencn opened 2 years ago
Thanks for letting us know about the issue. I too have noticed that the camera frame rendering is CPU-intensive. Short of fixing this in PHD2, I do have a couple of suggestions that might help make it a bit more usable.
@agalasso Thanks for getting back to me!
Regarding point 1, are you referring to Region of Interest grabbing from the imaging sensor? At the moment my v4l2 driver consumes 20~40% CPU because I'm using a Linux pipe, which creates a lot of context switching and memory copying between the two processes. If I implement native v4l2 ioctls with the mmap method inside phd2, there will be virtually zero CPU usage and zero memory copies: the device driver hands over the virtual address without any processing, and all transfers are done by DMA without CPU involvement.
Is there documentation on the software architecture of the display routine? I could disable some of that processing to make it faster.
are you referring to Region of Interest grabbing from the imaging sensor?
yes, exactly. If the driver supports it, PHD2 will request a ROI from the camera when PHD2's Enable Subframes option is selected.
Is there documentation on the software architecture of the display routine?
Well, there's this: https://github.com/OpenPHDGuiding/phd2/blob/master/PHD_2.0_Architecture.docx . But IMO your best bet would be to look at the code. Feel free to ask questions if you get stuck or need any tips.
Hello,
Maybe related to your problem: I pushed some improvements to the rendering code, for the gamma part (saves ~30% of CPU for me).
@agalasso I recently implemented a VGA center crop in the kernel driver of my IMX334 module. It does speed up the preview, and the v4l2-reported frame rate matches the setting (~10 FPS with my settings).
But once I turn on guiding, the RA/DEC corrections begin to overshoot due to image lag.
I looked at the delay between each Camera::Capture call, and the delay is significantly increased.
From top I do not see any phd2 thread hitting a 100% CPU time slice. So it really looks to me like PHD2 is intentionally increasing the delay between frame requests?
PHD2 does not take an exposure while the mount is moving. Here you have a series of 2500 ms moves, which is why the delay between frames increases by 2500 ms. You need to fix the guiding calibration so it converges instead of diverging to the maximum pulse duration.
@pchev I think I know what's going on. v4l2 cannot use the triggered-capture approach PHD2 expects; most v4l2 drivers only support a streaming model. So when PHD2 is not requesting a frame, an older frame captured before the mount movement sits in the v4l2 mmap buffer and pops out 2.5 seconds later. This causes PHD2 to issue another correction, thinking the image is new. I don't think v4l2 can handle that kind of triggered operation. Does PHD2 support some kind of streaming video mechanism?
The calibration error message is also due to this lag. The last step returning the RA movement back to the origin is a long move, as you describe, so the frame gets lagged and produces an angle well off the expected 90 degrees.
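The stale-buffer problem described above can be stated as a small piece of logic: after a capture request, any frame whose timestamp predates the request must be discarded. A pure simulation with hypothetical names:

```c
/* Sketch of the stale-frame problem with a streaming driver: frames
 * captured before the request time must be drained, not guided on.
 * Pure simulation; names are hypothetical. */
#include <stddef.h>

/* Returns the index of the first frame usable for a request made at
 * t_request, or -1 if every queued frame predates the request. */
int first_fresh_frame(const double *frame_ts, size_t n, double t_request)
{
    for (size_t i = 0; i < n; i++)
        if (frame_ts[i] >= t_request)
            return (int)i;
    return -1;   /* all queued frames are stale: wait for a new one */
}
```

In a real driver this corresponds to dequeuing and requeuing buffers (checking `v4l2_buffer.timestamp`) until a frame newer than the request comes out.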
I see, trying to guide on an old buffered frame cannot work.
I guide with a TIS DMK-21 camera using the INDI v4l2 driver, and it supports single long exposures. Maybe you can add that to your driver?
Also, cam_indi.cpp supports cameras that can only send a video stream. In that case it stacks the received frames over the duration of the exposure time; see CameraINDI::newBLOB and CameraINDI::StackStream.
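The core of that stacking approach is just a saturating per-pixel accumulation of short video frames until the requested exposure is covered. A simplified sketch of the idea (hypothetical helper, not the actual CameraINDI::StackStream code):

```c
/* Sketch of stream stacking: sum each incoming short video frame
 * into a 16-bit accumulator, clamping on overflow, until the frames
 * received cover the requested exposure duration. */
#include <stddef.h>
#include <stdint.h>

void stack_frame(uint16_t *acc, const uint8_t *frame, size_t npix)
{
    for (size_t i = 0; i < npix; i++) {
        uint32_t v = (uint32_t)acc[i] + frame[i];
        acc[i] = v > 65535 ? 65535 : (uint16_t)v;  /* saturate at 16 bits */
    }
}
```

The caller would loop, calling `stack_frame` for each frame until `frames_received * frame_duration >= exposure_time`, then hand the accumulator to the guider as a single exposure.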
@pchev The problem here is that neither the INDI v4l2 driver nor PHD2's native v4l2 driver supports the more advanced DMA engines in recent SoCs. These recent ISPs, unlike the Raspberry Pi's, are multi-planar: the V4L2 device capability query returns V4L2_CAP_VIDEO_CAPTURE_MPLANE instead of the V4L2_CAP_VIDEO_CAPTURE returned by a USB webcam. At the moment I'm relying on a v4l2-ctl pipe to capture data.
I successfully implemented a v4l2 mplane driver in my own fork. Tested with a 190 mm guide scope and a custom-designed IMX334 module with 2 µm pixels; it works pretty well, ~0.6" RMS.
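The single-planar/multi-planar distinction above comes straight from the capabilities bitmask that `VIDIOC_QUERYCAP` returns. A small sketch of how a driver might branch on it; the constants are the values defined in `<linux/videodev2.h>`:

```c
/* Sketch: classifying a capture device from the VIDIOC_QUERYCAP
 * capabilities bitmask. Constant values copied from videodev2.h. */
#include <stdint.h>

#define CAP_VIDEO_CAPTURE        0x00000001u  /* V4L2_CAP_VIDEO_CAPTURE */
#define CAP_VIDEO_CAPTURE_MPLANE 0x00001000u  /* V4L2_CAP_VIDEO_CAPTURE_MPLANE */

/* Returns 1 for a multi-planar device (modern SoC ISPs), 0 for
 * classic single-planar capture (e.g. USB webcams), -1 for neither. */
int capture_kind(uint32_t caps)
{
    if (caps & CAP_VIDEO_CAPTURE_MPLANE) return 1;
    if (caps & CAP_VIDEO_CAPTURE)        return 0;
    return -1;
}
```

An mplane-aware driver then uses `V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE` buffer types and per-plane mmap offsets instead of the single-planar variants.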
Hi,
I'm migrating phd2 onto an embedded platform with relatively high performance (RK3399: 4x A53 and 2x A72). I also created a new camera driver that calls v4l2-ctl and grabs images through a pipe. This driver thread only consumes 20% CPU, but the main thread consumes 100% with significant lag.
The frame size is 8 MP (a custom-designed sensor board with an IMX334) at 5 Hz (0.2 s exposure), but frames are typically delayed by more than two seconds (and apparently frames are being dropped). Is there an option to disable the display processing to reduce CPU usage? I guess the auto gamma and resizing are consuming far too much computing power. I tried a VGA webcam through the indi-v4l2-ccd framework, and even that consumes 50% CPU.
Any thoughts on making this suitable for a non-x86 SoC?
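One cheap way to cut the display-side cost mentioned above would be to bin the frame before it ever reaches the rendering path: 2x2 averaging quarters the pixel count. This is a hypothetical pre-display step, not an existing PHD2 option:

```c
/* Sketch: 2x2 software binning before display. Each output pixel is
 * the average of a 2x2 block of the source, quartering the data the
 * rendering (gamma/resize) path must process. Hypothetical helper. */
#include <stddef.h>
#include <stdint.h>

void bin2x2(const uint8_t *src, int w, int h, uint8_t *dst)
{
    for (int y = 0; y < h / 2; y++)
        for (int x = 0; x < w / 2; x++) {
            int s = src[2*y*w + 2*x]     + src[2*y*w + 2*x + 1]
                  + src[(2*y+1)*w + 2*x] + src[(2*y+1)*w + 2*x + 1];
            dst[y*(w/2) + x] = (uint8_t)(s / 4);  /* average of the block */
        }
}
```

Guiding math could still run on the full-resolution frame; only the preview would be downsampled.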