UbiquityRobotics / raspicam_node

ROS node for the Raspberry Pi camera module
BSD 3-Clause "New" or "Revised" License

Want both raw and compressed topics #39

Closed: bthanos closed this issue 5 years ago

bthanos commented 6 years ago

In my project I need the raw data from the Pi camera sampled and processed using CV. I find that the republish node's CPU utilization (and with it power and heat) climbs sharply at larger frame sizes. I need to record high-res images at 10 fps and downscale them for image analysis.

It's really useful to have the hardware JPEG encoding, but it would be better to make a separate JPEG encoder node that takes raw input and uses the Pi's hardware to encode.

It would also be very useful to add the Pi's H.264 hardware encoder to the node, or maybe a new node needs to be developed for that.

Good stuff, but I'm going to have to try out dbangold's raspicam_node, which offers raw output.
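
For reference, the republish workaround described above is typically run as `rosrun image_transport republish compressed in:=raspicam_node/image raw out:=raspicam_node/image_raw`. Below is a minimal sketch of doing the JPEG decode in-process instead, which avoids the extra publish/subscribe hop; the topic name is an assumption, and the CPU cost of the JPEG decode itself remains, which is exactly why a true raw topic is being requested here.

```cpp
// Sketch: decode raspicam_node's compressed topic in-process and downscale
// for analysis, instead of running a separate republish node.
// The topic name is an assumption; adjust it to your launch configuration.
#include <ros/ros.h>
#include <sensor_msgs/CompressedImage.h>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

void imageCb(const sensor_msgs::CompressedImageConstPtr& msg) {
  // Decode the JPEG payload directly, with no intermediate raw topic.
  cv::Mat full = cv::imdecode(cv::Mat(msg->data), cv::IMREAD_COLOR);
  if (full.empty()) return;

  // Downscale for CV processing; the full-resolution JPEGs can still be
  // recorded elsewhere straight off the compressed topic.
  cv::Mat scaled;
  cv::resize(full, scaled, cv::Size(), 0.25, 0.25, cv::INTER_AREA);
  // ... run the analysis on `scaled` ...
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "raspicam_cv_consumer");
  ros::NodeHandle nh;
  ros::Subscriber sub =
      nh.subscribe("raspicam_node/image/compressed", 1, imageCb);
  ros::spin();
  return 0;
}
```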

rohbotics commented 6 years ago

I think adding a parameter to support raw publishing would be pretty useful; I will try to figure out the complexity involved.
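
A minimal sketch of what such an option might look like; the parameter name `enable_raw_pub` is hypothetical, not an existing raspicam_node parameter.

```cpp
// Sketch: gate raw publishing behind a private ROS parameter read at
// startup. The parameter name is hypothetical.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "raspicam_node_sketch");
  ros::NodeHandle nh;
  ros::NodeHandle pnh("~");

  bool enable_raw_pub = false;
  pnh.param<bool>("enable_raw_pub", enable_raw_pub, false);

  // Only advertise the raw topic when asked, so the default pipeline
  // (hardware JPEG only) is unaffected.
  ros::Publisher raw_pub;
  if (enable_raw_pub)
    raw_pub = nh.advertise<sensor_msgs::Image>("image_raw", 1);

  ros::spin();
  return 0;
}
```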

I would have to look into having separate encoder nodes. H.264 relies on interframe compression, so I don't think it would work with the standard image_transport in ROS. Maybe a node that can record a ROS image topic to a file, but that is getting a bit out of scope for us.
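
On the recording idea, a rough sketch of such a node using OpenCV's `cv::VideoWriter` with the MJPG fourcc (AVI-wrapped motion JPEG, so no interframe state); the topic name, filename, and frame rate are assumptions.

```cpp
// Sketch: record a ROS image topic to an AVI of motion JPEGs.
// Topic name, output filename, and fps below are assumptions.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <opencv2/videoio.hpp>

cv::VideoWriter writer;

void imageCb(const sensor_msgs::ImageConstPtr& msg) {
  cv_bridge::CvImageConstPtr cv = cv_bridge::toCvShare(msg, "bgr8");
  if (!writer.isOpened()) {
    // Open lazily so the frame size comes from the first message.
    writer.open("recording.avi", cv::VideoWriter::fourcc('M', 'J', 'P', 'G'),
                10.0, cv->image.size());
  }
  writer.write(cv->image);
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "image_to_avi");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("image_raw", 1, imageCb);
  ros::spin();
  return 0;
}
```

The `video_recorder` node in the `image_view` package provides similar functionality out of the box.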

bthanos commented 6 years ago

Rohan, thanks, sounds good. I think there is a way to make AVI-wrapped motion JPEGs using video_record, so I can work with that. I just need a raw image topic I can use for the CV. Much appreciated. If you need to split the raw and compressed outputs into two nodes, that will work as well.

bthanos commented 6 years ago

Rohan, have you been able to assess this? Thanks, Bill

rohbotics commented 6 years ago

Looking into this: a separate raw publishing node would have to pull data directly from camera_video_port and not set up the encoder at all. This could easily take quite some time to develop, test, and debug.
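
Concretely, that path means enabling the video port with a buffer callback instead of connecting it to the encoder. A condensed sketch of the callback side follows; port configuration, pool creation, and error handling are omitted, and `pool` and `publish_raw` are placeholders, not existing raspicam_node names.

```cpp
// Sketch of an MMAL video-port callback that hands raw I420 frames to a
// publisher, bypassing the JPEG encoder entirely. `pool` stands in for the
// buffer pool created with mmal_port_pool_create() during port setup.
#include "interface/mmal/mmal.h"
#include "interface/mmal/util/mmal_util.h"

static MMAL_POOL_T* pool;  // placeholder: attached to the port at setup

static void video_cb(MMAL_PORT_T* port, MMAL_BUFFER_HEADER_T* buffer) {
  mmal_buffer_header_mem_lock(buffer);
  // buffer->data holds one I420 frame (stride padded by the firmware);
  // copy it into a sensor_msgs::Image and publish it here.
  // publish_raw(buffer->data, buffer->length);  // placeholder
  mmal_buffer_header_mem_unlock(buffer);
  mmal_buffer_header_release(buffer);

  // Return a fresh buffer to the port so capture keeps flowing.
  if (port->is_enabled) {
    MMAL_BUFFER_HEADER_T* next = mmal_queue_get(pool->queue);
    if (next) mmal_port_send_buffer(port, next);
  }
}
```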

Currently we are focused on tasks critical to the launch of our robots in a few months, so we will not be able to work on this issue for some time.

If you want to work on this and submit a PR, we would be happy to review and merge it in.

davecrawley commented 6 years ago

Please do try to look at the source code yourself, and consider issuing a Pull Request through GitHub. If this mode can be put in and works properly without degrading performance (as I'd expect), we'll promptly merge it into the main branch, which we obviously intend to continue to support.

David

dimatura commented 6 years ago

For what it's worth, ROS does support transports with interframe compression; the theora_image_transport plugin for image_transport is an example. That said, this option is relatively annoying to work with, as decoding a message becomes dependent on past messages.
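
For reference, selecting the theora transport on the subscriber side looks roughly like this; the topic name is an assumption.

```cpp
// Subscribe to an image topic over the theora transport. Decoding is
// stateful: frames that arrive before the stream headers or the first
// keyframe cannot be decoded, which is the awkwardness mentioned above.
#include <ros/ros.h>
#include <image_transport/image_transport.h>

void imageCb(const sensor_msgs::ImageConstPtr& msg) {
  // Frames arrive here already decoded by the transport plugin.
}

int main(int argc, char** argv) {
  ros::init(argc, argv, "theora_listener");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  image_transport::Subscriber sub = it.subscribe(
      "raspicam_node/image", 1, imageCb,
      image_transport::TransportHints("theora"));
  ros::spin();
  return 0;
}
```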

rohbotics commented 5 years ago

As part of the refactor I have been looking into this. It may be possible to use a splitter on the camera_component output, with one port going to the encoding pipeline and another to a raw image publisher.

Note that this requires a YUV (I420) to RGB conversion step, which the splitter may be able to perform.

https://www.raspberrypi.org/forums/viewtopic.php?t=189004
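
A condensed sketch of that wiring under those assumptions; error checking, camera configuration, and the buffer callback are omitted, and whether the splitter performs the I420-to-RGB conversion on its output depends on firmware, per the forum thread above.

```cpp
// Sketch: camera video port -> splitter; splitter output 0 feeds the
// existing hardware JPEG encoder, output 1 is read directly as raw frames.
// Error checking, camera/encoder configuration, pool setup, and the buffer
// callback are omitted for brevity.
#include "bcm_host.h"
#include "interface/mmal/mmal.h"
#include "interface/mmal/util/mmal_default_components.h"
#include "interface/mmal/util/mmal_connection.h"

int main() {
  bcm_host_init();

  MMAL_COMPONENT_T *camera, *splitter, *encoder;
  mmal_component_create(MMAL_COMPONENT_DEFAULT_CAMERA, &camera);
  mmal_component_create(MMAL_COMPONENT_DEFAULT_VIDEO_SPLITTER, &splitter);
  mmal_component_create(MMAL_COMPONENT_DEFAULT_IMAGE_ENCODER, &encoder);

  // ... configure the camera video port format (I420, resolution, fps) ...

  // Tunnel the camera's video port (output[1]) into the splitter.
  MMAL_CONNECTION_T *cam_to_split, *split_to_enc;
  mmal_connection_create(&cam_to_split, camera->output[1], splitter->input[0],
                         MMAL_CONNECTION_FLAG_TUNNELLING |
                         MMAL_CONNECTION_FLAG_ALLOCATION_ON_INPUT);
  mmal_connection_enable(cam_to_split);

  // Splitter outputs inherit the input format; copy it to each output we
  // use. Output 1 is switched to RGB24 so the YUV->RGB conversion happens
  // on the GPU (firmware permitting, per the forum thread above).
  mmal_format_copy(splitter->output[0]->format, splitter->input[0]->format);
  mmal_port_format_commit(splitter->output[0]);
  mmal_format_copy(splitter->output[1]->format, splitter->input[0]->format);
  splitter->output[1]->format->encoding = MMAL_ENCODING_RGB24;
  mmal_port_format_commit(splitter->output[1]);

  // Output 0 keeps feeding the hardware JPEG pipeline as before.
  mmal_connection_create(&split_to_enc, splitter->output[0], encoder->input[0],
                         MMAL_CONNECTION_FLAG_TUNNELLING |
                         MMAL_CONNECTION_FLAG_ALLOCATION_ON_INPUT);
  mmal_connection_enable(split_to_enc);

  // ... enable the components, attach a pool and callback to
  //     splitter->output[1], then set MMAL_PARAMETER_CAPTURE on the
  //     camera video port to start frames flowing ...
  return 0;
}
```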