While developing the distance calculator I have been getting lots of errors, and I have a strong suspicion it's due to the quality of the initial frame. I took the time to do some research, and after a whole day of messing around with sockets and Python code, I think I have a working solution 🎉
In this PR I add a way to take screenshots on the PiCamera and send them to the topside computer through a TCP file receiver. This allows the software to take screenshots with over twice as many pixels ((1088*1920)/(720*1296) ~= 2.23) and to get images which haven't been compressed with the H264 encoding. This results in images with about 5 times the amount of information (650KB -> 3MB).
How was this achieved?
TcpFileReceiver is a class which hosts a server port and listens for incoming clients. Clients first send a single string to the TcpFileReceiver stating the desired path of the file. The sender then sends the file over the same connection. The TcpFileReceiver saves the file to the specified path, closes the connection, and then begins waiting for the next file.
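A minimal sketch of that protocol, not the actual TcpFileReceiver implementation; the port number and the newline-terminated framing of the path string are assumptions for illustration:

```python
import socket

def serve(port=5005):  # port number is illustrative, the real one lives in the launch config
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("", port))
        server.listen(1)
        while True:                       # after each transfer, wait for the next file
            conn, _ = server.accept()
            with conn:
                data = b""
                while b"\n" not in data:  # first line: desired path of the file
                    data += conn.recv(4096)
                header, _, rest = data.partition(b"\n")
                path = header.decode().strip()
                with open(path, "wb") as out:
                    out.write(rest)       # bytes already read past the header
                    while chunk := conn.recv(4096):
                        out.write(chunk)  # remainder of the file until the sender closes
```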
The eer-camera-feed script has been given a handler that, when signaled with SIGUSR2, reads a path name from a predetermined location and sends a screenshot (taken with the camera.capture method) to the topside over a TCP connection, using the protocol defined for TcpFileReceiver. It is able to produce a higher resolution image because the base stream has been increased from (720*1296) to (1088*1920). The capture_continuous method uses the resize option to inject a resize block into the H264 pipeline. The bottleneck for image transfer is the topside, so although the PiCamera is really capturing a (1088*1920) stream, the effect should hopefully be negligible on the topside. When taking a screenshot the video feed will pause while the image is taken; once the image is taken, capture_continuous resumes streaming the H264 data.
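A rough sketch of the camera-side flow, not the actual eer-camera-feed script; the topside address, port, runtime-file path, and temporary file name below are all assumptions:

```python
import signal
import socket

import picamera

TOPSIDE_ADDRESS = ("192.168.1.2", 5005)    # assumed address of the TcpFileReceiver
REQUEST_FILE = "/run/eer/screenshot-path"  # assumed runtime file holding the target path

camera = picamera.PiCamera(resolution=(1920, 1088))

def send_screenshot(signum, frame):
    """On SIGUSR2: read the requested path, grab a full-resolution still, ship it topside."""
    with open(REQUEST_FILE) as f:
        remote_path = f.read().strip()

    # The still capture briefly pauses the video feed while the image is taken
    camera.capture("/tmp/screenshot.jpg", use_video_port=False)

    with socket.create_connection(TOPSIDE_ADDRESS) as sock:
        # Protocol sketch: send the desired path first, then the raw file bytes
        sock.sendall(remote_path.encode() + b"\n")
        with open("/tmp/screenshot.jpg", "rb") as img:
            sock.sendall(img.read())

signal.signal(signal.SIGUSR2, send_screenshot)
```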
The port for the receiver has been added to the launch config so any device can access it.
To trigger a file to be saved, new CameraCaptureValueA and CameraCaptureValueB objects have been added. These objects contain a path; the Picamera classes subscribe to them and forward this information to the eer-camera-feed script via a runtime file and the systemd service kill signal SIGUSR2.
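A sketch of that hand-off, assuming (hypothetically) that the eer-camera-feed script runs as a systemd unit named eer-camera-feed and reads the requested path from /run/eer/screenshot-path:

```python
import subprocess

def request_screenshot(path: str) -> None:
    # Write the desired file path to the runtime file read by the SIGUSR2 handler
    with open("/run/eer/screenshot-path", "w") as f:
        f.write(path)
    # Deliver SIGUSR2 to the eer-camera-feed unit via systemd
    subprocess.run(
        ["systemctl", "kill", "--signal=SIGUSR2", "eer-camera-feed"],
        check=True,
    )
```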
Now that the VideoDecoder is no longer required when taking screenshots, the dependencies between the VideoDecoder and the CameraCalibration object have been removed.
Finally, when requesting a screenshot, the developer can use the DirectoryUtil#observe method to wait for the file downloaded by the TcpFileReceiver, allowing blocking file requests with little code overhead.
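Conceptually the wait looks like the polling sketch below; this is only an illustration of the blocking behaviour, not the actual DirectoryUtil#observe implementation:

```python
import os
import time

def wait_for_file(path: str, timeout: float = 10.0, poll: float = 0.1) -> bool:
    """Block until the TcpFileReceiver has written the requested file, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll)
    return False
```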
Extra note: I removed SourceController#manageMultiViewModel from the MainViewController. I was getting carried away with what I could do with it and have replaced it with a more readable/sane solution (very similar to how the Rov object waits for its heartbeats).