CT83 opened this issue 6 years ago
It’s late at night on the West Coast right now, so I’m just going to post a short comment for now.
Very short answer: my data collection process really sucks (because it’s unintuitive) and I’m currently refactoring it based on best practices from here: http://docs.donkeycar.com/guide/get_driving/. If you scroll down about halfway on that page you’ll see a really slick mobile web UI that you can use to gather training data. The full refactoring will most likely take another 5-6 weeks. I highly recommend looking at the donkeycar repo in general.
The long answer is sort of in the main readme of my repo, but if that’s insufficient, a better answer will take me too long to type on my phone right now. I’ll reply back to this issue with better instructions once I’ve finished the refactoring.
Okay, I am now reading more into the code and the readme. Respond with a thumbs up so I know I am going in the right direction.
Use stream_mjpeg_video.py and display it with the OpenCV streamer from util.py to confirm it is working. save_streaming_video_data.py saves the streamed video on the local PC while the server logs the keystrokes.
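For reference, here is a minimal sketch of pulling an MJPEG stream into OpenCV on the PC. The URL and port are placeholders for whatever endpoint stream_mjpeg_video.py actually serves in your setup:

import cv2

# Placeholder URL; point this at the MJPEG endpoint your Pi serves.
STREAM_URL = "http://192.168.1.3:8080/stream.mjpg"

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError("Could not open stream: " + STREAM_URL)

while True:
    ok, frame = cap.read()
    if not ok:
        break  # stream ended or dropped
    cv2.imshow("pi-stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()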
Not really relevant here, but adding it in case someone else stumbles on this problem: I was using the code here to capture the frames and stream them back to the server.
I had used
for _ in camera.capture_continuous(stream, 'jpeg'):
instead of
for _ in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
That was the source of the problem!! Getting smooth 20-30 FPS now!
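In case it helps anyone else, here is a minimal sketch of the fixed capture loop. The resolution and framerate are just example values, and the send step is left as a comment:

import io
import picamera

# use_video_port=True reads from the camera's video port, which skips the
# slow still-image pipeline and is what makes 20-30 FPS possible.
with picamera.PiCamera(resolution=(320, 240), framerate=30) as camera:
    stream = io.BytesIO()
    for _ in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
        stream.seek(0)
        jpeg_frame = stream.read()  # send this frame to the server here
        stream.seek(0)
        stream.truncate()           # reset the buffer for the next frame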
Thanks a bunch for all your help @RyanZotti
Could you please give me a clearer overview of how the training data was collected? A brief rundown of your tech stack for streaming, and of the Python files that were used, would greatly help. I am currently using
raspivid -n -t 0 -rot 270 -w 960 -h 720 -fps 30 -b 6000000 -o - | gst-launch-1.0 -e -vvvv fdsrc ! h264parse ! rtph264pay pt=96 config-interval=5 ! udpsink host=192.168.1.2 port=5000
to stream the video, then view it using GStreamer:
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false
I am also trying Hamuchiwa's approach, with no luck yet. Which collection method would be better?
That seems to work fine, but I don't know how I would pipe/send this to OpenCV. A clearer explanation of your method would greatly help me.
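My best guess is to swap fpsdisplaysink for an appsink and hand the pipeline string to cv2.VideoCapture, something like the untested sketch below (it assumes OpenCV was built with GStreamer support), but I'd appreciate confirmation that this is the right direction:

import cv2

# Same receive pipeline as above, but ending in appsink so OpenCV gets the
# frames instead of a display window (videoconvert yields BGR for OpenCV).
pipeline = (
    "udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! "
    "rtph264depay ! avdec_h264 ! videoconvert ! appsink"
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break  # stream ended or dropped
    cv2.imshow("udp-stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()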