mslavescu opened this issue 7 years ago
Has anyone played with DeepAnomaly? It seems to handle generic scenes, not only the ones it was trained on:
https://twitter.com/GTARobotics/status/853598625162817536?s=09
Excellent presentation!
Passive stereo vision with deep learning https://www.slideshare.net/mobile/yuhuang/passive-stereo-vision-with-deep-learning
Must try for OSSDC Stereo Smart Camera!
https://twitter.com/GTARobotics/status/853615370342674433
This will help the OSSDC LKAS implementation a lot.
A very interesting approach: "Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention" by Jinkyu Kim and John Canny (submitted 30 Mar 2017): https://arxiv.org/abs/1703.10631
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable - they should provide easy-to-interpret rationales for their behavior - so that passengers, insurance companies, law enforcement, developers etc., can understand what triggered a particular behavior. Here we explore the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). Our approach is two-stage. In the first stage, we use a visual attention model to train a convolution network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior. We demonstrate the effectiveness of our model on three datasets totaling 16 hours of driving. We first show that training with attention does not degrade the performance of the end-to-end network. Then we show that the network causally cues on a variety of features that are used by humans while driving.
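The first stage described above (soft spatial attention over conv features) can be sketched in a few lines. This is a heavy simplification, not the paper's actual model: the feature map and the scoring vector below are random stand-ins for learned quantities.

```python
import numpy as np

def spatial_attention(features, score_weights):
    """Weight each spatial cell of a conv feature map by a softmax score.

    features: (H, W, C) activation map; score_weights: (C,) scoring vector.
    Both are illustrative stand-ins for quantities learned end-to-end.
    Returns the (H, W) attention map and the attended context vector.
    """
    h, w, c = features.shape
    flat = features.reshape(-1, c)
    scores = flat @ score_weights                 # one score per location
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over H*W locations
    attended = (flat * weights[:, None]).sum(axis=0)
    return weights.reshape(h, w), attended

rng = np.random.default_rng(0)
attn_map, ctx = spatial_attention(rng.standard_normal((4, 5, 8)),
                                  rng.standard_normal(8))
print(attn_map.shape, ctx.shape)
```

The attention map is exactly the "highlighted regions" overlay; the causal filtering step in the paper then tests which of those regions actually change the steering output when perturbed.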
Must extend this project; it covers the following features relevant to LKAS. It would also be good to port these to Android for an easier-to-deploy #4-mvp-dash-camera:
https://github.com/OSSDC/DAPrototype
This project is an attempt to create a standalone, windshield mounted driver assist unit with the following functionality:
LDW (Lane Departure Warning)
FCW (Forward Collision Warning)
Tailgate warning
Driver pull-ahead warning
Dashcam functionality (with GPS & timestamp overlay)
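As a toy illustration of the LDW feature above (not DAPrototype's actual code), a departure check reduces to measuring the camera's offset from the detected lane centre at the bottom of the image. The function name, arguments, and threshold below are illustrative assumptions:

```python
def lane_departure_warning(left_x, right_x, image_width, threshold=0.15):
    """Toy LDW check: warn when the camera (assumed mounted at the image
    centre) drifts toward a lane boundary.

    left_x, right_x: detected lane-line x-positions at the bottom image row.
    threshold: fraction of the lane width at which to warn (assumed value).
    """
    lane_width = right_x - left_x
    lane_center = (left_x + right_x) / 2.0
    vehicle_center = image_width / 2.0
    offset = (vehicle_center - lane_center) / lane_width  # signed, in lane widths
    if offset > threshold:
        return "warn_right"   # drifting toward the right line
    if offset < -threshold:
        return "warn_left"
    return "ok"

print(lane_departure_warning(200, 440, 640))  # centred in lane -> "ok"
print(lane_departure_warning(120, 360, 640))  # shifted right -> "warn_right"
```

In a real pipeline the `left_x`/`right_x` inputs would come from a lane-line detector (e.g. edge detection plus a Hough transform), which is where the free-space methods discussed below matter when markings are missing.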
The idea of this MVP is to provide state-of-the-art, reliable free-space detection and lane-keeping methods, even in situations where lane markings are not available. Part of the computation may be moved to the edge, for example to smart cameras powered by FPGAs. We cover smart cameras in this project: https://github.com/OSSDC/OSSDC-SmartCamera/issues/1
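One marker-free, stixel-style approach is to decide, per image column, how far the drivable ground plane extends in a stereo disparity map: scan up from the bottom row and stop where the measured disparity departs from the expected ground-plane disparity. A minimal sketch, assuming a per-row ground-disparity model is already available (all names here are illustrative):

```python
import numpy as np

def free_space_boundary(disparity, ground_disp, tol=1.0):
    """For each column, return the row where free space ends (obstacle base).

    disparity:   (H, W) disparity image.
    ground_disp: (H,) expected ground-plane disparity per row, e.g. fitted
                 from a v-disparity histogram (taken as given here).
    tol:         allowed deviation before a pixel counts as an obstacle.
    """
    h, w = disparity.shape
    boundary = np.zeros(w, dtype=int)
    for col in range(w):
        row = h - 1
        while row >= 0 and abs(disparity[row, col] - ground_disp[row]) <= tol:
            row -= 1
        boundary[col] = row + 1   # first row that still matched the ground
    return boundary

# Synthetic check: ground disparity grows toward the bottom row,
# with an obstacle (near-constant disparity) inserted in column 2.
ground = np.arange(10, dtype=float)
disp = np.tile(ground[:, None], (1, 5))
disp[3:7, 2] = 8.0
print(free_space_boundary(disp, ground))
```

This gives exactly the free-space contour shown in the stixel videos below; the stixels themselves are then the obstacle segments stacked on top of each boundary row.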
Here are some videos to get the discussion started. We assume the use of stereo cameras, but we may also be able to do it with a pseudo-stereo or multi-camera setup:
Computer Games Empower Deep Learning Research | Two Minute Papers https://m.youtube.com/watch?v=QkqNzrsaxYc
SegNet: Road Scene Segmentation https://m.youtube.com/watch?v=CxanE_W46ts
Stixels: Free Space and Object Segmentation In Traffic Environments https://m.youtube.com/watch?v=7BtlB8rEqrY
Free-space Computation on Bad Weather https://m.youtube.com/watch?v=e6O-Gul3LzQ
Real-Time Stereo Vision For ADAS : Stixel 160311 https://m.youtube.com/watch?v=0KUAfZqiT-w
Stixel, good weather (2010-07-27_111204) https://m.youtube.com/watch?v=VPvW81tnaFc
Stixel, bad weather (2010-07-27_105634) https://m.youtube.com/watch?v=mmKeTGxFUcA https://m.youtube.com/watch?v=DUCsR24TAbs
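The per-row ground-disparity model that these stixel methods rely on is usually estimated from a v-disparity histogram, in which the ground plane appears as a strong slanted line. A minimal sketch of building that histogram (the line fit itself, e.g. via Hough or RANSAC, is left out):

```python
import numpy as np

def v_disparity(disparity, max_disp=64):
    """Build a v-disparity histogram from a disparity image.

    For each image row v, count how many pixels have each integer disparity.
    The ground plane shows up as a slanted line in the result, which can be
    fitted to obtain the per-row ground disparity used for free-space detection.
    """
    h = disparity.shape[0]
    hist = np.zeros((h, max_disp), dtype=int)
    d = np.clip(disparity.astype(int), 0, max_disp - 1)
    for v in range(h):
        hist[v] = np.bincount(d[v], minlength=max_disp)
    return hist

# Synthetic flat-ground scene: every pixel in row v has disparity v,
# so the histogram concentrates on the diagonal.
disp = np.tile(np.arange(8)[:, None], (1, 6))
hist = v_disparity(disp, max_disp=16)
```

This is exactly the representation the bad-weather videos exploit: the v-disparity line stays fittable even when lane markings are invisible.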
This is from psychology, to understand how we see depth:
Monocular Depth Cues https://m.youtube.com/watch?v=tbhTHaPKM5I