Commit d4111bf --- Sergey Morozov, committed Jan 28, 2019 (1 parent: 61bf056). Showing 3 changed files with 196 additions and 196 deletions.
This is the project repo for the final project of the Udacity Self-Driving Car Nanodegree: Programming a Real Self-Driving Car. For more information about the project, see the project introduction [here](https://classroom.udacity.com/nanodegrees/nd013/parts/6047fe34-d93c-4f50-8336-b70ef10cb4b2/modules/e1a23b06-329a-4684-a717-ad476f0d8dff/lessons/462c933d-9f24-42d3-8bdc-a08a5fc866e4/concepts/5ab4b122-83e6-436d-850f-9f4d26627fd9).

Please use **one** of the two installation options, either native **or** docker installation.
### Native Installation

* Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. [Ubuntu downloads can be found here](https://www.ubuntu.com/download/desktop).
* If using a Virtual Machine to install Ubuntu, use the following configuration at minimum:
  * 2 CPU
  * 2 GB system memory
  * 25 GB of free hard drive space

The Udacity-provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using it.
||
* Follow these instructions to install ROS | ||
* [ROS Kinetic](http://wiki.ros.org/kinetic/Installation/Ubuntu) if you have Ubuntu 16.04. | ||
* [ROS Indigo](http://wiki.ros.org/indigo/Installation/Ubuntu) if you have Ubuntu 14.04. | ||
* [Dataspeed DBW](https://bitbucket.org/DataspeedInc/dbw_mkz_ros) | ||
* Use this option to install the SDK on a workstation that already has ROS installed: [One Line SDK Install (binary)](https://bitbucket.org/DataspeedInc/dbw_mkz_ros/src/81e63fcc335d7b64139d7482017d6a97b405e250/ROS_SETUP.md?fileviewer=file-view-default) | ||
* Download the [Udacity Simulator](https://github.com/udacity/CarND-Capstone/releases). | ||
|
||
### Docker Installation
[Install Docker](https://docs.docker.com/engine/installation/)

Build the Docker container
```bash
docker build . -t capstone
```

Run the Docker container
```bash
docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone
```
### Port Forwarding
To set up port forwarding, please refer to the [instructions from term 2](https://classroom.udacity.com/nanodegrees/nd013/parts/40f38239-66b6-46ec-ae68-03afd8a601c8/modules/0949fca6-b379-42af-a919-ee50aa304e6a/lessons/f758c44c-5e40-4e01-93b5-1a82aa4e044f/concepts/16cf4a78-4fc7-49e1-8621-3450ca938b77).
### Usage

1. Clone the project repository
```bash
git clone https://github.com/udacity/CarND-Capstone.git
```

2. Install Python dependencies
```bash
cd CarND-Capstone
pip install -r requirements.txt
```
3. Make and run styx
```bash
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
```
4. Run the simulator
### Real world testing
1. Download the [training bag](https://s3-us-west-1.amazonaws.com/udacity-selfdrivingcar/traffic_light_bag_file.zip) that was recorded on the Udacity self-driving car.
2. Unzip the file
```bash
unzip traffic_light_bag_file.zip
```
3. Play the bag file
```bash
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
```
4. Launch your project in site mode
```bash
cd CarND-Capstone/ros
roslaunch launch/site.launch
```
5. Confirm that traffic light detection works on real-life images
# Self-Driving Car using ROS
#### Udacity Self-Driving Car Engineer Nanodegree --- Capstone Project
## Team 4Tzones



| Team Member | Email | LinkedIn |
| :---: | :---: | :---: |
| Mohamed Elgeweily | [email protected] | https://www.linkedin.com/in/mohamed-elgeweily-05372377 |
| Jerry Tan Si Kai | [email protected] | https://www.linkedin.com/in/thejerrytan |
| Karthikeya Subbarao | [email protected] | https://www.linkedin.com/in/karthikeyasubbarao |
| Pradeep Korivi | [email protected] | https://www.linkedin.com/in/pradeepkorivi |
| Sergey Morozov | [email protected] | https://www.linkedin.com/in/aoool |

All team members contributed equally to the project.

*4Tzones* means "Four Time Zones," indicating that team members were located in 4 different time zones while working on this project. The time zones range from UTC+1 to UTC+8.
## Software Architecture



Note that obstacle detection is not implemented for this project.
## Traffic Light Detection Node

A large part of the project is to implement a traffic light detector/classifier that recognizes the color of the nearest upcoming traffic light and publishes it to the `/waypoint_updater` node, so it can prepare the car to speed up or slow down accordingly. Because real-world images differ substantially from simulator images, we tried different approaches for each. The approaches that worked best are described below.
### Simulator (Highway) --- OpenCV Approach
In this approach we used basic OpenCV features to solve the problem; the steps are described below.
* The image is transformed to the HSV colorspace, where the color feature can be extracted easily.
* A mask is applied to isolate red pixels in the image.
* Contour detection is performed on the masked image.
* For each contour, the area is checked; if it falls within the approximate area of a traffic light, polygon detection is performed and the number of sides is checked against the minimum required for a closed polygon.
* If all of the above conditions are satisfied, there is a red light in the image.
#### Pros
* This approach is very fast.
* It uses minimal resources.

#### Cons
* It is not robust enough; the thresholds always need to be adjusted.
* It doesn't work properly on real-world data, as there is a lot of noise.
### Real World (Test Lot) --- YOLOv3-tiny (You Only Look Once)
We used this approach for the real world.
TODO: write about it
### Real World (Test Lot) --- SSD (Single Shot Detection)
We need to solve both object detection --- where in the image is the object --- and object classification --- given detections on an image, classifying traffic lights. While some teams approached these as two separate problems, recent advances in deep learning have produced models that attempt to solve both at once, for example SSD (Single Shot Multibox Detection) and YOLO (You Only Look Once).

We attempted transfer learning using the pre-trained SSD_inception_v2 model trained on the COCO dataset, retraining it on our own dataset for NUM_EPOCHS and achieving a final loss of FINAL_LOSS.
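For reference, retraining with the TensorFlow Object Detection API is driven by the model's `pipeline.config`; the fragment below is only a sketch of the kind of edits involved, with assumed paths, class count, and step count (the README leaves NUM_EPOCHS unspecified):

```
model {
  ssd {
    num_classes: 3  # assumed label map: red, yellow, green
    # ... remaining ssd_inception_v2 settings from the downloaded config
  }
}
train_config {
  batch_size: 24
  # Start from the COCO-pretrained checkpoint (hypothetical local path).
  fine_tune_checkpoint: "ssd_inception_v2_coco_2018_01_28/model.ckpt"
  num_steps: 20000  # assumed; tune to the dataset
}
train_input_reader {
  tf_record_input_reader {
    input_path: "data/train.record"  # hypothetical TFRecord path
  }
  label_map_path: "data/label_map.pbtxt"  # hypothetical label map path
}
```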
Here is a sample of the dataset:


Sample dataset for simulator images:


Here are the results of our trained model.
(Insert image here!)
### Dataset

#### Image Collection
We used images from 3 ROS bags provided by Udacity:
* [traffic_lights.bag](https://s3-us-west-1.amazonaws.com/udacity-selfdrivingcar/traffic_light_bag_file.zip)
* [just_traffic_light.bag](https://drive.google.com/file/d/0B2_h37bMVw3iYkdJTlRSUlJIamM/view?usp=sharing)
* [loop_with_traffic_light.bag](https://drive.google.com/file/d/0B2_h37bMVw3iYkdJTlRSUlJIamM/view?usp=sharing)
### Other approaches for traffic light detection

We experimented with a few other (unsuccessful) approaches to traffic light detection.

#### Idea

The idea is to use the entire image with a given traffic light color as an individual class. This means we have 4 classes:

1. Entire image showing a `yellow` traffic light
2. Entire image showing a `green` traffic light
3. Entire image showing a `red` traffic light
4. Entire image showing `no` traffic light
#### Dataset

We created a dataset by combining two other datasets that were already made available [here](https://github.com/alex-lechner/Traffic-Light-Classification) and [here](https://github.com/coldKnight/TrafficLight_Detection-TensorFlowAPI#get-the-dataset).

The combined dataset can be found [here](https://www.dropbox.com/s/k8l0aeopw544lud/simulator.tgz?dl=0).
#### Models

We trained a couple of models:

1. A simple CNN with two convolutional layers, a fully connected layer, and an output layer. The initial results looked promising, with `training accuracy > 97%` and `test accuracy > 90%`. However, when we deployed and tested the model, the results were not consistent: the car did not always stop at red lights, and sometimes it did not move even when the lights were green. Efforts to achieve higher accuracies were in vain.

2. Transfer learning for the multi-class classification approach using the `VGG19` and `InceptionV3` models with `imagenet` weights. The network did not learn anything after `1-2` epochs, and hence the training accuracy never exceeded `65%`.
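A minimal Keras sketch of the first (simple CNN) model; the input size, filter counts, and layer widths are assumptions, since the original hyperparameters are not recorded in this README:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_simple_cnn(input_shape=(64, 64, 3), num_classes=4):
    """Two conv layers, one fully connected layer, and a softmax output
    over the 4 whole-image classes (yellow, green, red, none)."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Untrained forward pass just to show the expected input/output shapes.
model = build_simple_cnn()
probs = model.predict(np.zeros((1, 64, 64, 3), dtype=np.float32), verbose=0)
```

As noted above, even with high training accuracy such a whole-image classifier proved inconsistent once deployed.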
### Learning Points

### Future Work

### Acknowledgements

- We would like to thank Udacity for providing the instructional videos and learning resources.
- We would like to thank Alex Lechner for his wonderful tutorial on how to do transfer learning with TensorFlow Object Detection API research models and get it running on older TensorFlow versions, as well as for providing datasets. You can view his README here: https://github.com/alex-lechner/Traffic-Light-Classification/blob/master/README.md#1-the-lazy-approach