The aim of the semantic segmentation project is to label the pixels of a road in images with the use of neural networks. In contrast to the "Vehicle Detection" or "Traffic Sign Classifier" projects in Term 1, pixel-by-pixel spatial information is preserved and written back to the images after inference.
This project builds a fully convolutional network (FCN). FCNs consist of several (convolutional) encoder layers that extract certain information, just as in a convolutional neural network (CNN). In contrast to CNNs, these convolutions are not followed by fully-connected layers but by 1x1 convolutions, which allow "matrix multiplication with spatial information". The next layers are transposed convolutions, which upsample back to an output layer of the same size as the initial input layer. The FCN is completed by adding "skip connections", which preserve certain spatial information by bypassing part of the encoding and decoding layers.
In this project, the encoding part of the FCN will not be trained. Instead, a pre-trained and "frozen" network (VGG16) is used. To keep the training effort at a manageable level, only the decoding layers are trained ("transfer learning").
As with the other projects of the Self-Driving Car Nanodegree, most of the code was provided by Udacity. All open TODOs are found within the main.py script, which is divided into several functions:
- load_vgg is used to load the frozen "VGG16" neural network and extract the handles/pointers to the layers that will be used to build the FCN.
- layers builds the fully convolutional neural network. As it is used multiple times, I decided to define the 1x1 convolution function inside the layers function. The layout of the FCN was adopted from FCN-8s in J. Long, E. Shelhamer, T. Darrell, "Fully Convolutional Networks for Semantic Segmentation"; see below.
- optimize sets up the optimizer used for training. Looking back at the Traffic Sign Classifier project in Term 1 helped a lot in defining this function.
- train_nn is the actual training function for the neural network. Batches of images with the correct labels are fed to the optimizer. Again, many of the necessary lines were taken over or adapted from the Term 1 project.
- run is the routine that calls all other functions in order to create the TensorFlow session, set up the neural network, and train it. In the end, a course-provided helper function saves the inference results as PNGs. Examples are shown below in the "Inference Results" section.
In this project, you'll label the pixels of a road in images using a Fully Convolutional Network (FCN).
Make sure you have the following installed:
Download the Kitti Road dataset from here. Extract the dataset in the data folder. This will create the folder data_road with all the training and test images.
Implement the code in the main.py module indicated by the "TODO" comments.
The comments indicated with "OPTIONAL" tag are not required to complete.
Run the following command to run the project:
python main.py
Note: If running this in a Jupyter Notebook, system messages such as those regarding test status may appear in the terminal rather than the notebook.
- Ensure you've passed all the unit tests.
- Ensure you pass all points on the rubric.
- Submit the following in a zip file:
  - helper.py
  - main.py
  - project_tests.py
  - Newest inference images from the runs folder (all images from the most recent run)
- The link for the frozen VGG16 model is hardcoded into helper.py. The model can be found here.
- The model is not vanilla VGG16, but a fully convolutional version, which already contains the 1x1 convolutions to replace the fully connected layers. Please see this forum post for more information.

A summary of additional points follows:

- The original FCN-8s was trained in stages. The authors later uploaded a version that was trained all at once to their GitHub repo. The version in the GitHub repo has one important difference: the outputs of pooling layers 3 and 4 are scaled before they are fed into the 1x1 convolutions. As a result, some students have found that the model learns much better with the scaling layers included. The model may not converge substantially faster, but it may reach a higher IoU and accuracy.
- When adding L2 regularization, setting a regularizer in the arguments of the tf.layers functions is not enough. Regularization loss terms must be manually added to your loss function; otherwise, regularization is not implemented.
If you are unfamiliar with GitHub, Udacity has a brief GitHub tutorial to get you started. Udacity also provides a more detailed free course on git and GitHub.
To learn about README files and Markdown, Udacity provides a free course on READMEs, as well.
GitHub also provides a tutorial about creating Markdown files.