
Traffic Light Classification


Traffic Light Classification

The goals/steps of this project are the following:

  • Gather and label the datasets
  • Transfer learning on a TensorFlow model
  • Classify the state of traffic lights
  • Summarize the results with a written report

Table of Contents

  1. Introduction
  2. Set up TensorFlow
    1. Windows 10
    2. Linux
  3. Datasets
    1. The Lazy Approach
    2. The Diligent Approach
      1. Extract images from a ROSbag file
      2. Data labeling
      3. Create a TFRecord file
  4. Training
    1. Choosing a model
    2. Configure the .config file of the model
    3. Set up an AWS spot instance
    4. Training the model
    5. Freezing the graph
  5. Recommendation: Use SSD Inception V2
    1. Conclusion
  6. Troubleshooting
  7. Summary

Introduction

The goal of this project was to retrain a TensorFlow model on images of traffic lights in their different light states. The trained model was then used in the final capstone project of the Udacity Self-Driving Car Engineer Nanodegree Program as a frozen inference graph. Our project (and the implementation of the frozen graph) can be found here: Drive Safely Capstone Project

The following guide is a detailed tutorial on how to set up the traffic light classification project, to (re)train the TensorFlow model and to avoid the mistakes I made. For my project I've read Daniel Stang's, Anthony Sarkis' and Vatsal Srivastava's Medium posts on traffic light classification. I encourage you to read through them as well. However, even though they were comprehensible and gave a basic understanding of the problem, the authors still missed the biggest and hardest part of the project: setting up a training environment and retraining the TensorFlow model.

I will now try to cover all the steps necessary from beginning to end to get a working classifier. Also, this tutorial is Windows-friendly since the project was done on Windows 10 for the most part. I suggest reading through this tutorial first before following along.

If you run into any errors during this tutorial (and you probably will) please check the Troubleshooting section.

Set up TensorFlow

If a technical recruiter ever asks me:

"Describe the toughest technical problem you've worked on."

my answer will definitely be:

"Get TensorFlow to work!"

Seriously, if someone from the TensorFlow team is reading this: Clean up your folder structure, use descriptive folder names, merge your READMEs and - more importantly - fix your library!!!

But enough of Google bashing - they're doing a good job, but the library still has teething troubles (and a user-unfriendly installation setup).

I will now show you how to install the TensorFlow 'models' repository on Windows 10 and Linux. The Linux setup is easier, and if you don't have a powerful GPU on your local machine I strongly recommend doing the training on an AWS spot instance because this will save you a lot of time. Nevertheless, you can do the basic stuff like data preparation and data preprocessing on your local machine, but I suggest doing the training on an AWS instance. I will show you how to set up the training environment in the Training section.

Windows 10

  1. Install TensorFlow version 1.4 by executing the following statement in the Command Prompt (this assumes you have python.exe set in your PATH environment variable)

                      pip install tensorflow==1.4
  2. Install the following python packages

                      pip install pillow lxml matplotlib                                  
  3. Download protoc-3.4.0-win32.zip from the Protobuf repository (It must be version 3.4.0!)

  4. Extract the Protobuf .zip file e.g. to C:\Program Files\protoc-3.4.0-win32

  5. Create a new directory somewhere and name it tensorflow

  6. Clone TensorFlow's models repository from the tensorflow directory by executing

                      git clone https://github.com/tensorflow/models.git                                  
  7. Navigate to the models directory in the Command Prompt and check out the pinned commit of the 'models' repository

    This is important because the code from the master branch won't work with TensorFlow version 1.4. Also, this commit has already fixed broken models from previous commits.

  8. Navigate to the research folder and execute

    ## The quotation marks are needed!
    "C:\Program Files\protoc-3.4.0-win32\bin\protoc.exe" object_detection/protos/*.proto --python_out=.
  9. If step 8 executed without any error then execute python object_detection/builders/model_builder_test.py

  10. In order to access the modules from the research folder from anywhere, the models, models/research, models/research/slim & models/research/object_detection folders need to be set as PATH variables like so:

    10.1. Go to System -> Advanced system settings -> Environment Variables... -> New... -> name the variable PYTHONPATH and add the absolute paths of the folders mentioned above

    pythonpath

    10.2. Double-click on the Path variable and add %PYTHONPATH%

    path variable
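
For example, assuming you created the tensorflow directory at C:\tensorflow (an assumption for illustration - adapt the paths to your setup), the PYTHONPATH value would look like this:

    C:\tensorflow\models;C:\tensorflow\models\research;C:\tensorflow\models\research\slim;C:\tensorflow\models\research\object_detection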

Source: cdahms' question/tutorial on Stackoverflow.

Linux

  1. Install TensorFlow version 1.4 by executing

                      pip install tensorflow==1.4                                  
  2. Install the following packages

                      sudo apt-get install protobuf-compiler python-pil python-lxml python-tk                                  
  3. Create a new directory somewhere and name it tensorflow

  4. Clone TensorFlow's models repository from the tensorflow directory by executing

                      git clone https://github.com/tensorflow/models.git                                  
  5. Navigate to the models directory in the terminal and check out the pinned commit of the 'models' repository

    This is important because the code from the master branch won't work with TensorFlow version 1.4. Also, this commit has already fixed broken models from previous commits.

  6. Navigate to the research folder and execute

                      protoc object_detection/protos/*.proto --python_out=.
                      export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim
  7. If step 6 executed without any errors then execute

                      python object_detection/builders/model_builder_test.py                                  

Datasets

As always in deep learning: Before you start coding you need to gather the right datasets. For this project, you will need images of traffic lights with labeled bounding boxes. In sum, there are 4 datasets you can use:

  1. Bosch Small Traffic Lights Dataset
  2. LaRA Traffic Lights Recognition Dataset
  3. Udacity's ROSbag file from Carla
  4. Traffic lights from Udacity's simulator

I ended up using only Udacity's ROSbag file from Carla, and if you carefully follow along with this tutorial the images from the ROSbag file will be enough to get a working classifier for real-world AND simulator examples. There are 2 approaches to getting the data from the ROSbag file (and from Udacity's simulator):

1. The Lazy Approach

You can download Vatsal Srivastava's dataset and my dataset for this project. The images are already labeled and a TFRecord file is provided as well:

  1. Vatsal's dataset
  2. My dataset

Both datasets include images from the ROSbag file and from the Udacity Simulator.

My dataset is a little sparse (at least the amount of yellow traffic lights is small) but Vatsal's dataset has enough images to train on. Nevertheless, I encourage you to use both. For example, I used Vatsal's data for training and mine for evaluation.

2. The Diligent Approach

If you have enough time, love to label images, read tutorials about traffic light classification before this one or want to gather more data, then this is the way to go:

2.1 Extract images from a ROSbag file

For the simulator data, my team colleagues Clifton Pereira and Ian Burris drove around the track in the simulator and recorded a ROSbag file of their rides. Because Udacity provides the students with a ROSbag file from their car named Carla, on which (our and) your capstone project will be tested, the code/process for extracting images will be (mostly) the same. The steps below assume you have ros-kinetic installed either on your local machine (if you have Linux as an operating system) or in a virtual environment (if you have Windows or Mac as an operating system).

  1. Open a terminal and launch ROS

  2. Open another terminal (but do NOT close or exit the first terminal!) and play the ROSbag file

    rosbag play -l path/to/your_rosbag_file.bag
  3. Create a directory where you want to save the images

  4. Open another, third terminal, navigate to the newly created directory and...

    1. ...execute the following statement if you have a ROSbag file from Udacity's simulator:

      rosrun image_view image_saver _sec_per_frame:=0.01 image:=/image_color
    2. ...execute the following statement if you have a ROSbag file from Udacity's car Carla:

      rosrun image_view image_saver _sec_per_frame:=0.01 image:=/image_raw

    As you can see, the only difference is the rostopic after image:=.

These steps will extract the (camera) images from the ROSbag file into the folder where the code is executed. Please keep in mind that the ROSbag file runs in an infinite loop and won't stop when the recording originally ended, so it will automatically start again from the beginning. If you think you have enough data you should interrupt one of the open terminals.

If you can't execute step 4.1 or 4.2 you probably don't have image_view installed. To fix this, install image_view with sudo apt-get install ros-kinetic-image-view.

Hint: You can see the recorded footage of your ROSbag file by opening another, fourth terminal and executing rviz.

2.2 Data labeling

After you have your dataset you will need to label it by hand. For this process I recommend downloading labelImg. It's very user-friendly and easy to set up.

  1. Open labelImg, click on Open Dir and select the folder of your traffic lights
  2. Create a new folder within the traffic lights folder and name it labels
  3. In labelImg click on Change Save Dir and choose the newly created labels folder

Now you can start labeling your images. When you have labeled an image with a bounding box, hit the Save button and the program will create a .xml file with a link to your labeled image and the coordinates of the bounding boxes.
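
For reference, such an annotation file (Pascal VOC format, which labelImg writes by default) roughly looks like the following abridged example; the file name and pixel coordinates here are made up for illustration:

    <annotation>
        <filename>img01.jpg</filename>
        <size>
            <width>1368</width>
            <height>1096</height>
            <depth>3</depth>
        </size>
        <object>
            <name>Red</name>
            <bndbox>
                <xmin>610</xmin>
                <ymin>282</ymin>
                <xmax>647</xmax>
                <ymax>376</ymax>
            </bndbox>
        </object>
    </annotation>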

Pro tip: I'd recommend splitting your traffic light images into 3 folders: Green, Yellow, and Red. The advantage is that you can check Use default label and use e.g. Red as the input for your red traffic light images, and the program will automatically choose Red as the label for your drawn bounding boxes.

labeling a traffic light

2.3 Create a TFRecord file

Now that you have your labeled images you will need to create a TFRecord file in order to retrain a TensorFlow model. A TFRecord is a binary file format which stores your images and ground truth annotations. But before you can create this file you will need the following:

  1. A label_map.pbtxt file which contains your labels (Red, Green, Yellow & off) with an ID (IDs must start at 1 instead of 0)
  2. Set up TensorFlow
  3. A script which creates the TFRecord file for you (feel free to use my create_tf_record.py file for this process)

Please keep in mind that your label_map.pbtxt file can have more than 4 labels depending on your dataset. For example, if you're using the Bosch Small Traffic Lights Dataset you will most likely have about 13 labels.
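
A minimal label_map.pbtxt for the 4 labels above could look like this (the names just have to match the labels you used when annotating, and the IDs must start at 1):

    item {
      id: 1
      name: 'Green'
    }
    item {
      id: 2
      name: 'Red'
    }
    item {
      id: 3
      name: 'Yellow'
    }
    item {
      id: 4
      name: 'off'
    }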

In case you are using the dataset from Bosch, all labels and bounding boxes are stored in a .yaml file instead of a .xml file. If you are developing your own script to create a TFRecord file you will have to take care of this. If you are using my script I will now explain how to execute it and what it does:

For datasets with .yaml files (e.g. the Bosch dataset) execute:

              python create_tf_record.py --data_dir=path/to/your/data.yaml --output_path=your/path/filename.record --label_map_path=path/to/your/label_map.pbtxt                          

For datasets with .xml files execute:

              python create_tf_record.py --data_dir=path/to/green/lights,path/to/red/lights,path/to/yellow/lights --annotations_dir=labels --output_path=your/path/filename.record --label_map_path=path/to/your/label_map.pbtxt

You will know that everything worked fine if your .record file has nearly the same size as the sum of the sizes of your images. Also, you have to execute this script separately for your training set, your validation set (if you have one) and your test set.

As you can see, you don't need to specify the annotations_dir= flag for .yaml files because everything is already stored in the .yaml file.

The second code snippet (for datasets with .xml files) assumes you have the following folder structure:

              path/to
              |
              └─green/lights
              │   │  img01.jpg
              │   │  img02.jpg
              │   │  ...
              |   |
              │   └──labels
              │      │   img01.xml
              │      │   img02.xml
              │      │   ...
              |
              └─red/lights
              │   │  ...
              |   |
              │   └──labels
              │      │   ...
              |
              └─yellow/lights
              │   │  ...
              |   |
              │   └──labels
              │      │   ...

Important note about the dataset from Bosch: This dataset is very large because every image takes approximately 1 MB of space. However, I've managed to reduce the size of each image drastically by simply converting it from a .png file to a .jpg file (for some reason the people from Bosch saved all images as PNG). You want to know what I mean by 'drastically'? Before the conversion from PNG to JPEG, my .record file for the test set was 11.3 GB in size. After the conversion, my .record file for the test set was only 842 MB in size. I know... 😮 😮 😮 Trust me, I've checked the code and images and tested my script multiple times until I was finally convinced. The image conversion is already implemented in the create_tf_record.py file.
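
To illustrate what such a script does under the hood, here is a minimal, simplified sketch (not the author's create_tf_record.py) of how one labeled image is turned into a tf.train.Example and written to a .record file. The feature keys follow the TensorFlow Object Detection API convention; the helper name, paths and values are illustrative assumptions:

    import tensorflow as tf

    def make_example(jpeg_bytes, height, width, boxes, class_texts, class_ids):
        """boxes: list of (xmin, ymin, xmax, ymax), normalized to [0, 1]."""
        feature = {
            'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[jpeg_bytes])),
            'image/format': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'jpeg'])),
            'image/height': tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
            'image/width': tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
            'image/object/bbox/xmin': tf.train.Feature(float_list=tf.train.FloatList(value=[b[0] for b in boxes])),
            'image/object/bbox/ymin': tf.train.Feature(float_list=tf.train.FloatList(value=[b[1] for b in boxes])),
            'image/object/bbox/xmax': tf.train.Feature(float_list=tf.train.FloatList(value=[b[2] for b in boxes])),
            'image/object/bbox/ymax': tf.train.Feature(float_list=tf.train.FloatList(value=[b[3] for b in boxes])),
            'image/object/class/text': tf.train.Feature(bytes_list=tf.train.BytesList(value=[t.encode() for t in class_texts])),
            'image/object/class/label': tf.train.Feature(int64_list=tf.train.Int64List(value=class_ids)),
        }
        return tf.train.Example(features=tf.train.Features(feature=feature))

    # Write all examples into one .record file (TensorFlow 1.4 API).
    writer = tf.python_io.TFRecordWriter('data/train.record')
    # for each labeled image: parse its .xml/.yaml, load the JPEG bytes, then
    #     writer.write(make_example(...).SerializeToString())
    writer.close()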

Training

1. Choosing a model

So far you should have a TFRecord file of the dataset(s) which you have either downloaded or created yourself. Now it's time to select a model which you will train. You can see the stats of and download the TensorFlow models from the model zoo. In sum, I've trained 3 TensorFlow models and compared them based on their performance and precision:

  • SSD Inception V2 Coco (17/11/2017) Pro: Very fast, Con: Not good generalization on different data
  • SSD Inception V2 Coco (11/06/2017) Pro: Very fast, Con: Not good generalization on different data
  • Faster RCNN Inception V2 Coco (28/01/2018) Pro: Good precision and generalization on different data, Con: Slow
  • Faster RCNN Resnet101 Coco (11/06/2017) Pro: Highly accurate, Con: Very slow

Our team ended up using SSD Inception V2 Coco (17/11/2017) because it has good results for its performance.

You may ask yourself why the date after the model's name is important. As I've mentioned in the TensorFlow setup section above, it's very important to check out a specific commit from the 'models' repository because the team has fixed broken models. That's why the date matters. And if you don't want to see the following results after a very long training session I encourage you to stick to the newest models or the ones I've linked above:

bad performance

You get these results too if you have too few training steps. You can imagine how much time I've spent figuring this out...

After you've downloaded a model, create a new folder e.g. models and unpack the model with 7-zip on Windows or tar -xvzf your_tensorflow_model.tar.gz on Linux.

2. Configure the .config file of the model

You will need to download the .config file for the model you've chosen, or you can simply download the .config files of this repository if you've decided to train the images on one of the models mentioned above.

If you want to configure them on your own there are some important changes you need to make. For this walkthrough, I will assume you are training on the Udacity Carla dataset with SSD Inception V2.

TensorFlow model configs might differ but the following steps are the same for every model!

  1. Change num_classes: 90 to the number of labels in your label_map.pbtxt. This will be num_classes: 4
  2. Set the default max_detections_per_class: 100 and max_total_detections: 300 values to a lower value, for example max_detections_per_class: 10 and max_total_detections: 10
  3. Change fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt" to the directory where your downloaded model is stored, e.g.: fine_tune_checkpoint: "models/your_tensorflow_model/model.ckpt"
  4. Set num_steps: 200000 down to num_steps: 20000
  5. Change the PATH_TO_BE_CONFIGURED placeholders in input_path and label_map_path to your .record file(s) and label_map.pbtxt (see the abridged sketch below)
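
For orientation, the relevant parts of the pipeline .config roughly look like this after the changes above (an abridged sketch; the field names follow the TensorFlow Object Detection API, and the paths are placeholders you have to adapt):

    model {
      ssd {
        num_classes: 4
        # ...
      }
    }
    train_config {
      fine_tune_checkpoint: "models/your_tensorflow_model/model.ckpt"
      num_steps: 20000
      # ...
    }
    train_input_reader {
      tf_record_input_reader {
        input_path: "data/your_train.record"
      }
      label_map_path: "data/label_map.pbtxt"
    }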

For Faster RCNN Inception V2:

  1. Change the default min_dimension: 600 and max_dimension: 1024 values to the minimum value (height) and the maximum value (width) of your images like so

                      keep_aspect_ratio_resizer {
                          min_dimension: 1096
                          max_dimension: 1368
                      }
  2. You can increase batch_size: 1 to batch_size: 3 or even higher

If you don't want to use evaluation/validation in your training, simply remove those blocks from the config file. However, if you do use it, make sure to set num_examples in the eval_config block to the sum of images in your .record file.
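
A minimal sketch of that block (the number is just an illustrative assumption - use the number of images in your evaluation .record file):

    eval_config {
      num_examples: 250
      # ...
    }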

You can take a look at the .config files of this repository for reference. I've configured a few things like batch size and dropout as well. As I've mentioned earlier, I used Vatsal's dataset for training and my dataset for validation, so don't get confused by the filename of my .record file jpg_udacity_train.record.

3. Set up an AWS spot instance

For training, I recommend setting up an AWS spot instance. Training will be much faster and you can train multiple models simultaneously on different spot instances (like I did):

simultaneous training Left: Training Faster RCNN Inception V2 Coco, Right: Training SSD Inception V2 Coco

To set up an AWS spot instance do the following steps:

  1. Log in to your Amazon AWS account
  2. Navigate to EC2 -> Instances -> Spot Requests -> Request Spot Instances
  3. Under AMI click on Search for AMI, type udacity-carnd-advanced-deep-learning in the search field, choose Community AMIs from the drop-down and select the AMI (This AMI is only available in US regions so make sure you request a spot instance from there!)
  4. Delete the default instance type, click on Select and select the p2.xlarge instance
  5. Uncheck the Delete checkbox under EBS Volumes so your progress is not deleted when the instance gets terminated
  6. Set Security Groups to default
  7. Select your key pair under Key pair name (if you don't have one create a new key pair)
  8. At the very bottom set Request valid until to about 5 - 6 hours and set Terminate instances at expiration as checked (You don't have to do this, but keep in mind that you'll receive a very big bill from AWS if you forget to terminate your spot instance, because the default value for termination is set to 1 year.)
  9. Click Launch, wait until the instance is created and then connect to your instance via ssh

spot instance

4. Training the model

  1. When you're connected to the instance execute the following statements consecutively:

    sudo apt-get update
    pip install --upgrade dask
    pip install tensorflow-gpu==1.4
  2. Set up TensorFlow for Linux (but skip step 1 because we've already installed tensorflow-gpu!)

  3. Clone your classification repository and create the folders models & data (in your project folder) if they are not tracked by your VCS.

  4. Upload the datasets to the data folder

    1. If you're using my dataset you can simply execute the following statements in the data folder:

      wget https://www.dropbox.com/s/vaniv8eqna89r20/alex-lechner-udacity-traffic-light-dataset.zip?dl=0
      ## Don't miss the ``?dl=0`` part when unzipping!
      unzip alex-lechner-udacity-traffic-light-dataset.zip?dl=0
  5. Navigate to the models folder in your project folder and download your TensorFlow model with

    wget http://download.tensorflow.org/models/object_detection/your_tensorflow_model.tar.gz
    tar -xvzf your_tensorflow_model.tar.gz
  6. Copy the file train.py from the tensorflow/models/research/object_detection folder to the root of your project folder

  7. Train your model by executing the following statement in the root of your project folder

                      python train.py --logtostderr --train_dir=./models/train --pipeline_config_path=./config/your_tensorflow_model.config

5. Freezing the graph

When training is finished the trained model needs to be exported as a frozen inference graph. Udacity's Carla has TensorFlow version 1.3 installed. However, the minimum version of TensorFlow needs to be version 1.4 in order to freeze the graph, but note that this does not raise any compatibility issues. If you've trained the graph with a version of TensorFlow higher than 1.4, don't panic! As long as you downgrade TensorFlow to version 1.4 before running the script to freeze the graph you should be fine. To freeze the graph:

  1. Copy export_inference_graph.py from the tensorflow/models/research/object_detection folder to the root of your project folder

  2. Now freeze the graph by executing

                      python export_inference_graph.py --input_type image_tensor --pipeline_config_path ./config/your_tensorflow_model.config --trained_checkpoint_prefix ./models/train/model.ckpt-20000 --output_directory models                                  

    This will freeze and output the graph as frozen_inference_graph.pb.
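
To sanity-check the exported graph (and to see how it is later used for classification), you can load it with the standard TensorFlow 1.x frozen-graph API. This is only a minimal sketch under the assumption that the graph was exported by export_inference_graph.py as above; the image here is a dummy placeholder:

    import numpy as np
    import tensorflow as tf

    # Load the frozen graph produced by export_inference_graph.py.
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile('models/frozen_inference_graph.pb', 'rb') as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

    with tf.Session(graph=graph) as sess:
        # Replace the dummy image with a real camera image (shape: 1 x height x width x 3, uint8).
        image = np.zeros((1, 600, 800, 3), dtype=np.uint8)
        boxes, scores, classes = sess.run(
            ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
            feed_dict={'image_tensor:0': image})
        # classes map back to the IDs in your label_map.pbtxt, scores are the certainties.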

Recommendation: Use SSD Inception V2

At first, our team was using the Faster RCNN Inception V2 model. This model takes about 2.9 seconds to classify images, which is - despite the name of the model - not that fast. The advantage of training the Faster RCNN Inception V2 is the generalization of the model to new, different & unseen images, which means the model was only trained on the image data of Udacity's parking lot and was still able to classify the light state of the traffic lights in the simulator too. So why did we change the model to SSD Inception V2?

Our code was successfully tested on Carla but it failed in the simulator. This might sound funny - and it really is - but the reason why it failed is that the frequency of changing lights in the simulator is set ridiculously high, so the light was changing every 2 - 3 seconds. The configuration of our traffic light detector node in our project requires 3 consecutive classifications of the same traffic light state until the final state (Red, Green, Yellow or Unknown) and action is passed to the agent/car (a sketch of this logic is shown below). That's the reason why we changed the model from Faster RCNN Inception V2 to SSD Inception V2.
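
To make the timing problem concrete, here is a hypothetical Python sketch of such a "3 consecutive classifications" rule (illustrative only, not our actual detector node code; the names and return values are assumptions):

    STATE_COUNT_THRESHOLD = 3

    class LightStateFilter:
        """Only confirm a new light state after it was classified 3 times in a row."""

        def __init__(self):
            self.candidate_state = 'UNKNOWN'
            self.count = 0
            self.confirmed_state = 'UNKNOWN'

        def update(self, new_state):
            # Count how often the same state was seen in a row.
            if new_state == self.candidate_state:
                self.count += 1
            else:
                self.candidate_state = new_state
                self.count = 1
            # Only switch the confirmed state after 3 identical classifications.
            if self.count >= STATE_COUNT_THRESHOLD:
                self.confirmed_state = self.candidate_state
            return self.confirmed_state

With the simulator switching lights every 2 - 3 seconds and the Faster RCNN model needing about 2.9 seconds per image, 3 consecutive identical classifications were practically never reached in time, which is why the faster SSD model was the better fit for us.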

The good thing about SSD Inception V2 is its speed and performance. Sometimes the SSD model fails to classify an image with over 50% certainty, but in general it is doing a good job for its performance. However, unlike the Faster RCNN Inception V2, the model does not do a good job of classifying new, different images. For example, I trained the SSD model first on Udacity's parking lot data with 10,000 steps and it did a good job of classifying the parking lot traffic lights, but the model did not classify a single image from the simulator data. After that, I did transfer learning on the simulator data with 10,000 steps as well. After the training something interesting happened: The model was able to classify the simulator data BUT the model "forgot" about its previous training on the Udacity parking lot data and therefore only classified 2 out of 10 images from the Udacity parking lot dataset.

Conclusion

Our team is now using 2 trained SSD Inception V2 models for our capstone project:

  • 1 SSD model for real-world data
  • 1 SSD model for simulator data

If you are using this approach as well, I recommend training 2 SSD models simultaneously on AWS instances. Because the SSD model "forgets" about the old trained data you don't have to do transfer learning and you can safely train 1 model on simulator data and 1 model on real-world data separately (and simultaneously), which will save you a tremendous amount of time.

SSD trained on parking lot images SSD trained on simulator images
ssd udacity ssd simulator

Take a look at the Jupyter Notebook to see the results.

UPDATE: At first, I trained both SSD models with "just" 10,000 steps and the results were okay. In order to get better results, I trained them for another 10,000 steps, so I'd recommend training both models with 20,000 steps in sum. To give you an example: Both SSD models had a problem classifying traffic lights which were far away after the first 10,000-step session. After training them for another 10,000 steps this problem was solved (and they had a higher certainty in classifying the light state as well).

Troubleshooting

In case you're running into any of the errors listed below, the solutions provided will fix it:

  • ValueError: Tried to convert 't' to a tensor and failed. Error: Argument must be a dense tensor: range(0, 3) - got shape [3], but wanted [].

Go to tensorflow/models/research/object_detection/utils and edit the learning_schedules.py file. Go to line 167 and replace it with:

              rate_index = tf.reduce_max(tf.where(tf.greater_equal(global_step, boundaries),
                                                  list(range(num_boundaries)),
                                                  [0] * num_boundaries))

source: epratheeban's answer on GitHub

  • ValueError: Protocol message RewriterConfig has no "optimize_tensor_layout" field.

Go to tensorflow/models/research/object_detection/ and edit the exporter.py file. Go to line 71 and change optimize_tensor_layout to layout_optimizer.

If the same error occurs with the message [...] has no "layout_optimizer" field. then you have to change layout_optimizer to optimize_tensor_layout.

  • Can't ssh into the AWS instance because of port 22: Resource temporarily unavailable

Go to Network & Security -> Security Groups -> right click on the security group that is used on your spot instance (probably default) -> Edit inbound rules and set Source of SSH and Custom TCP to Custom and 0.0.0.0/0 like so:

aws inbound rules

  • Can't install packages on Linux because of dpkg: error: dpkg status database is locked by another process

This error will probably occur when trying to execute sudo apt-get install protobuf-compiler python-pil python-lxml python-tk on the AWS spot instance after upgrading tensorflow-gpu to version 1.4. Execute the following lines and try installing the packages again:

sudo rm /var/lib/dpkg/lock
sudo dpkg --configure -a
  • tensorflow.python.framework.errors_impl.InternalError: Dst tensor is not initialized.

This error occurs when you don't have enough free memory available on your GPU to train. To fix this, execute sudo fuser -v /dev/nvidia* and look for the process that is currently using your GPU memory.

kill memory

Then kill the process by executing sudo kill -9 <PID-to-kill>

Summary

If you are using Vatsal's and my dataset you only need to:

  1. Download the datasets
  2. Set up TensorFlow only on the training instance, do the training and export the model

If you are using your own dataset you need to:

  1. Set up TensorFlow locally (for creating TFRecord files)
  2. Create your own datasets
  3. Set up TensorFlow again on a training instance (if the training instance is not your local machine), do the training and export the model

Training instance = system where you train the TensorFlow model (probably an AWS instance and not your local machine)


Source: https://github.com/alex-lechner/Traffic-Light-Classification
