An automated lane keeping system is a closed-loop control system employed in semi-autonomous and autonomous vehicles. It works to keep a car in its lane by steering and by applying throttle and brakes. Automated lane keeping systems are typically considered Level 3 in SAE’s levels of driving automation.
The SAE’s J3016 automation standard is the most widely adopted formal classification of autonomous driving systems, and it has been adopted by organizations such as the National Highway Traffic Safety Administration (NHTSA), among others. The standard defines six levels of driving automation, ranging from Level 0 (no driving automation) through Level 3 (conditional driving automation) to Level 5 (full driving automation).
Closed-loop control systems differ from open-loop control systems in that they feature a feedback loop between the plant (for example, a motor) and the controller. This feedback allows the system to self-correct in real time as operating conditions change.
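To illustrate the idea, here is a minimal sketch of a closed loop, assuming a simple proportional controller and a toy first-order plant; the names and gains are hypothetical, not part of the tutorial model:

```python
def control_step(setpoint, measurement, kp=0.5):
    """One controller step: feedback compares the measured output
    against the target and returns a corrective command."""
    error = setpoint - measurement
    return kp * error

position = 0.0  # measured output, fed back to the controller
for _ in range(20):
    command = control_step(setpoint=1.0, measurement=position)
    position += 0.1 * command  # toy plant: the output responds to the command
```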
Before moving any system to production, vehicle manufacturers and Tier 1 suppliers must conduct end-to-end verification and validation. This allows them to examine the functionality of their systems and ensure the safety of passengers and of other vehicles on the road.
Closed-loop system testing is part of the verification and validation process. It provides robust statistical evidence about how an autonomous vehicle system behaves. Done right, it helps engineers spot areas where the system falls short, informing what to prioritize next.
AirSim, CarSim, and Applied Intuition are popular tools that vehicle manufacturers employ to test their vehicle models thoroughly in a 3D car simulator environment. Collimator is designed to integrate seamlessly with these third-party platforms via API, letting users design and test their end-to-end system - perception, localization, and control - all in one place.
In this tutorial, we will design an automated lane keeping system (ALKS) in Collimator. We will then simulate the car's performance in a 3D simulated environment so we can observe how the system reacts based on its position in a lane.
The goal is to see how the vehicle performs in the simulated environment to inform our controller design and test the real-time communication between Collimator and the AirSim environment.
Below is a simple schematic of the interface and closed-loop feedback between the two systems:
An overview of our model is pictured below:
To infer the position of the vehicle in its lane, we will use a pre-trained PyTorch neural network. The network is adapted from the paper PolyLaneNet: Lane Estimation via Deep Polynomial Regression by Lucas Tabelini Torres et al.
PolyLaneNet runs inference on an image and draws over it the leftmost and rightmost boundaries of the lane the vehicle is in. The version of PolyLaneNet we are using has also been modified to return a recommended steering-wheel offset for keeping the vehicle in the lane.
Note: for the purposes of this simulation, we assume a constant throttle and no brakes applied to the vehicle.
The simulation begins with a clock block responsible for timekeeping across both Collimator and AirSim. The clock is connected to a series of Python script blocks that operate our AirSim model.
The inputs and outputs of the three main blocks are listed below:
\begin{array} {|l|l|l|} \hline \text{Block} & \text{Input} & \text{Output} \\ \hline \text{AirSim\_RX} & \text{Clock} & \text{RGB image, e.g. (200, 200, 200)} \\ \hline \text{PolyLaneNet} & \text{RGB} & \text{Offset (degrees)} \\ \hline \text{App\_LaneLocalization} & \text{Steering (offset) and throttle} & \text{Vehicle position} \\ \hline \end{array}
The InitScript block sets up the neural net we will use for inference. Our AirSim_RX script block assumes a running AirSim instance; we’ll use AirSim’s API to pull images from the AirSim simulator.
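A minimal sketch of what such an init script might contain is below; the MODEL_PATH value and the device selection are assumptions for illustration, not the exact tutorial code:

```python
import torch

# Assumed location of the modified PolyLaneNet weights
MODEL_PATH = "models/polylanenet.pt"

# Run inference on the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```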
As specified in the AirSim documentation, we’ll connect to the AirSim instance and assign the speed, throttle and gear in our script.
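A sketch of the connection step, following the pattern in the AirSim documentation (the initial throttle value is an assumption):

```python
import airsim

# Connect to the already-running AirSim instance
client = airsim.CarClient()
client.confirmConnection()
client.enableApiControl(True)

# Initial vehicle controls; these are updated on every frame
car_controls = airsim.CarControls()
car_controls.throttle = 0.5          # assumed starting throttle
car_controls.is_manual_gear = False  # let AirSim manage the gear
client.setCarControls(car_controls)
```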
Next, we will grab the images from our running AirSim simulation and convert them to RGB numpy arrays for neural network inference.
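Pulling a frame might look like the following, using AirSim's simGetImages API (the camera name "0" is an assumption):

```python
import numpy as np

# Request one uncompressed scene image from the front camera
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)
])
response = responses[0]

# Reinterpret the raw byte buffer as an H x W x 3 RGB array
img1d = np.frombuffer(response.image_data_uint8, dtype=np.uint8)
img_rgb = img1d.reshape(response.height, response.width, 3)
```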
We'll begin our PolyLaneNet script block by importing the libraries we will use and setting the width and height of the simulator:
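For example (the exact frame dimensions here are assumptions):

```python
import cv2
import numpy as np
import torch

# Dimensions of the frames pulled from the AirSim camera
WIDTH, HEIGHT = 640, 360
```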
We also want to update the frame counter with each frame that we pull from AirSim:
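A sketch of the counter, assuming it persists in the block's global scope between calls (the variable name is hypothetical):

```python
# Initialize the frame counter once, then bump it on every call
if "frame_count" not in globals():
    frame_count = 0
frame_count += 1
```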
To infer images using our version of PolyLaneNet, we need to load the model from the path we specified in our init script. This only needs to happen on the first frame of the simulation.
We’ll ensure this happens only once by checking whether our model variable already exists.
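A sketch of that guard, assuming MODEL_PATH and device were defined in the init script and that the checkpoint stores the full module:

```python
# Load PolyLaneNet only on the first frame
if "model" not in globals():
    model = torch.load(MODEL_PATH, map_location=device)
    model.to(device)
    model.eval()  # inference mode: freezes dropout and batch-norm statistics
```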
Next, we will assign an image buffer that serves as the first image into the net. This is a placeholder image fed on the 0th frame.
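For instance, a black placeholder image sized to match the camera frames:

```python
# Placeholder frame fed to the network on the 0th frame
image_buffer = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
```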
Now we can begin writing code that will execute on every frame called from AirSim.
We want to be sure that the RGB values from each frame are assigned to our unscaled variable every time a new frame is loaded. We will also apply bilinear interpolation to smooth the image, making it easier to infer. Then we’ll load the image into a tensor with a leading batch dimension.
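A sketch of those steps, with variable names carried over from the earlier snippets:

```python
# Keep the latest AirSim frame in the unscaled variable
unscaled = img_rgb

# Bilinear interpolation smooths the image while resizing it
resized = cv2.resize(unscaled, (WIDTH, HEIGHT), interpolation=cv2.INTER_LINEAR)

# HWC uint8 -> CHW float in [0, 1], plus a leading batch dimension
tensor = torch.from_numpy(resized).permute(2, 0, 1).float() / 255.0
tensor = tensor.unsqueeze(0).to(device)
```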
Next, we will run inference on the tensor, passing the model and device variables that have already been set up.
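For example:

```python
# Run the forward pass without tracking gradients
with torch.no_grad():
    outputs = model(tensor)
```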
Our updated version of PolyLaneNet returns an offset, computed as the difference between the left and right lane distances divided by the lane area, and draws the lane guidelines over the image we feed it. We'll use cv2 to overlay our images with the results we got from PolyLaneNet.
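A hedged sketch of the idea; left_distance, right_distance, lane_area, and lane_points are hypothetical stand-ins for what the modified network returns, with sample values so the snippet runs:

```python
# Hypothetical per-frame outputs of the modified PolyLaneNet
left_distance, right_distance, lane_area = 120.0, 140.0, 400.0
lane_points = [(300, 350), (310, 330), (320, 310)]  # sample guideline pixels

# Offset: difference of the lane distances, normalized by the lane area
offset = (left_distance - right_distance) / lane_area

# Draw the predicted lane guidelines over the frame with cv2
for x, y in lane_points:
    cv2.circle(resized, (int(x), int(y)), 2, (0, 255, 0), -1)
```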
We will then assign our offset variable to the output variable of the PolyLaneNet script block.
The throttle we will apply to the model is going to be generated by a step block whose parameters are shown below:
This provides a constant throttle of 1 unit to the model and reduces it to 0.04 units after the fifth second.
Finally, in our last script block we will apply the Collimator-suggested throttle, steering, and brake to the AirSim model.
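Applying the commands through the AirSim API might look like this (the throttle and steering input names are assumptions for the block's inputs):

```python
# Apply the Collimator-computed commands to the AirSim vehicle
car_controls.throttle = float(throttle)
car_controls.steering = float(steering)
car_controls.brake = 0.0  # no braking in this simulation
client.setCarControls(car_controls)
```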
To cross-check simulation data between Collimator and AirSim, we will assign the x, y, and z position of the car to the output variables of the function.
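Reading the pose back through the AirSim API:

```python
# Read the vehicle pose so Collimator can cross-check AirSim's state
pose = client.simGetVehiclePose()
x = pose.position.x_val
y = pose.position.y_val
z = pose.position.z_val
```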
Full Tutorial Video: