A lane keeping assist system is one of the most common advanced driver assistance systems (ADAS) available today. It’s responsible for ensuring that a driver does not unintentionally drift out of their lane.
Traditional lane keeping assist systems use a camera or other sensors to identify where the car is relative to the lane markings. When the vehicle drifts, the system either alerts the driver or uses steering correction and selective braking to gently nudge the vehicle back into the lane.
ADAS features such as lane keeping are gaining popularity with consumers. A 2019 Consumer Reports survey found that 74% of consumers were "very satisfied" with lane keeping systems. In addition, an Insurance Institute for Highway Safety analysis of crash statistics from 2009 to 2015 suggested that lane keeping systems reduced relevant injury crashes by a whopping 21%.
Lane keeping assist systems can be classified into two categories: closed-loop and open-loop.
In both open-loop and closed-loop systems, the controller operates the vehicle under test. The difference is that closed-loop systems incorporate feedback: they take their output back in as an input and adjust the system so that it responds according to the desired behavior.
Closed-loop systems feature a feedback loop between the motor and the controller, allowing the controller to correct the motor based on its past output in order to achieve the desired output. As a result, closed-loop systems are also called feedback control systems.
Closed-loop control generally allows the controller to track the desired output more accurately because it can react immediately to changes within the system. Cruise control and advanced driver assistance systems (ADAS) are popular examples of closed-loop systems.
Open-loop systems are usually cheaper and easier to implement than closed-loop systems. Because open-loop systems act on input alone, they are a great way to test initial controller functionality. The data from open-loop simulation can then be fine-tuned and optimized through closed-loop simulations. Some open-loop examples include timer-based systems such as toasters and TV remote controls.
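To make the distinction concrete, here is a minimal Python sketch of the two approaches applied to a hypothetical heading-hold task. It is our own illustration, not taken from any particular product; the function names and gain are assumptions.

```python
def open_loop_step(command_angle):
    # Open loop: the actuator command is derived from the input alone;
    # the actual heading is never measured, so errors go uncorrected.
    return command_angle

def closed_loop_step(command_angle, measured_angle, kp=0.5):
    # Closed loop: the measured output is fed back and compared with the
    # desired output; the controller acts on the error (here, a simple
    # proportional correction).
    error = command_angle - measured_angle
    return kp * error

# With feedback, a disturbance (e.g. road camber) that pushes the measured
# heading off target produces a corrective command; without feedback, the
# same command is issued regardless of the actual heading.
print(open_loop_step(2.0))         # always 2.0
print(closed_loop_step(2.0, 3.5))  # -0.75, steering back toward the target
```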
A modern lane keeping assist system (LKAS) will work to keep a vehicle in lane by gently steering the car back to the middle of the lane if it begins to drift out.
Simulations of open-loop lane keeping assist systems are useful for testing the performance of the model or controller. They are usually conducted with the driver in the loop (DIL) and are a necessary step in verification and validation.
It is worth noting that many lane keeping assist systems use hybrid open/closed-loop control schemes to reach the desired semi-autonomous or autonomous levels of driving. In this scheme, an open-loop control system is in effect at all times, comparing the driver's steering to the controller's steering. When the error hits a certain threshold, the traditional closed-loop controller is employed to correct the vehicle's actual steering.
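Conceptually, the handoff might look like the sketch below. The threshold value and all names are hypothetical, chosen only to illustrate the scheme described above.

```python
ERROR_THRESHOLD_DEG = 3.0  # assumed margin before the controller intervenes

def lkas_step(driver_angle_deg, controller_angle_deg):
    # Open-loop monitoring runs continuously: compare the driver's steering
    # with what the controller would do.
    error = driver_angle_deg - controller_angle_deg
    if abs(error) > ERROR_THRESHOLD_DEG:
        # Closed-loop takeover: the controller's command corrects the
        # vehicle's actual steering until the error falls back in range.
        return controller_angle_deg
    return driver_angle_deg
```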
For autonomous vehicles and computer controlled systems to be adopted into mainstream life, they must undergo rigorous verification and validation testing to establish a proven track record of reliability.
The standards of verification and validation vary by country and state. However, these systems must prove that they can operate in the real world without human intelligence to fall back on, which requires large amounts of simulation using both real-world and synthetic data.
Driver in the loop (DIL) testing is a method of comparing a controller's decision making with a human's. Its goal is to maximize the statistical safety of the system before it reaches the road. The first step is to prove that computer controlled driving is better than a human driver.
In the following tutorial, we will create an open-loop simulation in Collimator. It will take video input of a person driving a car and run it through a controller so that we can compare the driver's performance with the controller's. The point of our open-loop driver-in-the-loop simulation is to study just how effective the controller is, compared to a human, at steering a vehicle.
An overview of the model is pictured below:
The simulation begins with video input of a human driver. This video is passed to the perception app, which determines the different aspects of the road, including lanes, obstacles, etc. In this example, the output returns the left and right lane parameters, which feed into the lane localization app.
The lane localization app computes the car's distance from the center of the lane. The steering controller app returns the controller's suggested steering wheel correction. The driver submodel takes feedback from the perception app to determine what the driver did and returns the driver's actual steering commands. The final adder block computes the difference between the two.
The inputs, outputs and parameters of each submodel are listed below:
\begin{array} {|l|l|l|} \hline \text{Block} & \text{Input} & \text{Output} \\ \hline \text{Video} & -- & \text{RGB values, e.g. } (200, 200, 200) \\ \hline \text{Model\_perception} & \text{RGB values} & \text{Left and right corners of the lane} \\ \hline \text{App\_LaneLocalization} & \text{Left and right lane positions} & \text{Vehicle distance to lane center} \\ \hline \text{App\_steeringController} & \text{Vehicle distance to lane center} & \text{Suggested steering angle (degrees)} \\ \hline \text{Model\_Driver} & \text{Vehicle distance to lane center} & \text{Steering angle (degrees)} \\ \hline \text{err\_controller\_vs\_driver} & \text{Driver and controller steering angles} & \text{Driver angle} - \text{controller angle} \\ \hline \end{array}
The perception submodel takes in three parameters corresponding to the RGB values from the video and connects them to a Python script block. The block processes the values using a pretrained PyTorch neural net to determine the lane of the vehicle.
We will import our PyTorch model for inference, which we have stored in our single_test Python file.
Then we will convert each video frame into a tensor so that our model can read and infer from the images.
We will then call our PyTorch model and run inference on our tensor of RGB values.
Our Model_outputs variable contains the predictions of which lane is which and when our vehicle is in each lane.
We will store these values in their respective variables and flush the buffer to ensure predictions are released while the script is running.
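Putting those steps together, the script block's logic might look like the sketch below. The single_test import and its load_model helper are assumptions standing in for the stored model file, and Collimator's actual script-block boilerplate may differ.

```python
import numpy as np
import torch

from single_test import load_model  # hypothetical helper returning the net

model = load_model()
model.eval()

def infer_lanes(r, g, b):
    # Stack the RGB planes from the video block into one frame and convert
    # it to a float tensor with a batch dimension, as the net expects.
    frame = np.stack([r, g, b], axis=0).astype(np.float32) / 255.0
    frame_t = torch.from_numpy(frame).unsqueeze(0)  # shape (1, 3, H, W)

    with torch.no_grad():  # inference only, no gradients needed
        model_outputs = model(frame_t)

    # Move the per-lane predictions off the graph and into plain arrays so
    # the block outputs don't hold onto PyTorch's internal buffers.
    lane_preds = model_outputs.squeeze(0).cpu().numpy()
    return lane_preds
```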
Now that we have our outputs, we can connect the variables to our script block properties.
We see that the maximum number of lanes we're predicting is five, so we create the corresponding mux blocks for them. Mux blocks allow us to split the lane information into its left and right edges, as shown below.
Now that we have the left and right edges of the lane, we compute the distance to the center by halving the left and right values and summing them, i.e. taking the midpoint of the two edges.
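In code, this midpoint computation reduces to a one-liner. The sketch below is our own illustration and assumes the edge values are lateral offsets measured from the vehicle:

```python
def distance_to_center(left_edge, right_edge):
    # Halve each edge and sum: (left + right) / 2 is the lane center, and
    # because the offsets are measured from the vehicle, it is also the
    # vehicle's signed distance to that center.
    return 0.5 * left_edge + 0.5 * right_edge
```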
For the App_SteeringController, we used a PID controller to determine the corrections required based on the position of the vehicle. We begin by converting our distance to meters and our speed from kilometers per hour to meters per second. We then take the derivative of the distance to the center to compute the lateral velocity, relative to the center of the lane, caused by steering wheel correction.
We take the arctangent of the ratio of this lateral velocity to the vehicle's forward speed to represent it as a tilt of the steering wheel. We then add a discrete PID controller block to provide corrections. This correction is converted to degrees and becomes the controller's steering angle recommendation.
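A self-contained sketch of this controller logic is shown below. The gains, time step, and function names are illustrative assumptions, not the tutorial's exact values:

```python
import math

KP, KI, KD = 1.0, 0.1, 0.05  # assumed PID gains
DT = 0.05                    # assumed frame period in seconds

integral, prev_error, prev_dist_m = 0.0, 0.0, 0.0

def steering_suggestion(dist_to_center_m, speed_kmh):
    global integral, prev_error, prev_dist_m
    speed_ms = speed_kmh / 3.6  # km/h -> m/s

    # Derivative of the distance to center: lateral velocity of the vehicle.
    lateral_vel = (dist_to_center_m - prev_dist_m) / DT
    prev_dist_m = dist_to_center_m

    # The ratio of lateral to forward velocity is the tangent of the
    # heading error, so the arctangent recovers the angle itself.
    heading_err = math.atan2(lateral_vel, speed_ms)

    # Discrete PID on the heading error.
    integral += heading_err * DT
    derivative = (heading_err - prev_error) / DT
    prev_error = heading_err
    correction = KP * heading_err + KI * integral + KD * derivative

    return math.degrees(correction)  # radians -> degrees
```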
Much like our App_SteeringController block, our Model_Driver block takes the derivative of the vehicle's position with respect to the center of the lane. It divides that by the car's forward velocity and applies an arctangent to recover the degrees of steering applied by the human driver.
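For comparison, the driver model is the same geometry without the PID stage. A minimal sketch, with hypothetical names:

```python
import math

def driver_steering_deg(lateral_vel_ms, speed_ms):
    # Lateral over forward velocity is tan(angle); invert to get the
    # steering angle the human actually applied, in degrees.
    return math.degrees(math.atan2(lateral_vel_ms, speed_ms))
```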
Our err_controller_vs_driver adder block compares the steering wheel corrections of the driver and the controller. There is an initial sharp overcorrection from both the driver and the controller. The controller overcompensates more in the beginning stages, while the PID controller accumulates data. However, after a few seconds, the steering of the controller and the human driver is virtually the same.
Let's zoom in to the time after the initial turn to get a closer look.
After the initial few seconds, the controller performs better. The driver overcorrects their steering every couple of seconds to stay at the center of the lane. This suggests that even this simple controller would provide a smoother driving experience than the human driver.