The new Advanced Driver Assistance Systems (ADAS) being rolled out in higher-end automobiles are among the public's first direct interactions with Artificial Intelligence (AI). Science fiction movies have led the public to picture AI as some futuristic robot, so most people don't realize that the systems within ADAS already use AI. These systems include:
- Adaptive Cruise Control: Maintains a safe distance from other cars by automatically adjusting the vehicle’s cruise control (see the sketch after this list)
- Automotive Night Vision: Increases a driver’s vision at night or in poor weather using a thermographic camera
- Traffic Sign Recognition: Enables a vehicle to recognize traffic signs such as speed limit, school zone, or crosswalk
- Lane Departure Sensor: Warns a driver when the vehicle begins to move out of its lane without signaling
- Parking Assistance: Assists drivers with parking the vehicle
- Backup Cameras: Aids backing up and alleviates the rear blind spot
- Collision Avoidance: Alerts drivers to potential collisions, helping to reduce the severity of accidents
- Automatic Electronic Braking: Automatically varies the force applied to a vehicle’s wheels based on road conditions, speed, loading, etc.
- Smart Headlights: Automatically tailors headlamp range, helping to ensure maximum visibility without impacting other drivers
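To make the logic behind one of these features concrete, here is a minimal sketch of the kind of control loop an adaptive cruise control system might run. The function name, gain, and 2-second time gap are assumptions for illustration, not any vendor's implementation.

```python
# Illustrative sketch (not production code): a simple proportional rule that
# adjusts speed to hold a time gap behind a lead vehicle, in the spirit of
# adaptive cruise control. All constants and names are hypothetical.

def acc_speed_command(own_speed_mps: float,
                      gap_m: float,
                      set_speed_mps: float,
                      time_gap_s: float = 2.0,
                      kp: float = 0.5) -> float:
    """Return a commanded speed (m/s) that closes the gap error."""
    desired_gap_m = own_speed_mps * time_gap_s    # distance we want to keep
    gap_error_m = gap_m - desired_gap_m           # positive = more room than needed
    command = own_speed_mps + kp * gap_error_m    # proportional correction
    return max(0.0, min(command, set_speed_mps))  # never exceed the driver's set speed

if __name__ == "__main__":
    # Following 30 m behind a lead car at 20 m/s with a 25 m/s set speed.
    print(acc_speed_command(own_speed_mps=20.0, gap_m=30.0, set_speed_mps=25.0))
```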
Figure 1. Typical Autonomous Vehicle System
Any autonomous vehicle system can be broken into four main functional elements: Sense, Perceive, Plan, and Control. The hardware and software complexity of each element varies with the level of autonomy the system provides.
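As a rough illustration of how those four elements can fit together in software, the sketch below wires hypothetical sense, perceive, plan, and control stages into one loop. All of the types and field names are assumptions made for this example, not a real stack.

```python
# A minimal structural sketch of the Sense -> Perceive -> Plan -> Control loop.
from dataclasses import dataclass

@dataclass
class SensorFrame:           # raw measurements from one cycle (Sense)
    lidar_points: list[tuple[float, float, float]]
    camera_images: list[bytes]

@dataclass
class WorldModel:            # objects and ego pose extracted from the frame (Perceive)
    obstacles: list[tuple[float, float]]
    ego_pose: tuple[float, float, float]

@dataclass
class Trajectory:            # the path the vehicle intends to follow (Plan)
    waypoints: list[tuple[float, float]]

@dataclass
class Actuation:             # commands sent to steering, throttle, brake (Control)
    steering_rad: float
    throttle: float
    brake: float

def autonomy_cycle(sense, perceive, plan, control) -> Actuation:
    """Run one pass through the four functional elements."""
    frame: SensorFrame = sense()
    world: WorldModel = perceive(frame)
    path: Trajectory = plan(world)
    return control(path, world)
```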
A vehicle operating with autonomous features must be able to sense and perceive physical aspects of the driving environment in order to make control decisions. Sensors employed in an automobile can include LIDAR, cameras, radar, ultrasonic sensors, and GPS. Sensors for low-level autonomy typical of current production vehicles include radar for adaptive cruise control, brake assist, and collision avoidance, and cameras for lane departure warning, parking assist, and backup. Higher levels of autonomy will require LIDAR to build a 360-degree, 3D picture of the driving environment for object detection and classification, along with a greater number of high-definition cameras. Increasing the number and complexity of the sensor arrays also increases the complexity of the perception algorithms and the compute power needed to execute them.
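One practical consequence of a larger sensor array is that measurements arriving at different rates must be associated before perception can fuse them. The sketch below pairs radar and camera samples by nearest timestamp; the rates, data shapes, and tolerance are assumptions, and a production system would rely on calibrated, hardware-synchronized sensors.

```python
# Sketch, under assumed data shapes: pairing radar and camera samples by
# nearest capture time before handing them to perception.
from bisect import bisect_left

def nearest(timestamps: list[float], t: float) -> float:
    """Return the timestamp in the sorted list closest to t."""
    i = bisect_left(timestamps, t)
    candidates = timestamps[max(i - 1, 0): i + 1]
    return min(candidates, key=lambda c: abs(c - t))

def align(radar_ts: list[float], camera_ts: list[float], tol_s: float = 0.05):
    """Yield (radar_t, camera_t) pairs whose capture times are within tol_s."""
    camera_ts = sorted(camera_ts)
    for rt in radar_ts:
        ct = nearest(camera_ts, rt)
        if abs(ct - rt) <= tol_s:
            yield rt, ct

# Example: 20 Hz radar samples matched against a 30 Hz camera.
print(list(align([0.00, 0.05, 0.10], [0.000, 0.033, 0.066, 0.100])))
```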
Perception is the autonomous system’s ability to collect data and extract relevant information from its environment. Environmental perception involves applying context to that environment, e.g. object locations, road sign detection and marking, drivable areas, velocities, and prediction of an object’s future state. As an example, LIDAR can be used to create a dynamic 3D map of the environment. Raw point-cloud data from the sensor passes through two algorithmic steps: segmentation and classification. Edge-based, attribute-based, region-based, model-based, and graph-based segmentation algorithms group the points into homogeneous clusters. These clusters can then be classified as a bike, pedestrian, road sign, building, school bus, etc. Detection algorithms use the automobile’s vision system to identify simpler features such as lane markings and the road surface. The other component of perception is localization: for an autonomous system to react safely to its environment, it must know the vehicle’s position and orientation. This is again a complex problem, one that typically requires fusing multiple sensors, which may include GPS and inertial navigation hardware.
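To make the segmentation step more tangible, here is a toy Euclidean clustering pass over a 2D point set, grouping nearby returns the way a LIDAR segmentation stage groups points before classification. The distance threshold and sample data are arbitrary assumptions; real pipelines operate on dense 3D point clouds with spatial indexing.

```python
# Toy Euclidean clustering: grow a cluster from a seed point by repeatedly
# absorbing unassigned points within max_gap of any point already in it.
import math

def euclidean_clusters(points, max_gap=1.0):
    """Group points whose nearest in-cluster neighbor is <= max_gap away."""
    clusters = []
    unassigned = list(points)
    while unassigned:
        seed = unassigned.pop()
        cluster = [seed]
        frontier = [seed]
        while frontier:
            p = frontier.pop()
            near = [q for q in unassigned if math.dist(p, q) <= max_gap]
            for q in near:
                unassigned.remove(q)
                cluster.append(q)
                frontier.append(q)
        clusters.append(cluster)
    return clusters

# Two well-separated groups of returns become two clusters.
scan = [(0.0, 0.0), (0.4, 0.1), (0.8, 0.2), (10.0, 5.0), (10.3, 5.2)]
print(euclidean_clusters(scan))
```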
The planning subsystem is responsible for compiling information from the perception engine, weighing mission, behavior, and motion inputs, and making decisions. The planning framework must be robust enough to handle a wide range of urban driving scenarios. The mission planner typically handles high-level objectives related to the route, e.g. road selection, pickup/dropoff tasks, and schedule. The behavior planner makes real-time decisions to ensure proper interaction with other objects and compliance with the rules of the road; examples of its output are commands to change lanes, overtake a vehicle, or proceed through an intersection. The motion planner is responsible for generating paths and actions that meet local objectives, typically reaching a location while avoiding collisions with obstacles. Multi-dimensional motion planning carries a high level of computational complexity.
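A deliberately simplified sketch of a behavior planner's decision logic is shown below: a rule-based choice among keep-lane, follow, and change-lane maneuvers. The thresholds and field names are assumptions for illustration; production planners weigh far more factors and hand their output to a separate motion planner.

```python
# Simplified behavior-planner sketch: pick a high-level maneuver from
# perceived state. All thresholds and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class PerceivedState:
    lead_gap_m: float          # distance to the vehicle ahead in our lane
    lead_speed_mps: float      # its speed
    ego_speed_mps: float       # our speed
    adjacent_lane_clear: bool  # is a lane change safe?

def behavior(state: PerceivedState) -> str:
    """Return one of: 'keep_lane', 'follow', 'change_lane'."""
    closing = state.ego_speed_mps > state.lead_speed_mps
    too_close = state.lead_gap_m < 2.0 * state.ego_speed_mps  # roughly a 2 s gap
    if too_close and closing and state.adjacent_lane_clear:
        return "change_lane"   # overtake the slower vehicle
    if too_close:
        return "follow"        # match the lead vehicle's speed
    return "keep_lane"

print(behavior(PerceivedState(lead_gap_m=35.0, lead_speed_mps=18.0,
                              ego_speed_mps=25.0, adjacent_lane_clear=True)))
```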
The control block brings everything together to carry out the autonomous system’s intent, providing the inputs to the hardware that generates the desired motion. One example control structure in an autonomous system is feedback control, in which the measured system response is used to actively compensate for deviations from the desired behavior model. Another is model predictive control, in which a system model is used to predict and optimize behavior over a short time horizon. Systems may employ one of these control methods, or a combination of them, to achieve their functional goals.
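As a concrete instance of the feedback-control structure described above, the sketch below implements a textbook PID controller that drives a measured quantity, such as vehicle speed, toward a setpoint. The gains are placeholder values, and model predictive control would replace this loop with a short-horizon optimization, which is not shown.

```python
# Textbook PID feedback controller: compensates for the deviation between
# a setpoint and the measured response. Gains are placeholder values.
class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float, dt: float) -> float:
        """Return an actuator command that drives measurement toward setpoint."""
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: nudging vehicle speed from 18 m/s toward a 20 m/s target.
controller = PID(kp=0.8, ki=0.1, kd=0.05)
print(controller.update(setpoint=20.0, measurement=18.0, dt=0.1))
```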
Things that were seen only in science fiction movies a decade ago are now becoming part of everyday life. As the level of autonomy in vehicles increases toward the point of requiring no human interaction, the demands on hardware and software will call for swift innovation to keep pace. Memory will continue to play a huge role in the capabilities and performance of these systems. These are exciting times as we prepare for the next big technology boom.
Reference: Pendleton, S. D., et al., “Perception, Planning, Control, and Coordination for Autonomous Vehicles,” Machines (MDPI), January 2017.