
YU News

Researchers Using AI to Reduce Accidents in Self-Driving Cars Receive Award

Youshan Zhang, assistant professor of artificial intelligence and computer science, and Lakshmikar Polamreddy, a master’s candidate in artificial intelligence, developed a convolutional neural network model for self-driving cars. Photo by Denton Field

By Dave DeFusco

Katz School researchers received the Emerging Research Award at the Future Technologies Conference for their work on a machine learning algorithm that could reduce the number of traffic accidents involving self-driving cars.

The conference is the world's leading forum for reporting research breakthroughs in AI, computer vision, data science, computing and related fields, and attracts top research think tanks, industry technology developers and academic researchers.

In their paper, Youshan Zhang, assistant professor of artificial intelligence and computer science, and Lakshmikar Polamreddy, a master’s candidate in artificial intelligence, describe their convolutional neural network (CNN) model for self-driving cars, which addresses the limitations of previous work in this area.

The LaksNet model uses images and steering angles collected from a Udacity simulator—an open-source simulator for training and testing deep-learning algorithms for self-driving. The Udacity simulator includes a virtual representation of a car and its surroundings, allowing users to implement and test their algorithms for tasks like perception, decision-making and control, providing a safe environment for learning and experimenting with self-driving technologies.

“Our approach involved building and training end-to-end, machine-learning models using extensive sets of data, typically in the form of images collected from cameras,” said Zhang. “These models were trained to drive vehicles in a way that minimized accidents.”

CNNs, a type of artificial neural network designed specifically for image recognition and processing, were inspired by the visual processing of the human brain and are characterized by their ability to automatically learn spatial hierarchies of features from images. They have become the backbone of many computer-vision applications, including autonomous vehicles, facial recognition, image classification and medical image analysis.
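At the heart of a CNN is the convolution operation, which slides a small kernel over an image to produce a feature map. The following minimal sketch (not the researchers' code) illustrates how a single hand-picked edge-detection kernel responds to a vertical brightness boundary; a trained CNN learns many such kernels automatically, stacked in layers.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Toy grayscale "image" with a vertical edge: dark on the left, bright on the right.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A Sobel-style kernel that responds strongly where brightness changes left-to-right.
sobel_x = np.array([
    [1, 0, -1],
    [2, 0, -2],
    [1, 0, -1],
], dtype=float)

features = conv2d(image, sobel_x)
print(features)  # every position straddles the edge, so all responses are -4
```

In a real model such as LaksNet, the kernel values are not hand-picked; they are learned from the training images, and dozens of kernels per layer build up the "spatial hierarchies of features" described above.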

Zhang and Polamreddy’s CNN model, called LaksNet, used a simulated environment provided by Udacity's self-driving car nanodegree program to generate training data and assess model performance. The approach involved training the CNN model on 130,000 images and their associated steering angles generated in the Udacity simulator. After training for the required number of epochs—an epoch is one complete pass through a training dataset—the model was used to predict steering angle values, which were then passed back into the simulator.
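The overall training recipe—repeated passes (epochs) over image/steering-angle pairs, updating model weights by gradient descent—can be sketched as follows. This is a deliberately simplified stand-in, not the researchers' code: a linear model on synthetic "image" features rather than a CNN on simulator frames, with hypothetical sizes and learning rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the dataset: 200 flattened "images" (8 features each),
# with steering angles generated from a fixed linear rule plus a little noise.
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.01 * rng.normal(size=200)

w = np.zeros(8)          # model weights, initialized to zero
lr = 0.05                # learning rate
for epoch in range(200):                    # one epoch = one full pass over the data
    pred = X @ w                            # forward pass: predict steering angles
    grad = X.T @ (pred - y) / len(y)        # gradient of mean-squared error
    w -= lr * grad                          # gradient-descent update

mse = np.mean((X @ w - y) ** 2)
print(mse)
```

A CNN like LaksNet follows the same loop, but the forward pass runs image batches through convolutional layers and the gradients are computed by backpropagation.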

“Too few epochs may result in underfitting, where the model hasn’t learned the underlying patterns in the data,” said Polamreddy. “Too many epochs, on the other hand, can lead to overfitting, where the model becomes too specific to the training data and performs poorly on new, unseen data."
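The underfitting/overfitting trade-off Polamreddy describes is commonly managed by watching validation loss and stopping once it stops improving ("early stopping"). A minimal sketch, using made-up validation losses and a hypothetical `patience` setting:

```python
# Hypothetical validation losses per epoch: improving at first, then rising
# again as the model starts to overfit the training data.
val_losses = [0.90, 0.55, 0.40, 0.33, 0.31, 0.30, 0.32, 0.35, 0.41, 0.50]

patience = 2              # stop after this many epochs without improvement
best_loss = float("inf")
best_epoch = 0
bad_epochs = 0
for epoch, loss in enumerate(val_losses):
    if loss < best_loss:
        best_loss, best_epoch, bad_epochs = loss, epoch, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break          # validation loss has risen for `patience` straight epochs

print(best_epoch, best_loss)   # the epoch whose weights you would keep
```

In practice the weights saved at `best_epoch` are restored, so the deployed model is the one that generalized best rather than the one that memorized the training set.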

In developing LaksNet, Zhang and Polamreddy first evaluated the performance of a model developed by the company NVIDIA, which is designed to predict steering angles directly from raw pixels in a camera feed. The researchers then explored whether pre-trained ImageNet models, which are often used for various computer vision tasks, including object recognition and detection in the context of self-driving cars, could outperform the NVIDIA model.

During the testing stage, they monitored the steering angles on the terminal and observed the car’s movements in the simulator window in real time. The simulator offered two different tracks for training and testing, and images were captured using three different cameras positioned at various angles. They also captured the car’s view, including the track, the car itself and the environment outside the track.

Since pre-trained models did not meet expectations set by the NVIDIA model, the researchers embarked on building their own CNN models tailored to the task. One of the custom CNN models outperformed the pre-trained models and the NVIDIA model by allowing the car to drive on the track autonomously for 150 seconds.

“We developed a novel model with two main objectives: achieving state-of-the-art performance and using fewer parameters for training,” said Zhang. “Our model is more efficient and effective for reducing accidents in autonomous driving.”
