RACECAR Neo Papers – Volume 1
A collection of seven technical papers on solving problems within the autonomous vehicle industry. The papers focus on detecting environmental anomalies, mitigating risk, and improving autonomous vehicle safety.
All student research was completed within a three-week span during the summer of 2024 and has been reviewed by student peers and industry professionals.

Leading Car Pose Estimation for Autonomously Yielding to Emergency Vehicles
A. Rao, C. Liu, N. Chaudhuri, X. Ren, K. Hong, M. Viswanath, C. Lai
Abstract: As autonomous vehicles (AVs) become more prevalent, ensuring they can effectively detect and yield to emergency vehicles is critical for improving road safety and operational efficiency. This study explores the use of fiducial markers, specifically ArUco markers, for accurate pose estimation in AVs. By integrating camera calibration with marker detection, we aim to enhance the vehicle’s ability to recognize and respond to emergency vehicles. Our approach focuses on detecting the movements of a leading vehicle, using pose estimation to trigger a sequence of predefined instructions that enable the AV to yield appropriately in real-time situations.
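The core idea above, estimating a leading car's pose from a fiducial of known size, can be illustrated with a minimal pinhole-camera sketch. This is not the paper's implementation (which integrates full ArUco detection with camera calibration); the function name, focal length, and marker size below are illustrative assumptions, and it recovers only range and bearing rather than a full 6-DoF pose.

```python
import numpy as np

def estimate_marker_pose(corners_px, marker_size_m, fx, cx):
    """Rough range/bearing of a square fiducial from its detected corners.

    corners_px: 4x2 pixel coordinates (TL, TR, BR, BL), as an ArUco
    detector would return them. fx, cx: focal length and principal point
    in pixels, from camera calibration. Returns (distance_m, bearing_rad).
    """
    corners_px = np.asarray(corners_px, dtype=float)
    # Apparent side length in pixels: average of the four edge lengths.
    edges = np.roll(corners_px, -1, axis=0) - corners_px
    side_px = np.mean(np.linalg.norm(edges, axis=1))
    # Pinhole model: side_px = fx * size_m / Z  =>  Z = fx * size_m / side_px
    distance = fx * marker_size_m / side_px
    # Bearing from the horizontal offset of the marker centre.
    u_center = corners_px[:, 0].mean()
    bearing = np.arctan((u_center - cx) / fx)
    return distance, bearing
```

A full implementation would instead pass the corners and calibration matrix to a PnP solver to recover rotation as well, which is what makes it possible to tell that the leading vehicle is pulling over rather than merely slowing down.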

Analysis of Glare Reduction in Traffic Light Detection for Visual-Based Autonomous Systems
H. Nanthatkumar, R. Jiang, S. Hart, S. Yu, K. Bandyopadhyay, C. Lai
Abstract: Solar glare poses a significant challenge in both manual and autonomous driving, particularly during sunrise or sunset when the sun is low on the horizon. This glare impairs autonomous vehicles’ cameras, hindering their performance and accuracy. Such impairment can have fatal consequences for autonomous vehicles, which rely on sensor data for environment perception, especially at intersections. Thus, we use the RACECAR Neo platform and develop computer vision algorithms to enhance the performance of autonomous vehicles under bright and sunny conditions. This includes High Dynamic Range (HDR) imaging to improve the color, resolution, and clarity of camera feedback and a machine learning model to correctly determine the stoplight’s color. We evaluate our research by comparing the accuracy of the machine learning model after testing with unfiltered versus filtered images. Our experimental findings validate the efficacy of our research, as the system recognizes color in the filtered images significantly better than in the unfiltered ones.
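The final step of the pipeline above, deciding the stoplight's color, is done in the paper with a learned model. As a hedged illustration of what that decision step must accomplish, the sketch below is a simple non-learned color vote over an already-cropped lamp region; the 1.5 channel-ratio and 0.6 brightness cutoff are illustrative assumptions, and glare filtering (e.g. HDR fusion) would run before this step.

```python
import numpy as np

def classify_light(rgb_patch):
    """Classify a cropped traffic-light patch as 'red', 'yellow' or 'green'.

    rgb_patch: HxWx3 uint8 array. A brightest-hue vote, not the paper's
    machine learning model.
    """
    patch = rgb_patch.astype(float)
    # Keep only bright pixels so the dark housing doesn't dilute the vote.
    brightness = patch.sum(axis=2)
    mask = brightness > 0.6 * brightness.max()
    r, g, b = (patch[mask][:, i].mean() for i in range(3))
    if r > 1.5 * g:
        return "red"
    if g > 1.5 * r:
        return "green"
    return "yellow"  # red and green channels comparable -> amber
```

Glare is exactly what breaks such simple rules: a washed-out lamp saturates all three channels, which is why the paper filters the image first and lets a trained model make the call.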

Binary Classification of Track Conditions through the Analysis of Inertial Properties
H. Cao, M. Martono, S. Wang, S. Hosokawa, A. Qattan, C. Lai
Abstract: The parameter-based nature of traditional control systems makes it difficult for vehicles to adapt to different terrains. This challenge is especially evident when driving systems encounter a loss of traction, which can lead to uncontrolled motions such as sliding and drifting. Traditional autonomous driving systems typically operate on controllers with static parameters, such as Proportional–Integral–Derivative (PID) and Iterative Learning Control (ILC). While these methods, rooted in control theory—a field focused on understanding how autonomous systems function—are effective in some scenarios, they struggle to adapt to varying terrains. This paper introduces a neural network that utilizes inertial measurements and driver inputs to classify terrains with reduced traction, which will enable autonomous vehicles to react to such hazardous conditions. The network demonstrates high accuracy on the gathered data for both normal and reduced traction conditions and achieves similar accuracy in real-life test conditions during the validation phase. Additionally, the model’s performance data indicates that it operates with minimal computational overhead, ensuring it does not slow down or interfere with other critical programs running on the vehicle’s CPU. This efficiency makes the model particularly well-suited for real-time applications in autonomous driving, where it is crucial to maintain high processing speed and reliability.
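A hedged illustration of why pairing inertial measurements with driver inputs is informative: on a high-traction surface, the yaw rate a kinematic bicycle model predicts from the steering command tracks the IMU's measured yaw rate, while on a slick surface the tires saturate and the gap grows. The sketch below computes that mismatch as a feature; the wheelbase, threshold, and function names are illustrative assumptions, not the paper's network, which learns the classification from data.

```python
import numpy as np

WHEELBASE = 0.3  # metres; illustrative value for a small-scale car

def slip_score(speed, steering_angle, yaw_rate_meas):
    """Mean absolute gap between model-predicted and measured yaw rate.

    Kinematic bicycle model: yaw_rate = v * tan(delta) / L. Arrays are
    per-timestep samples over a short window.
    """
    speed = np.asarray(speed, float)
    steering_angle = np.asarray(steering_angle, float)
    yaw_rate_meas = np.asarray(yaw_rate_meas, float)
    yaw_rate_pred = speed * np.tan(steering_angle) / WHEELBASE
    return float(np.mean(np.abs(yaw_rate_pred - yaw_rate_meas)))

def has_reduced_traction(speed, steering, yaw_meas, threshold=0.5):
    """Binary label analogous to the classifier's output."""
    return slip_score(speed, steering, yaw_meas) > threshold
```

A learned model can exploit subtler temporal patterns than this single hand-built feature, which is what lets it generalize across terrains rather than relying on one tuned threshold.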

Enhancing Autonomous Vehicle Safety with Preemptive Broadcasting Systems at Intersections
R. Parikh, J. Lin, S. Zhao, H. Sabbella, C. Kim, E. Hen, C. Lai
Abstract: Navigating intersections often requires human drivers to use hand gestures or turn signals, but limited visibility and confusion over right-of-way lead to significant hesitation and human error, contributing to around 61.2% of fatal crashes [1]. While human drivers have few better options, autonomous vehicles possess capabilities that can be leveraged, one of which is a communication system that allows vehicles to broadcast their intentions to each other. Most current systems use real-time reactive motion planning to avoid accidents, but the high crash rates of autonomous vehicles show that such systems are often not enough. This research explores the development and implementation of a broadcasting algorithm for autonomous vehicles, allowing them to communicate their intended actions in advance to prevent collisions. The algorithm was tested in two ways: through communication between two Rapid Autonomous Complex-Environment Competing Ackermann-steering Robots (RACECAR Neos), and by comparing the effectiveness of preemptive broadcasting systems against reactive motion planning at various four-way stops in a simulator. The RACECAR Neos are small-scale autonomous vehicles equipped with sensors and computing power, designed for research and education in autonomous driving. The goal of this research is to demonstrate that early signaling with a communication protocol can significantly enhance road safety, reduce collisions, and create a more predictable driving environment.
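One plausible shape for such a preemptive broadcast is a small self-describing message sent over local UDP broadcast. The sketch below is an assumption for illustration, not the paper's wire format: the field names, port number, and function names are all invented here.

```python
import json
import socket
import time

INTENT_PORT = 5005  # illustrative port, not from the paper

def make_intent(vehicle_id, action, eta_s):
    """Serialize an intent message: what the car will do and when.

    action might be 'straight', 'left', 'right' or 'stop'; eta_s is the
    estimated number of seconds until the manoeuvre begins.
    """
    return json.dumps({
        "id": vehicle_id,
        "action": action,
        "eta_s": eta_s,
        "stamp": time.time(),
    }).encode()

def parse_intent(payload):
    """Decode a received intent message back into a dict."""
    return json.loads(payload.decode())

def broadcast_intent(payload, port=INTENT_PORT):
    """Fire-and-forget UDP broadcast on the local network segment."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ("<broadcast>", port))
```

The key design point is that the message describes a *future* action with a lead time, which is what distinguishes preemptive broadcasting from the reactive planning it is compared against.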

Software-based Approach for Detecting Tire Blowouts through Inertial Measurement Anomalies in Autonomous Ground Systems
A. Goyal, R. Petkar, S. Ayyagari, S. Cai, V. Seth, Y. Zhu, C. Lai
Abstract: A prominent issue among ground vehicles is tire blowout or other catastrophic tire failure, which occurs when at least one of the vehicle’s tires rapidly deflates or detaches completely. This can cause a driver to lose control of the vehicle’s steering, which can lead to dangerous, potentially fatal accidents. These incidents can become even more perilous when tire failures occur on vehicles without a human driver behind the wheel, specifically in autonomous vehicles. Therefore, we propose a method for detecting tire failures by analyzing data from the inertial measurement unit (IMU) for anomalies in accelerometer and gyroscope values, which signify such incidents. To test our method’s accuracy, we simulate tire failure in highway conditions by physically detaching a wheel on our RACECAR, a 1/14-scale model autonomous car, while it performs line following on a curved path. We develop and compare a threshold-based algorithm, a Z-score-based algorithm, and a convolutional neural network (CNN) to predict when blowouts occur during our trials, the latter two of which validate the use of IMUs in the detection of tire failures.
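Of the three detectors compared, the Z-score approach is the simplest to sketch: score each new IMU sample against the mean and standard deviation of a preceding sliding window, and flag it when the score jumps. The window length and threshold below are illustrative defaults, not the paper's tuned values.

```python
import numpy as np

def zscore_anomalies(signal, window=50, z_thresh=4.0):
    """Flag samples whose Z-score vs. the preceding window exceeds z_thresh.

    signal: a 1-D IMU channel (e.g. lateral acceleration or yaw rate).
    Returns the indices of flagged samples. A blowout produces a sudden
    excursion far outside the recent distribution, so its Z-score spikes.
    """
    signal = np.asarray(signal, float)
    flagged = []
    for i in range(window, len(signal)):
        ref = signal[i - window:i]
        std = ref.std()
        if std == 0:  # flat window: Z-score undefined, skip
            continue
        z = (signal[i] - ref.mean()) / std
        if abs(z) > z_thresh:
            flagged.append(i)
    return flagged
```

Unlike a fixed threshold, the Z-score adapts to whatever vibration level the current surface produces, which is why it (and the CNN) generalized better in the trials than the static threshold baseline.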

Enhancing Autonomous Vehicle Navigation in Fog Through Localization with Deteriorated LiDAR Data
A. Zheng, B. Wei, J. Wang, K. Ashar, K. Yu, C. Lai
Abstract: Autonomous vehicles (AVs) have grown in popularity over the past decade, with most able to navigate without human interaction for prolonged periods under ordinary road conditions. However, when faced with non-ideal road conditions such as dense fog, autonomous cars struggle to detect their surroundings and make safe decisions. The LiDAR sensor AVs rely on, which measures distances around the vehicle by timing how long emitted laser pulses take to bounce back from the environment, loses accuracy in proportion to fog density. As fog density increases, the data returned by the LiDAR becomes segmented, with gaps where data is lost. Through extensive testing with a remote-controlled car with autonomous capabilities placed in a simulated foggy environment, we determined an effective method to localize with fog-affected data. Our paper proposes a method of localization using the Iterative Closest Point (ICP) algorithm, which maps the deteriorated LiDAR data onto a pre-existing LiDAR reference map to obtain the vehicle’s location.
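The alignment step named above can be sketched in a few lines: point-to-point ICP alternates between matching each scan point to its nearest reference-map point and solving for the best rigid transform between the matched pairs (the Kabsch/SVD step). This is a minimal 2-D sketch, not the paper's implementation; a real system would use a k-d tree for the nearest-neighbour search and reject outlier matches, both omitted here for brevity.

```python
import numpy as np

def icp_2d(scan, ref, iters=20):
    """Align a (possibly gappy) 2-D scan to a reference point cloud.

    Returns (R, t) such that scan @ R.T + t approximates its matching
    points in ref. Brute-force nearest neighbours, point-to-point cost.
    """
    scan = np.asarray(scan, float)
    ref = np.asarray(ref, float)
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = scan @ R.T + t
        # Nearest reference point for every scan point.
        d2 = ((moved[:, None, :] - ref[None, :, :]) ** 2).sum(axis=2)
        matched = ref[d2.argmin(axis=1)]
        # Best-fit rigid transform between matched pairs (Kabsch/SVD).
        mu_s, mu_r = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_r)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard reflections
        dR = Vt.T @ D @ U.T
        R = dR @ R
        t = dR @ t + (mu_r - dR @ mu_s)
    return R, t
```

Because the cost is summed only over points the scan actually contains, the fog-induced gaps degrade the estimate gracefully rather than breaking it, which is what makes ICP a natural fit for deteriorated LiDAR data.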

Mitigating AV Sideswipe Incidents Through the Implementation of a Deterministic Vehicle Detection Algorithm
J. McGivern, A. Cheng, A. Zhu, G. Focia, C. Lai
Abstract: Driver error significantly contributes to road accidents, with over three million crashes annually, including nearly fifty thousand fatalities, as reported by the US National Highway Traffic Safety Administration in 2019. Among the most common accidents involving autonomous vehicles (AVs) are sideswipes, typically caused by human drivers making unsafe or sudden lane changes. This paper proposes a system to address this issue by integrating high-resolution sensors, such as Light Detection and Ranging (LIDAR), with real-time data analysis to detect erratic driving behaviors. The system continuously monitors nearby vehicles, identifying swerving or sudden lane changes through acceleration checks and dynamic angle thresholds, thereby calculating the probability of a potential collision. Upon detecting such behaviors, the system adjusts the AV’s speed and heading to avoid collisions. The system was tested using one-fourteenth-scale programmable RACECARs and their virtual twins in the RACECAR simulator, simulating erratic driving conditions. The initial tests revealed limitations due to the static data training of the Convolutional Neural Network (CNN) used for detection, which affected the system’s ability to predict and prevent sideswipe collisions. Despite these challenges, the research focused on enhancing AV safety by addressing one of the most common collision scenarios, contributing valuable insights into the development of systems capable of improving the reliability and safety of autonomous vehicles.
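The "dynamic angle threshold" idea above can be illustrated with a short sketch: estimate a neighbouring vehicle's heading from successive tracked positions, and flag swerving when the heading-change rate exceeds a threshold that tightens as speed grows (a small angle at high speed already implies a large lateral excursion). All constants and names here are illustrative assumptions, not the paper's tuned system.

```python
import numpy as np

def is_swerving(positions, dt=0.1, base_thresh=0.4):
    """Flag erratic lane behaviour from a tracked vehicle's positions.

    positions: Nx2 array of (x, y) centroids of a neighbouring vehicle,
    e.g. from LIDAR clustering, sampled every dt seconds.
    """
    p = np.asarray(positions, float)
    vel = np.diff(p, axis=0) / dt
    speed = np.linalg.norm(vel, axis=1)
    heading = np.arctan2(vel[:, 1], vel[:, 0])
    # Unwrapped heading change per step, in rad/s.
    yaw_rate = np.abs(np.diff(np.unwrap(heading))) / dt
    # Dynamic threshold: stricter at higher speed.
    thresh = base_thresh / (1.0 + speed[1:])
    return bool(np.any(yaw_rate > thresh))
```

A deterministic check of this kind complements the CNN: it needs no training data, so it cannot suffer the static-training limitation the initial tests exposed, at the cost of cruder behaviour modelling.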
