I am a Ph.D. candidate in the Mechanical Engineering department at Clemson University, where I work in the Interdisciplinary Intelligent Research Laboratory under Dr. Yue "Sophie" Wang. I have also worked with a larger consortium of researchers in the VIPR-GS group with the Automotive Department at CU-ICAR. Before this, I completed my M.S. in Mechanical Engineering at Purdue University and my B.E. in Mechanical Engineering at the University of Pune.
BE Mechanical Engineering
MIT - Pune
2015 - 2018
MS Mechanical Engineering
Purdue University
2019 - 2021
PhD Candidate Mechanical Engineering
Clemson University
2022 - Present
My research interests lie in neurosymbolic deep learning, formal verification, and control for robotics and autonomous vehicle applications. I have worked on developing path planning and navigation tools for ground robots using formal methods such as temporal logics [1] [2]. I have also worked on integrating 3D semantic mapping tools for off-road ground robot applications using the OctoMap library. Currently, I am developing neurosymbolic tools to formally verify convolutional neural networks and neural network controllers for complex dynamical systems.
Aditya Parameshwaran,
Yue Wang
A 2D navigation model for an autonomous vehicle based on task specifications given in signal temporal logic (STL), with safety guarantees. This work was presented at the SAE WCX 2023 conference in Detroit.
Paper | Slides | Source Code
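The core STL idea behind this line of work can be illustrated with a minimal sketch (my own simplification, not code from the paper): computing the robustness of the safety specification "always stay at least d_min away from an obstacle" over a discrete planar trajectory. Positive robustness means the specification holds with margin; negative means it is violated.

```python
# Hedged sketch of STL robustness for a safety specification.
# Function name and signature are illustrative, not from the paper.
import numpy as np

def robustness_always_safe(traj, obstacle, d_min):
    """Robustness of G(||x_t - obstacle|| >= d_min) over a trajectory.

    traj: (T, 2) array of planar positions.
    obstacle: (2,) obstacle position.
    d_min: minimum safe distance.
    Returns the worst-case margin over all time steps (STL semantics:
    the robustness of 'always' is the minimum over time).
    """
    dists = np.linalg.norm(traj - obstacle, axis=1)
    return float(np.min(dists - d_min))
```

A planner can then maximize this robustness value, so the synthesized trajectory satisfies the specification with as much margin as possible.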
Aditya Parameshwaran,
Yue Wang
2D controller synthesis combining Linear Temporal Logic (LTL) and Signal Temporal Logic (STL) specifications to guarantee safe and robust navigation for ground robots. This method improves on the 2023 SAE paper: it is faster while maintaining similar levels of safety. It was published in the IFAC proceedings of the MECC 2024 conference.
Edwina Lewis,
Aditya Parameshwaran
An off-road terrain roughness estimator based on a Bayesian calibration routine, combined with a Simplex controller for mobile robots. The approach is applied in NVIDIA's Isaac Sim environment with a Jackal robot to collect IMU data and predict terrain roughness. The roughness estimates allow the Simplex controller to switch between performance and safety modes of operation. This work is part of the SAE WCX 2025 conference in Detroit.
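The Simplex switching logic described above can be sketched in a few lines (an illustrative simplification with made-up names and threshold, not the paper's implementation): a supervisor applies the performance controller on smooth terrain and falls back to a conservative safety controller when estimated roughness exceeds a bound.

```python
# Hedged sketch of Simplex-style mode switching on a roughness estimate.
# rough_limit and the command values are illustrative placeholders.
def simplex_switch(roughness_estimate, perf_cmd, safe_cmd, rough_limit=0.4):
    """Pick the command to apply given the terrain roughness estimate."""
    if roughness_estimate > rough_limit:
        return safe_cmd   # safety mode: conservative velocity command
    return perf_cmd       # performance mode: nominal velocity command
```

In the actual architecture the safety controller carries the formal guarantee, so the switch can never leave the robot without a verified fallback.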
I have been involved in various robotics projects since completing my MS at Purdue, some in collaboration with companies like Wabtec Corporation, and others as side projects for the US Army VIPR Centre. These projects have spanned areas such as Controls, Deep Learning, Autonomous Navigation, and Computer Vision.
(ROS, Python, PyTorch, Embedded C)
At Purdue University, I contributed to the development of an autonomous railway bot that navigated tracks while collecting sensor data from LiDAR, stereo cameras, and an IMU. The bot created 3D maps of its surroundings and could carry a 30-pound payload for additional sensors. An NVIDIA Jetson AGX processed the data, which was used to train a CNN for traffic sign recognition. This project greatly improved my skills in electronics, programming, and robotics.
(MATLAB, PyTorch, Embedded C, OpenCV)
As part of a group project, we developed an autonomous lane-keeping and sign-detecting RC car. We identified the track layout and chose optimal camera placements. Image data was processed with OpenCV for lane keeping, and a ResNet-based CNN was trained to detect traffic signs; the RC car successfully navigated the track, responding accurately to the signs.
Paper | Source Code | Slides
(C++, Python, ROS2, Sensor Fusion)
Integrated stereo cameras and LiDAR on a Husky robot using ROS2 in a Unity simulation environment to develop semantically segmented 3D maps of the environment.
Used the OctoMap library to fuse semantically segmented RGB data from the camera with the LiDAR point cloud to generate voxel-based 3D maps.
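The fusion step can be sketched as follows (a simplified stand-in for the OctoMap pipeline, assuming LiDAR points already transformed into the camera frame and a pinhole intrinsics matrix; a dict-based voxel grid replaces the octree here):

```python
# Hedged sketch: project LiDAR points into a segmented image, look up each
# point's class label, and majority-vote labels per voxel. In the real
# pipeline an OctoMap octree stores the voxels; this dict is illustrative.
import numpy as np
from collections import Counter, defaultdict

def fuse_semantics(points, labels_img, K, voxel_size=0.5):
    """points: (N, 3) LiDAR points in the camera frame.
    labels_img: (H, W) integer class labels from the segmented RGB image.
    K: 3x3 pinhole camera intrinsics.
    Returns {voxel_index: majority class label}."""
    H, W = labels_img.shape
    valid = points[:, 2] > 0                   # keep points in front of camera
    uvw = (K @ points[valid].T).T              # project to the image plane
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    in_img = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    pts = points[valid][in_img]
    lbls = labels_img[v[in_img], u[in_img]]
    # Accumulate label votes per voxel, then take the majority.
    votes = defaultdict(Counter)
    for p, lbl in zip(pts, lbls):
        idx = tuple(int(i) for i in np.floor(p / voxel_size))
        votes[idx][int(lbl)] += 1
    return {idx: c.most_common(1)[0][0] for idx, c in votes.items()}
```

Majority voting per voxel makes the map robust to occasional mislabeled pixels at segmentation boundaries.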