List of works
Conference proceeding
Accelerating Classical Path Planning via Learned Search Space Reduction
Published 01/08/2026
AIAA SCITECH 2026 Forum, 1997
AIAA SciTech 2026, 01/12/2026–01/16/2026, Orlando, Florida, USA
Efficient path planning is critical for autonomous navigation in complex environments. This paper presents a comparative analysis of two classical search algorithms, A* and Dijkstra, which are widely used to compute optimal paths on grid-based maps. While Dijkstra guarantees path optimality through an exhaustive search, A* improves computational efficiency by leveraging heuristics, sometimes at the cost of optimality. Recognizing the complementary strengths and limitations of these methods, we propose a hybrid approach that integrates their outputs using a convolutional autoencoder-based neural network. Trained on path distributions from both algorithms, the model learns to predict high-probability path regions and expected path lengths, effectively reducing the search space for downstream planners. This integration allows the system to balance optimality and speed, providing a data-driven mechanism to guide classical search algorithms more efficiently. Experimental results demonstrate that our hybrid planner significantly improves planning time without sacrificing path quality, illustrating the benefits of combining heuristic search with learned priors for scalable and adaptive robotic navigation.
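Since the abstract describes using a learned high-probability region to reduce the search space of a classical planner, a minimal sketch of that idea follows, assuming a binary mask produced by the autoencoder; the grid representation, mask, and function name are illustrative and not the paper's implementation.

```python
# Minimal sketch: A* on a 4-connected grid whose expansion is restricted to a
# learned "high-probability path region" mask (here a hand-made placeholder).
import heapq
import numpy as np

def astar_masked(grid, mask, start, goal):
    """grid: 0 = free, 1 = obstacle; mask: 1 = cell allowed by the learned prior."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), start)]
    g_cost, parent, closed = {start: 0}, {start: None}, set()
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                       # reconstruct path back to start
            path = [cur]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]):
                continue
            if grid[nxt] == 1 or mask[nxt] == 0:   # obstacle or outside learned region
                continue
            ng = g_cost[cur] + 1
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                parent[nxt] = cur
                heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

grid = np.zeros((5, 5), dtype=int)
mask = np.ones((5, 5), dtype=int)             # stand-in for the autoencoder's prediction
print(astar_masked(grid, mask, (0, 0), (4, 4)))
```

Zeroing mask cells shrinks the set of nodes the planner may expand, which is the speed/optimality trade-off the hybrid approach exploits.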
Conference proceeding
Published 01/08/2026
AIAA SCITECH 2026 Forum
AIAA SCITECH 2026 Forum, 01/12/2026–01/16/2026, Orlando, Florida, USA
The increasing demand for unmanned aerial vehicle (UAV) systems across various sectors underscores their escalating significance within multiple industries. A variety of unmanned aerial vehicles are employed in military operations, entertainment, and delivery services. Nevertheless, the control of extensive UAV swarms presents considerable complexity. Hierarchical control is recognized for its scalability and reliability in managing large swarms; however, the failure of a leader node can lead to a significant collapse of the swarm agents due to the inherently unstable control mechanisms. This research proposes a distributed, autonomous, time- and memory-efficient method for leader selection, leveraging the principles of swarm intelligence. The problem is formulated under the best-of-n model, achieving satisfactory productivity. The proposed algorithm outperforms leading methods such as Raft and GHS, achieving over 90% accuracy while optimizing both time and memory usage. The objective of this approach is to enhance the reliability of search and rescue operations, ensure the safety of drone displays, and facilitate various large UAV swarm applications.
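As background only, a toy best-of-n style gossip election is sketched below; it illustrates the problem framing, not the paper's leader-selection algorithm, and the agent count, random neighbor sampling, and quality measure are assumptions.

```python
# Generic best-of-n consensus sketch (illustrative only): each agent repeatedly
# adopts the best (agent, quality) option seen among sampled neighbors until
# the whole swarm agrees on a single leader.
import random

def elect_leader(n_agents=50, n_neighbors=4, seed=0):
    rng = random.Random(seed)
    quality = [rng.random() for _ in range(n_agents)]   # e.g., battery or link quality
    opinion = list(range(n_agents))                     # initially, each agent backs itself
    rounds = 0
    while len(set(opinion)) > 1:
        rounds += 1
        for i in range(n_agents):
            peers = rng.sample(range(n_agents), n_neighbors)
            best = max(peers + [i], key=lambda j: quality[opinion[j]])
            opinion[i] = opinion[best]
    return opinion[0], rounds

leader, rounds = elect_leader()
print(f"leader {leader} elected after {rounds} gossip rounds")
```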
Conference proceeding
Investigating the Role of Uncertainty in Scalability of Human Multi-Robot Teams
Published 11/03/2025
2025 34th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2155 - 2161
IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2025), 08/24/2025–08/28/2025, Eindhoven, Netherlands
Scalability of human multi-robot teams is quickly becoming a crucial area of research as autonomous systems become more capable and sophisticated. A key research challenge is developing predictive measures of scalability, such as fan-out. This paper presents the results from a study that confirms the improved accuracy of a novel fan-out model over two previous models. It utilizes a new test domain to assess scalability and investigate the role of uncertainty through a variety of complexities driven by environmental factors, robot behaviors, and human-robot interactions. Our analysis highlights potential enhancements that improve accuracy across all the models. Finally, we show that when calibrating for measurement error, the new model is bounded, which sets it apart from previous models that are unbounded. The new model provides a more nuanced understanding of the dynamics at play and the factors involved in scaling Human Multi-Robot Teams under uncertainty.
Conference proceeding
Fan-Out Revisited: The Impact of the Human Element on Scalability of Human Multi-Robot Teams
Published 09/02/2025
2025 IEEE International Conference on Robotics and Automation (ICRA), 19-23 May 2025, 4280 - 4286, DOI: 10.1109/ICRA55743.2025
IEEE International Conference on Robotics and Automation (ICRA 2025), 05/18/2025–05/22/2025, Atlanta, Georgia, USA
This paper introduces a novel fan-out model that improves accuracy over previous models. The commonly used models rely on neglect time, the time an agent operates independently, which confounds both human and robot abilities. The proposed model separates neglect time into two functionally distinct concepts: the time a robot can operate self-sufficiently, and the time a human estimates the robot can do so. Previous research indicates fan-out is often overestimated. This work explains why robot ability provides an upper bound on fan-out, while the actually achieved fan-out is influenced by both human and robot abilities. We conduct a study to validate this new model and show improved performance over the two most common fan-out models. The results show that both previous models overestimate as predicted. Using the new fan-out model, we show that as the difference between human estimation and robot abilities grows, the actual fan-out will fall further from the upper bound potential fan-out. By including assessments of both the robotic and human elements, the new model provides a more nuanced understanding of the dynamics at play and the factors involved in scaling Human Multi-Robot Teams.
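For context, the commonly used neglect-time-based fan-out relation referenced by the abstract can be written as below (in the style of Olsen and Goodrich); the paper's new model, which splits neglect time into a robot self-sufficiency term and a human-estimate term, is not given in closed form in the abstract and so is not reproduced here.

```latex
% Classical fan-out baseline (background only, not the paper's new model)
\[
  FO \;=\; \frac{NT + IT}{IT} \;=\; \frac{NT}{IT} + 1,
\]
% where $NT$ is the neglect time (how long a robot operates without attention)
% and $IT$ is the interaction time required to service one robot.
```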
Conference proceeding
Common and Unique Representation Deep Embedded Clustering
Published 08/18/2025
2025 IEEE International Conference on Image Processing (ICIP), 611 - 616
IEEE International Conference on Image Processing (ICIP 2025), 09/13/2025–09/16/2025, Anchorage, Alaska, USA
A goal of multi-view clustering (MVC) is to discover common features of an object across views while identifying unique features within each view. Deep neural networks are good at feature learning on large-scale unlabeled datasets, but most deep MVC methods struggle to extract and utilize complementary information from view-unique features. Additionally, many lack support for single-sample inference, limiting applications. This paper presents a novel Common and Unique Representation Deep Embedded Clustering (CUR-DEC) architecture and optimization method that learns view-invariant representations, aiding in clustering assignments by leveraging view-unique information. This method is suitable for single-sample inference. We first pretrain an autoencoder to extract both view-common and view-unique features, then a common cluster representation is learned by leveraging complementary information. Experimental results on multi-view datasets show that our method provides significant improvements compared to other deep multi-view clustering methods.
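To make the common/unique latent split concrete, here is a minimal sketch assuming simple fully connected per-view autoencoders; the dimensions, alignment loss, and two-view setup are illustrative assumptions, not the published CUR-DEC architecture.

```python
# Sketch of a per-view autoencoder whose latent code is split into a view-common
# part (aligned across views, used for clustering) and a view-unique part
# (kept for reconstruction). All sizes and losses are placeholders.
import torch
import torch.nn as nn

class ViewAutoencoder(nn.Module):
    def __init__(self, in_dim=784, d_common=16, d_unique=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, d_common + d_unique))
        self.decoder = nn.Sequential(nn.Linear(d_common + d_unique, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        self.d_common = d_common

    def forward(self, x):
        z = self.encoder(x)
        z_common, z_unique = z[:, :self.d_common], z[:, self.d_common:]
        return self.decoder(z), z_common, z_unique

views = [torch.randn(32, 784), torch.randn(32, 784)]   # two toy views of one batch
models = [ViewAutoencoder() for _ in views]
recons, commons = [], []
for m, x in zip(models, views):
    x_hat, z_c, _ = m(x)
    recons.append(nn.functional.mse_loss(x_hat, x))
    commons.append(z_c)
# Reconstruction keeps view-unique detail; the alignment term pulls the
# view-common codes together so a cluster head can run on a shared representation.
loss = sum(recons) + nn.functional.mse_loss(commons[0], commons[1])
loss.backward()
print(float(loss))
```

Because each view has its own encoder, a single sample from one view can still be embedded and assigned a cluster, which is the single-sample inference property the abstract highlights.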
Conference proceeding
A Human Multi-Agent Teaming Testbed: Escape Room Simulation
Published 07/26/2025
Human Factors and Simulation, 180
International Conference on Applied Human Factors and Ergonomics (AHFE 2025), 07/25/2025–07/29/2025, Orlando, Florida, USA
Scalability of multi-agent systems is quickly becoming a crucial area of research as artificial agents become more capable and sophisticated. This is particularly relevant in the context of increasingly ubiquitous AI and automation. A key challenge to furthering research in the area of human-multi-agent teams is having suitable test domains that are simple enough to allow thorough analysis, yet rich enough to foster an appropriate level of human-agent interaction. We present a new open-source test domain designed and developed to explore the dynamics of human interactions with multi-agent systems and the factors that enable and inhibit scalability.
Our desire to understand and model the factors that influence scalability and related measures led us to search for a testbed simulation that could provide a suite of capabilities. First, it needed to model a specific type of activity, where the human owns the mission goals and coordinates the efforts of multiple agents to achieve a common goal. Second, we needed to be able to control the interaction and observability of the agents. Lastly, we needed to be able to adjust the autonomy of the agents. After researching available testbeds, we did not find many simulations that met these criteria, were readily available, open-source, easy to use, and most importantly, provided the desired human interaction dynamics. As such, we developed a suitable simulation testbed to explore measures of scalability and how both the characteristics of the technology and the human users affect the scalability of human multi-agent teams.
The mission scenario we found that exemplifies the collaborative teamwork dynamics we are interested in is the escape room: a cooperative human team game that became popular in the last couple of decades. The escape room models the aspects of teamwork we were looking for: exploration, partial knowledge, distributed execution, interdependent decision-making, and puzzle-solving. Our escape room simulation is a single-player game where the human player directs a team of agents to explore several rooms to collect keys to open doors. The simulation can be simple with just a few keys and doors or increasingly complex with several layers of keys and doors to solve.
In the escape room simulation, we can control the two main drivers of scalability: interaction and automation. The testbed is instrumented to measure both, as well as a host of additional relevant measures for analyzing scalability. We explain how the testbed supports several of the more popular models of scalability as well as our own variant. The simulation was developed using the p5 JavaScript library and the p5.js web editor. The source code is freely accessible online for tailoring the environment and modifying agent capabilities. With the increase of AI and agent capabilities and the desire to employ ever larger multi-agent teams, it is important to have a simulation to test out theories to better understand what influences, constrains, and enables effective human multi-agent teaming. Our simulation provides a valuable tool for researchers and practitioners in this evolving field.
Conference proceeding
Deep Embedded Multi-View Object Clustering Using Aerial Images in the Wild
Published 05/29/2025
Proc. SPIE 13463, Automatic Target Recognition XXXV, 134630H
SPIE Defense + Commercial Sensing, 04/12/2025–04/16/2025, Orlando, Florida, USA
Multiview clustering has garnered attention, with many methods showing success on simple datasets such as MNIST. However, many state-of-the-art methods lack sufficient representational capability. This paper presents results of deep embedded clustering on multiview real-world aerial imaging data. Adapting previous methods to the challenges of aerial imagery, the approach introduces a ResNet-18 autoencoder backbone and data augmentation techniques to handle complex images and diverse environmental conditions. Advanced feature extraction using convolutional autoencoders captures intricate patterns and spatial relationships. By integrating multiview data, the self-supervised method enhances clustering accuracy and robustness, advancing aerial image analysis for environmental monitoring, urban planning, and first responder efforts.
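A sketch of the kind of front end the abstract describes follows, assuming torchvision's ResNet-18 as the encoder and standard augmentations; the deep embedded clustering objective itself is omitted, and KMeans stands in for the learned cluster assignment, so everything here is illustrative.

```python
# Augmentation + ResNet-18 feature extraction feeding a simple clustering step.
import torch
import torch.nn as nn
from torchvision import models, transforms
from sklearn.cluster import KMeans

augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
])

backbone = models.resnet18(weights=None)
encoder = nn.Sequential(*list(backbone.children())[:-1])   # drop the FC head -> 512-d features

images = torch.rand(8, 3, 256, 256)                        # stand-in aerial image batch
with torch.no_grad():
    feats = encoder(augment(images)).flatten(1)            # (8, 512) embeddings
labels = KMeans(n_clusters=3, n_init=10).fit_predict(feats.numpy())
print(labels)
```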
Conference paper
A Visual Question Answering-based Object Detection Framework using a Team of Multi-Agent UAVs
Date presented 05/09/2025
Florida Conference on Recent Advances in Robotics (FCRAR 2025), 05/08/2025–05/09/2025, Dania Beach, Florida, USA
This paper presents an autonomous multi-agent unmanned aerial vehicle (UAV) system designed to perform object detection through Visual Question Answering (VQA) using aerial imagery. The system utilizes an entropy-based distributed behavior model to coordinate UAV movements toward designated waypoints. A VQA model is used to analyze aerial footage for detection of objects of interest. The study investigates the impact of various distributed behavior configurations, including the number of UAVs, UAV formations, flight altitude, and separation distance. After analysis, a final optimized configuration that maximizes surface area coverage and VQA model performance was found. These findings contribute to the development of aerial systems capable of collaborative visual reasoning in complex environments.
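Since the abstract does not name the VQA model used, the sketch below uses a publicly available ViLT checkpoint purely as a stand-in to show a single frame-level query; the image file name and question are hypothetical.

```python
# Illustrative VQA query over one aerial frame (model choice is an assumption).
from transformers import pipeline
from PIL import Image

vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

frame = Image.open("aerial_frame.jpg")                  # hypothetical UAV camera frame
answers = vqa(image=frame, question="Is there a red vehicle in this image?")
detected = answers[0]["answer"].lower() == "yes" and answers[0]["score"] > 0.5
print(answers[0], "-> object of interest detected:", detected)
```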
Conference paper
Text-to-Image Model-based Image Segmentation for Scene Understanding in Autonomous Robot Navigation
Published 05/08/2025
Florida Conference on Recent Advances in Robotics (FCRAR 2025), 05/07/2025–05/08/2025, Florida Atlantic University, Dania Beach, Florida, USA
Image segmentation is essential for navigation and scene understanding in autonomous systems, particularly in unstructured outdoor environments. This study investigates the segmentation capabilities of DALL-E 3, a generative text-to-image model that is not explicitly trained for semantic segmentation. A custom segmentation pipeline was developed to evaluate and refine DALL-E 3 outputs on outdoor images from the RELLIS-3D dataset. The post-processing workflow includes morphological operations with varied structuring elements to enhance segmentation accuracy. Segmentation accuracy was assessed using mean Intersection over Union (mIoU) across selected terrain classes. Results show that the raw DALL-E 3 outputs were improved by the developed post-processing refinement, and the resulting accuracy values are competitive with the supervised models HRNet+OCR and GSCNN. These results demonstrate that text-to-image models, when paired with domain-aware post-processing, offer a promising alternative for flexible, rapid-deployment segmentation for universal robotics without requiring labeled training data. These efforts contribute to our research team's broader goal of enabling intelligent mobile robots capable of autonomous perception and decision-making in complex environments.
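The two post-processing pieces named in the abstract, morphological refinement and mIoU scoring, can be sketched as follows; the kernel size, class ids, and random label maps are placeholders rather than the tuned pipeline values.

```python
# Morphological cleanup of a predicted class mask plus per-class mean IoU.
import cv2
import numpy as np

def refine(mask, kernel_size=5):
    """Opening removes speckle; closing fills small holes in a binary class mask."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    opened = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)

def mean_iou(pred, gt, class_ids):
    ious = []
    for c in class_ids:
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 3, (64, 64))        # stand-in predicted label map
gt = np.random.randint(0, 3, (64, 64))          # stand-in ground-truth label map
refined_class_1 = refine(pred == 1)             # refine one terrain-class mask
print("mIoU:", mean_iou(pred, gt, class_ids=[0, 1, 2]))
```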
Conference proceeding
Design and Development of a Small UGV for Object Retrieval in Domestic Environments
Published 04/25/2025
SoutheastCon 2025, 537 - 542
2025 IEEE SoutheastCon, 03/22/2025–03/30/2025, Concord, North Carolina, USA
The objective of this project is to develop an unmanned ground vehicle (UGV) prototype capable of retrieving small objects in domestic environments. As the robotics field continues to advance, the demand for autonomous systems that enhance daily life is rapidly increasing. Autonomous systems have the potential to improve efficiency by performing routine tasks, thereby reducing human effort and promoting convenience in household settings. This study explores the development of a small UGV designed to identify, navigate toward, and retrieve small objects while autonomously avoiding obstacles such as walls and furniture in domestic environments. The system utilizes neural networks for real-time object detection. The UGV design includes an object retrieval mechanism suitable for handling items typically left scattered in domestic environments. The proof-of-concept results show the potential of the designed UGV for domestic object retrieval and could enable further advancements in autonomous household assistance technologies.
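As an illustration of the detect-and-approach behavior described, a minimal sketch follows using torchvision's off-the-shelf Faster R-CNN; the detector choice, confidence threshold, and steering gain are assumptions rather than the prototype's actual stack.

```python
# Detect the highest-scoring object in a camera frame and steer toward it.
import torch
from torchvision.models.detection import (fasterrcnn_resnet50_fpn,
                                           FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 480, 640)                      # stand-in camera frame in [0, 1]
with torch.no_grad():
    det = detector([frame])[0]

if len(det["boxes"]) > 0 and det["scores"][0] > 0.6:
    x1, y1, x2, y2 = det["boxes"][0]                 # highest-scoring detection
    offset = ((x1 + x2) / 2 - 320) / 320             # horizontal offset in [-1, 1]
    steer = 0.5 * float(offset)                      # proportional steering command
    print("steer toward object:", steer)
else:
    print("no confident detection; keep exploring")
```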