ONGOING PROJECTS:
Project 1: Multi-Surface Vessel Motion Planning and Tracking
This project focuses on developing a test platform for autonomous unmanned surface vehicles (USVs). Our goal is to design and implement motion planning algorithms and validate them in a controlled environment within our lab. A dedicated test pool will be established to evaluate the performance of these vehicles under various wave disturbances. The motion of the vessels will be tracked using an OptiTrack motion tracking system to ensure precise data collection and analysis. A key research objective is designing autonomous motion planning solutions for tasks such as harbor docking and cargo transfer.
Project 2: Visual-Inertial Odometry Sensor Fusion Box
This project aims to construct a comprehensive dataset for visual-inertial odometry (VIO) tasks using various robotic platforms within our lab, including wheeled, legged, and underwater vehicles. The objective is to develop and test VIO algorithms in diverse and challenging environments. The sensor fusion box will integrate multiple RGB-D cameras and a GPS sensor, enabling data collection from platforms of varying scales, from miniature robots to full-size cars, as well as human-carried setups.
Additionally, we plan to run the MSCKF (Multi-State Constraint Kalman Filter) algorithm in real time on our platform. This will allow us not only to gather data during each test but also to produce an estimate of the vehicle’s trajectory. These real-time estimates can serve as valuable references for future studies and facilitate comparisons with other algorithms.
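The MSCKF itself maintains a full 3D state with a sliding window of camera poses, but every such filter shares the same IMU propagation step. As a minimal illustrative sketch (a planar, mean-only version with assumed inputs, not the project's implementation), one propagation step looks like:

```python
import numpy as np

def propagate_imu(p, v, theta, acc_body, omega, dt):
    """One dead-reckoning step of a planar IMU state (position,
    velocity, heading). A full VIO filter such as the MSCKF
    propagates a 3D state plus covariance; this is only the mean."""
    # Rotate body-frame acceleration into the world frame.
    c, s = np.cos(theta), np.sin(theta)
    acc_world = np.array([c * acc_body[0] - s * acc_body[1],
                          s * acc_body[0] + c * acc_body[1]])
    p_new = p + v * dt + 0.5 * acc_world * dt**2
    v_new = v + acc_world * dt
    theta_new = theta + omega * dt
    return p_new, v_new, theta_new
```

Camera measurements then correct this dead-reckoned trajectory, which otherwise drifts without bound.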
Project 3: Adaptive Path Planning and Dynamic Obstacle Avoidance for Unmanned Ground Vehicles
This project focuses on developing collision-free motion planning algorithms for unmanned ground vehicles in 2D environments with both static and dynamic obstacles. Using the RRT algorithm as a foundation, our approach comprises three phases. In the first phase, we implement path planning using funnel structures, ensuring collision-free navigation in static environments.
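The RRT foundation mentioned above can be sketched in a few lines. This is a generic 2D RRT with circular obstacles standing in for the static environment (sampling bounds, step size, and goal bias are assumed parameters, and the funnel construction of phase one is not shown):

```python
import math, random

def rrt(start, goal, obstacles, step=0.5, iters=5000, goal_tol=0.5, seed=0):
    """Minimal 2D RRT sketch. `obstacles` are (cx, cy, r) circles."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}

    def collides(p):
        return any(math.hypot(p[0] - cx, p[1] - cy) <= r
                   for cx, cy, r in obstacles)

    for _ in range(iters):
        # Sample the goal 10% of the time to bias growth toward it.
        sample = goal if rng.random() < 0.1 else \
            (rng.uniform(-10, 10), rng.uniform(-10, 10))
        # Find the nearest tree node and extend one step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        d = math.dist(nodes[i], sample)
        if d == 0:
            continue
        nx, ny = nodes[i]
        s = min(step, d)
        new = (nx + s * (sample[0] - nx) / d, ny + s * (sample[1] - ny) / d)
        if collides(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # Walk the tree back to the root to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

In the project, each tree edge is then wrapped in a funnel whose cross section bounds the vehicle's reachable positions, so that collision checking happens against funnels rather than a single path.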
In the second phase, we address dynamic obstacle avoidance using a probabilistic approach. A Kalman Filter predicts the future positions of dynamic obstacles, and these estimates are checked for potential collisions with the funnels. This method avoids the computational complexity of real-time analytical solutions.
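The second phase can be illustrated with a constant-velocity Kalman prediction and a conservative funnel intersection test. This is a toy sketch with assumed noise parameters, where the funnel cross section is simplified to a circle and predicted obstacle positions are inflated by their uncertainty:

```python
import numpy as np

def predict_positions(x, P, dt, horizon):
    """Constant-velocity Kalman prediction for a dynamic obstacle.
    State x = [px, py, vx, vy]; returns predicted position means and
    covariances over `horizon` steps (no measurement update shown)."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    Q = 0.001 * np.eye(4)  # assumed process noise
    preds = []
    for _ in range(horizon):
        x = F @ x
        P = F @ P @ F.T + Q
        preds.append((x[:2].copy(), P[:2, :2].copy()))
    return preds

def funnel_conflict(preds, center, radius, n_sigma=2.0):
    """Flag a potential collision if any predicted obstacle position,
    inflated by n_sigma standard deviations, intersects a circular
    funnel cross section at `center` with `radius`."""
    for mean, cov in preds:
        inflation = n_sigma * np.sqrt(np.max(np.linalg.eigvalsh(cov)))
        if np.linalg.norm(mean - center) <= radius + inflation:
            return True
    return False
```

Because the prediction is a closed-form matrix recursion, checking many funnels against many obstacles stays cheap, which is the point of avoiding real-time analytical solutions.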
In the final phase, a potential field motion planning algorithm guides the vehicle to its goal. While potential field methods can suffer from local minima, in our scenario these occur only momentarily, so the vehicle can simply wait for the obstacle to pass before continuing.
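A single step of the classic attractive/repulsive potential field used in this final phase can be sketched as follows (the gains, influence distance, and point-obstacle model are illustrative assumptions, not the project's tuned values):

```python
import numpy as np

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=2.0, d0=2.0, gain=0.1):
    """One gradient-descent step on an attractive/repulsive potential
    field. `obstacles` are (x, y) points; repulsion acts only within
    the influence distance d0."""
    force = k_att * (goal - pos)  # attractive term: pull toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            # Repulsive gradient, growing sharply as the obstacle nears.
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return pos + gain * force
```

A local minimum appears when the attractive and repulsive terms cancel; since the obstacles here are moving, that cancellation is transient, which is why waiting in place is a viable escape strategy.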
Project 4: Data-Driven Controller Design Using Temporal Logics
This project aims to develop a novel data-driven approach for robot controller design using temporal logics, applied to an unmanned land vehicle and an unmanned surface robot. The method involves three main steps: (1) generating labeled datasets through simulations with set-valued controllers, (2) synthesizing temporal logic formulas to describe positive and negative events, and (3) repairing controllers to satisfy or violate these formulas as needed.
Temporal logics, traditionally used for system verification, are leveraged here for their expressiveness and adaptability, enabling the definition of temporal patterns and the creation of controllable formula templates for controller repair. Machine learning techniques will be employed for formula synthesis, allowing comparisons with existing datasets to evaluate classifier performance and computational efficiency.
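To make the formula templates concrete, here is a toy robustness computation for two signal temporal logic (STL) operators over a finite trace. The predicate `x > threshold` and the traces are hypothetical stand-ins; real templates over robot trajectories would combine such operators with time bounds:

```python
def always_robustness(signal, threshold):
    """Robustness of G (x > threshold) over a finite trace: the
    minimum margin x[t] - threshold. Positive means the formula is
    satisfied; negative means it is violated, and the magnitude says
    by how much."""
    return min(x - threshold for x in signal)

def eventually_robustness(signal, threshold):
    """Robustness of F (x > threshold): the maximum margin."""
    return max(x - threshold for x in signal)
```

This quantitative semantics is what makes temporal logic useful for repair: a controller can be adjusted to push the robustness of a "positive" formula above zero, or that of a "negative" formula below it.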
The approach is platform-agnostic, as it does not rely on system dynamics, making it robust against uncertainties. Simulation environments and real-world testing will validate the proposed method’s effectiveness in motion planning tasks for both robot platforms, aiming to enhance controller robustness and interpretability.
Project 5: Active Sensing and Environment Reconstruction in Glass Knifefish
This project investigates the active sensing behavior of the glass knifefish (Eigenmannia virescens), which uses its elongated ribbon fin and electrosensory system to navigate and interact with its surroundings. The fish employs a dynamic sensing mechanism, constantly adjusting its movements to gather sensory information, especially when its tracking error grows. This study focuses on the relationship between the fish’s visual and electrosensory systems, with experiments conducted in dark and light conditions to observe their interplay. Using a test setup with a high-speed camera and a linear-track refuge, we aim to elicit the fish’s behavior by driving the tracking error away from zero, replicating its active sensing mechanism, and reconstructing the environment model the fish perceives. The research also models the trade-off between exploration (increasing sensory input) and exploitation (tracking a target) within the fish’s movement patterns.
Project 6: Actuation Mechanism of the Glass Knifefish
This project aims to explore and understand the unique actuation mechanism of the glass knifefish, particularly focusing on the dynamics of its elongated fin and the mutually opposing waves generated along this fin. The glass knifefish’s ability to produce undulatory motion through its fin is key to its swimming and maneuvering, and understanding this actuation mechanism can provide insights into bioinspired robotic movement. In the first phase of the project, we successfully constructed and identified a single wave structure generated by the fish’s fin, studying its characteristics and the underlying physical principles. Building on this foundation, phase 2 will delve deeper into the complexities of the wave dynamics, exploring how these opposing waves interact and how they contribute to the fish’s propulsion and control. The ultimate goal is to develop a more comprehensive model of the actuation mechanism, which can be applied to the design of bioinspired robotic systems capable of mimicking the fish’s efficient and versatile movement in aquatic environments.
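A common idealization of the two opposing fin waves, consistent with the single-wave structure identified in phase one, is a pair of equal counter-propagating sinusoids. The amplitudes, wavenumber, and frequency below are assumed, but the sketch shows the key property: the superposition is a standing wave with a nodal point whose location the fish can shift to modulate thrust:

```python
import numpy as np

def fin_deflection(x, t, A=1.0, k=2 * np.pi, w=2 * np.pi):
    """Lateral fin deflection at position x and time(s) t as the sum
    of two equal counter-propagating traveling waves."""
    head = A * np.sin(k * x - w * t)  # wave traveling tailward (+x)
    tail = A * np.sin(k * x + w * t)  # wave traveling headward (-x)
    # By the sum-to-product identity this equals 2A sin(kx) cos(wt):
    # a standing wave with nodes wherever sin(kx) = 0.
    return head + tail
```

In the unequal-amplitude case the residual traveling wave produces net thrust, which is one candidate explanation for how the opposing waves contribute to both propulsion and station-keeping; phase two will examine such interactions in detail.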
Project 7: Enhancing SLAM with Event Cameras for Active Sensing in Challenging Lighting
Event cameras are visual sensors that respond to changes in light intensity, capturing differential signals over time rather than traditional frame-based images. In this project, we aim to explore the potential of event cameras by creating intentional errors in sensor position to actively sense the environment, similar to the active sensing mechanisms observed in species like the glass knifefish. By leveraging this approach, we plan to enhance existing SLAM (Simultaneous Localization and Mapping) techniques, enabling more robust performance in environments with ultra-low or fluctuating light conditions. This project focuses on overcoming the limitations of standard RGB-D cameras, which struggle with blurriness and focus issues in challenging lighting, to provide improved sensing and environment reconstruction capabilities.
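The differential signal an event camera produces can be emulated from conventional frames. The following toy model (threshold value and frame data are assumptions; real event cameras operate asynchronously per pixel, not frame-by-frame) emits an event whenever a pixel's log intensity changes by more than a contrast threshold:

```python
import numpy as np

def frames_to_events(frames, threshold=0.2):
    """Toy event-camera model: emit (t, y, x, polarity) whenever the
    log intensity at a pixel has changed by at least `threshold`
    since the last event at that pixel."""
    log_ref = np.log(frames[0] + 1e-6)  # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_f = np.log(frame + 1e-6)
        diff = log_f - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(y), int(x), 1 if diff[y, x] > 0 else -1))
            log_ref[y, x] = log_f[y, x]  # reset the pixel's reference
    return events
```

Because events fire only on change, a static camera in a static scene produces almost no data; deliberately perturbing the sensor position, as in the active sensing strategy above, is what keeps the event stream informative.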
Project 8: LIDAR Data Segmentation and Compression for Real-Time Environmental Reconstruction
Recent advancements in LIDAR technology have enabled the collection of rich, dense environmental data, especially in airborne systems. However, the large volumes of data generated from unmapped regions, driven by the speed of the vehicle, pose significant challenges in real-time environmental reconstruction. This vast amount of data, if left unprocessed, can overwhelm both the onboard processing system and the communication channels used for transmitting the information. To address this, our project focuses on segmenting and compressing LIDAR data to enable efficient transmission and processing through communication channels with limited bandwidth. By estimating the shapes of surrounding objects from the LIDAR data, we aim to reconstruct the environment and perform SLAM (Simultaneous Localization and Mapping) in real time. Additionally, we explore methods to ensure the data is lightweight enough to be transmitted without overloading the system, thus improving the efficiency of environmental reconstruction and autonomous navigation.
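One of the simplest compression stages of this kind is voxel-grid downsampling, sketched below as an illustrative stand-in for the project's segmentation and compression pipeline (the voxel size is an assumed parameter; the actual system also estimates object shapes):

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Compress an (N, 3) point cloud by replacing all points that
    fall in the same voxel with their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)
    buckets = {}
    for key, p in zip(map(tuple, keys), points):
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])
```

Even this crude reduction bounds the transmitted data by the number of occupied voxels rather than the raw point count, which is the property needed to keep a bandwidth-limited channel from saturating.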
Project 9: LEO Rover