I am a Master's student in Mechanical Engineering at Stanford University and a researcher at the Stanford Vision and Learning Lab.
My current work focuses on the development of intelligent robots capable of performing everyday tasks through brain-robot interfaces (BRIs), using neural signal decoding and advanced machine learning techniques.
I am passionate about advancing robotic systems that can seamlessly collaborate with humans to enhance autonomy and task efficiency.
Previously, I was a robotics intern at Centrillion Technology, where I applied bimanual mobile manipulation techniques to automate manufacturing processes.
I also earned my B.S. in Mechanical Engineering at the University at Buffalo, where I contributed to advanced robotics research and developed a solar navigation system for street signs.
My long-term goal is to merge my engineering expertise with robotics to create innovative, robust, and socially impactful systems that improve the quality of life and expand the possibilities for human-robot collaboration.
- Applied Trossen Robotics' Mobile ALOHA system (Bimanual Mobile Manipulation) to automate complex manufacturing processes, utilizing imitation learning techniques to enable robots to accurately replicate human actions in dynamic environments.
- Conducted extensive experiments with different policy algorithms (ACT, Diffusion Policy) across diverse simulation and real-world environments to optimize robot performance and improve adaptability across task scenarios (see the rollout sketch below).
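As a rough illustration of this kind of evaluation, the sketch below shows a generic policy-rollout loop for comparing imitation-learning policies; the `policy` and `env` interfaces are assumptions for illustration, not the actual Mobile ALOHA codebase.

```python
import numpy as np

def rollout(env, policy, max_steps=400):
    """Run one episode and return the total reward (illustrative)."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        # The policy maps the current observation (camera images + joint states)
        # to an action; chunked policies like ACT would return several future
        # actions, of which the first is executed here.
        action = policy.act(obs)
        obs, reward, done, info = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

def evaluate(env, policies, episodes=10):
    """Compare several trained policies (e.g., ACT vs. a diffusion policy)."""
    return {name: np.mean([rollout(env, p) for _ in range(episodes)])
            for name, p in policies.items()}
```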
Research Experience
My research interests center on robotics, particularly in developing advanced robotic controllers for manufacturing automation, medical applications, and biomechanics enhancement through exoskeleton technology.
- Contributed to projects integrating advanced machine learning models with brain-computer interface (BCI) systems for robotics.
- Collaborated with researchers to apply findings in real-world scenarios, emphasizing the seamless interaction between human intentions and robotic execution.
- Focused on enhancing human-robot interaction with EEG signals by developing efficient algorithms for brain-signal decoding (a simplified decoding sketch follows this list).
- Minimized user training time and created intuitive applications for robots to perform everyday tasks with minimal oversight.
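The sketch below is a minimal, self-contained example of the general shape of an EEG decoding pipeline, using synthetic data, simple log-variance features, and an LDA classifier; it is illustrative only and not the lab's actual decoder.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for epoched EEG: (trials, channels, samples) with binary labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8, 250))
y = rng.integers(0, 2, size=200)

def log_variance_features(epochs):
    """Log-variance per channel: a simple, fast feature for EEG decoding."""
    return np.log(epochs.var(axis=-1))

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, log_variance_features(X), y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f}")
```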
- Focused on creating a robotic system capable of performing critical tasks that require touch-based diagnosis, such as applying pressure and stabilizing limbs.
- Implemented a compliance controller for maintaining contact with dynamic objects and a diagnosis mode to palpate patients and record stiffness data, using the Kinova Gen 3 robot arm equipped with Bota SensOne force sensor and Haply Inverse-3 haptic device.
- Demonstrated the system's ability to maintain consistent contact force, detect varying stiffness levels, and provide accurate haptic feedback, including adapting to patient movements without causing harm and enabling detailed tissue-stiffness assessment, which is crucial for detecting abnormalities such as tumors (a simplified control-loop sketch follows this list).
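Conceptually, the compliance behavior can be illustrated with a simple 1-D admittance-style force regulator; the gains and interfaces below are assumptions for illustration, not the controller deployed on the Kinova Gen 3.

```python
def admittance_step(force_measured, force_target, velocity, dt,
                    gain=0.002, damping=0.8):
    """One step of a simple 1-D admittance-style force regulator (illustrative).

    Moves the end-effector along the contact normal so the measured force
    tracks the target; damping keeps the motion smooth as the limb moves.
    """
    force_error = force_target - force_measured
    velocity = damping * velocity + gain * force_error   # m/s
    return velocity * dt, velocity                       # commanded displacement, updated velocity

# Example: regulate toward a 5 N palpation force at a 1 kHz control rate.
vel = 0.0
for measured in [0.0, 2.0, 4.0, 5.5, 5.1]:
    dx, vel = admittance_step(measured, force_target=5.0, velocity=vel, dt=0.001)
    print(f"measured {measured:.1f} N -> displacement command {dx * 1000:.3f} mm")
```

In this framing, tissue stiffness can be estimated from the ratio of the measured force change to the commanded displacement during palpation.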
- Vine robots can navigate tight spaces and complex environments by extending their flexible, tube-like bodies.
- Developed a base station capable of maintaining up to 15 PSI to facilitate the robot's extension.
- Engineered new sealing methods with gaskets and thread sealants to ensure an airtight system.
- Redesigned clamping and closure systems to accommodate various diameters, simplifying operations.
- Programmed ground robots (e-puck2) and aerial robots (Crazyflie) using C++ and Python.
- Conducted experiments in the motion-capture lab, using Vicon Tracker to control multiple swarm robots simultaneously, and used ROS packages to run physical experiments in which dozens of robots jointly search, investigate, or deliver goods.
- Used additional packages to reduce communication latency between the robots, computers, and the Vicon system, enabling more responsive robot control (a minimal pose-tracking node is sketched below).
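A minimal sketch of this kind of motion-capture-driven control node in ROS, assuming a vicon_bridge-style pose topic and a velocity-command topic (topic names and gains are illustrative):

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import TransformStamped, Twist

# Drive a tracked robot toward a fixed goal using motion-capture feedback.
# The robot is treated as holonomic here for simplicity; a differential-drive
# e-puck would convert this into forward and angular velocities.
GOAL_X, GOAL_Y = 1.0, 0.5
cmd_pub = None

def pose_callback(msg):
    dx = GOAL_X - msg.transform.translation.x
    dy = GOAL_Y - msg.transform.translation.y
    cmd = Twist()
    cmd.linear.x = 0.5 * dx  # proportional gains, tuned per robot
    cmd.linear.y = 0.5 * dy
    cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("mocap_goto")
    cmd_pub = rospy.Publisher("/epuck/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/vicon/epuck_01/epuck_01", TransformStamped, pose_callback)
    rospy.spin()
```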
The development of an enhanced brain-robot interface (NOIR 2.0) utilizing non-invasive EEG signals to control robots for everyday tasks. The system improves human-robot collaboration by integrating faster brain decoding algorithms, few-shot learning, and one-shot skill parameter prediction, reducing task completion time and human involvement.
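As a conceptual sketch only (not the NOIR 2.0 implementation), few-shot skill selection can be pictured as nearest-neighbor retrieval over a small library of embedded examples:

```python
import numpy as np

def select_skill(decoded_embedding, skill_library):
    """Illustrative few-shot skill selection: nearest-neighbor retrieval over a
    small library of {skill_name: [example embeddings]} built from a handful
    of demonstrations."""
    best_skill, best_dist = None, float("inf")
    for skill, examples in skill_library.items():
        dist = min(np.linalg.norm(decoded_embedding - np.asarray(e)) for e in examples)
        if dist < best_dist:
            best_skill, best_dist = skill, dist
    return best_skill

# Example with two skills and toy 3-D embeddings.
library = {"pick": [np.array([1.0, 0.0, 0.0])], "pour": [np.array([0.0, 1.0, 0.0])]}
print(select_skill(np.array([0.9, 0.1, 0.0]), library))  # -> "pick"
```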
The development of a decision support system for air traffic management at Urban Air Mobility vertiports using graph learning.
The decision algorithm was tested with E-pucks to evaluate its efficiency in a real-world setting.
The development and use of a virtual environment to analyze human cognition in operationally relevant human-swarm interaction, focusing on how different conditions impact cognitive states and decision-making.