I build the intelligence layer for real-world robots, focusing on software and robot learning systems that enable general-purpose behavior. My work centers on perception-first approaches, ensuring robots can robustly understand and generalize across real-world environments before acting.
What I'm Currently Working On
I am building an agentic intelligence system deployed on the R2D3 robot platform at Open Droids. The system composes modular agents (perception, memory, and manipulation) to form grounded representations of the environment.
My current focus is perception and scene understanding as the foundation for manipulation and navigation, alongside the orchestration layer that coordinates planning and execution across subsystems.
Research Interests
- Perception: Grounded scene and object representations for real-world robots.
- Manipulation: Robust control and interaction under uncertainty.
- Navigation: Reasoning and planning in dynamic, unstructured environments.
- Memory: Maintaining state and context across perception and action.
- Reinforcement Learning: Learning from interaction and experience in real-world settings.
News & Updates
- Feb 2026: Served as a mentor for the Physical AI Hack 2026 at Founders, Inc.
- Jan 2026: Exhibited at CES (Enterprise section) in collaboration with Solo Tech.
- Sep 2025: Managed on-site robot deployment and live operations for the BitRobot Foundation booth at CoRL 2025.