News Highlights

[Mar.2024] I will be taking on additional responsibilities as Deputy Group Lead for the Intelligent Robotics Group at NASA Ames.

[Dec.2023] We have released the NASA POLAR Traverse Dataset, a visual odometry companion to the POLAR Stereo Dataset (2017). This is the first of the Lunar Lab LHS-1 data to be released to the public, made possible through the work of NASA fellowship student Maggie Hansen.

[May.2023] I am thrilled to have been awarded NASA’s Exceptional Technology Achievement Medal for my contributions to the VIPER mission and to perception in extreme environments!

[Apr.2023] Proud of the VIPER Navigation Team for hosting top-level NASA administrators, who visited our lab to see rover navigation development for the lunar poles.

About Me

I am a robotics researcher based in the San Francisco Bay Area. My technical focus is extreme perception, at the intersection of physics-based vision and mobile robotics. I believe that novel camera systems and an understanding of light transport enable robust perception in the most challenging conditions. I have spent almost 20 years giving robots the ability to see at the frontiers of exploration, from dark caves and planetary poles to icy surfaces.

I am currently a Senior Computer Scientist in the civil service at NASA’s Ames Research Center. My duties include managing all aspects of internal and contract research, advocating for technology, building partnerships, and serving as a subject matter expert. I was offered an appointment after being employed at Ames through a government contractor for a number of years. Before that, I did an extended tour at Carnegie Mellon University, culminating in a research scientist position in the Robotics Institute. While in academia, I taught, volunteered as a STEM mentor, and chaired the Field Robotics Seminar series.

I received my PhD in Robotics from CMU under Red Whittaker. My dissertation explored the fusion of optical sensors (cameras, LIDAR, structured light, etc.) for planetary 3D perception. The key idea is the use of targeted vision and illumination approaches (coined Lumenhancement) in appearance-constrained environments, and generalization to similar spaces using the concept of “appearance domains.”