• Optical Sensors and Active Illumination

• Model Visualization

• Subterranean and Space Robots

• Automated 3D Mapping

About Me

News Highlights
[Jun.2022] NASA dataset links are back up for now after a server outage. There may be formatting issues on the pages as we migrate servers, but downloads will work. If you urgently need a copy of data, send email.
[May.2022] Our collaborative StreamFlow project, led by Dr. Legleiter of the USGS, was selected by the AIST program for funding.
[Nov.2021] NASA's VIPER Mission has passed its critical design review. I have spent the last two years leading the navigation sensor team for VIPER. When engineering ramps down in Sept 2022, I will be participating on the science team.

I am a robotics researcher based in the San Francisco Bay Area. My technical focus is extreme perception, at the intersection of physics-based vision and mobile robotics. I believe that novel camera systems and an understanding of light transport enable robust perception in the most challenging conditions. I have spent more than 15 years giving robots the ability to see at the frontiers of exploration - from dark caves and planetary poles to icy surfaces.

I am currently a Senior Computer Scientist in the civil service at NASA's Ames Research Center. My duties include managing all aspects of internal and contract research, advocating for technology, building partnerships, and serving as a subject matter expert. I was offered a civil service appointment after working at Ames through a government contractor for a number of years. Before that, I did an extended tour at Carnegie Mellon University, culminating in a research scientist position in the Robotics Institute. Academically, I taught, volunteered as a STEM mentor, and chaired the Field Robotics Seminar series.

I received my PhD in Robotics from CMU in 2012. My dissertation explored the fusion of optical sensors (cameras, LIDAR, structured light, etc.) for planetary 3D perception. The key idea is the use of targeted vision and illumination approaches (coined "Lumenhancement") in appearance-constrained environments, and their generalization to similar spaces via the concept of "appearance domains". My advisor was Prof. William "Red" Whittaker.


Education

PhD — Robotics, Carnegie Mellon University (2012)

MS — Robotics, CMU (2009)

MS — Electrical & Computer Engineering, CMU (2006)

BS (hons.) — Electrical & Computer Engineering, CMU (2006)