• Optical Sensors and Active Illumination

• Model Visualization

• Subterranean and Space Robots

• Automated 3D Mapping

About Me

News Highlights
[Feb. 2020] Our computational 3D microscope concept for planetary micro-rovers has been selected for Center Innovation Funding.
[Dec. 2019] We have released the FROST dataset, the first catalog of icy-moon analogs emphasizing terrain geometry and appearance at the 1 cm scale.
[Sep. 2019] We are participating in a CMU-led NIAC Phase III project to develop the first mission to a lunar skylight via commercial means.
[Mar. 2019] The Pits and Caves 3D Dataset has found a new permanent home on NASA servers.

Biography
I am a robotics researcher based in the SF Bay Area. My technical focus is extreme perception, at the intersection of physics-based vision and mobile robotics. I believe that novel camera systems and an understanding of light transport enable robust perception in the most challenging conditions. I have spent more than 15 years giving robots the ability to see at the frontiers of exploration, from dark caves and planetary poles to icy bodies.

I am currently a Senior Computer Scientist in the civil service at NASA's Ames Research Center. My duties include managing all aspects of internal and contract research, advocating for technology, building partnerships, and serving as a subject matter expert. I was offered this appointment after being employed at Ames through a government contractor for several years. Before that, I did an extended tour at Carnegie Mellon University, culminating in a research scientist position in the Robotics Institute. During that time, I taught, volunteered as a STEM mentor, and chaired the Field Robotics Seminar series.

I received my PhD in Robotics from CMU in 2012. My dissertation explored fusion of optical sensors (cameras, LIDAR, structured light, etc.) for planetary 3D perception. The key idea is the use of targeted vision and illumination approaches (coined Lumenhancement) in appearance-constrained environments, and generalization to similar spaces through the concept of "appearance domains". My advisor was Prof. William "Red" Whittaker.

Education

* PhD — Carnegie Mellon, Robotics (2012)

* MS — Carnegie Mellon, Robotics (2009)

* MS — Carnegie Mellon, Electrical & Computer Engineering (2006)

* BS with University Honors — Carnegie Mellon, Electrical & Computer Engineering (2006)