My research seeks to improve the performance of computer vision in extreme environments by understanding and exploiting the physics of light transport. It is well known that the use of active illumination in the form of “structured light” can greatly enhance image understanding; however, accurately modeled natural illumination can be leveraged in the same manner. My dissertation explored new ways to integrate computer vision, lighting, and photometry to tackle the most challenging perception environments. I have continued this approach in my professional career at NASA by leading navigation efforts on the VIPER and Resource Prospector missions, both targeted at the polar regions of the Moon. I have also led appearance modeling efforts on several of NASA's photorealistic simulators for the Moon and Europa.
Planetary Skylight and Cave Exploration
Skylights are recently discovered “holes” in the surfaces of the Moon and Mars that may lead to planetary caves. These vast, stadium-sized openings represent an unparalleled opportunity to access subterranean spaces on other planets. This research thrust developed robotic mechanisms and operations concepts for exploring these features. Tyrobot ("Tyrolean Robot") was developed to map skylight walls and floors while suspended from a cable tightrope. PitCrawler is a wheeled robot that uses a flexible chassis, an extremely low center of gravity, and energetic maneuvering to negotiate bouldered floors and sloped descents. We demonstrated these prototypes in analog pit mines that simulated the size and shape of lunar skylights.
Sensor Characterization
Many types of imaging and range sensors exist on the market, but manufacturer specifications are often non-comparable, collected in ideal settings, and not oriented toward robotics applications. The goal of this work was to provide a common basis for empirical comparison of optical sensors in underground environments. Data distribution and accuracy are then used to optimize sensor selection. The work included both an ideal laboratory characterization, where a novel 3D “checkerboard” target was scanned from multiple perspectives, and an in situ component, where mobile mapping in underground spaces was compared. I helped create and lead the Sensor Characterization Lab at CMU, which successfully fulfilled a DoD contract and other funded work in this area.
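One accuracy metric of the kind used in such comparisons can be sketched as follows: fit a least-squares plane to points scanned from a known planar facet of a calibration target and report the RMS point-to-plane residual. This is a minimal illustration of the idea, not the lab's actual characterization pipeline.

```python
import numpy as np

def plane_residual_rmse(points):
    """Accuracy proxy for a sensor scanning a known planar facet:
    fit a least-squares plane and return RMS point-to-plane distance.

    points: (N, 3) array of scanned 3D points.
    """
    centroid = points.mean(axis=0)
    # SVD of the centered cloud: the singular vector with the smallest
    # singular value is the normal of the best-fit plane.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    distances = (points - centroid) @ normal
    return float(np.sqrt((distances ** 2).mean()))
```

For a noise-free planar scan the residual is essentially zero; for a scan with 1 cm Gaussian range noise it reports roughly that noise level, giving a single comparable number per sensor.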
At NASA Ames, I am building new facilities and expertise for characterization of planetary surface sensors.
Multi-sensor Fusion for 3D Mapping (2009-2014)
Current robotic maps are limited by decades-old range sensing technology. Only multi-sensor approaches (LIDAR, camera, RADAR, and multispectral) can provide the density and quality of data required for automated inspection, operation, and science. My PhD research explored the synergistic cooperation of multi-modal optical sensing to enhance understanding of geometry (super-resolution), the selection of sampling locations (image-directed scanning), and material properties (from source motion).
Hybrid Optical Sensors (2009-)
I developed several novel sensors for mapping and imaging. My image-directed structured light scanner optically co-locates a high-resolution camera with the output illumination of a DLP projector using a half-silvered mirror. This configuration enables hardware-supported intelligent sample selection with high-resolution interpolation and texturing. During my thesis I also built a room-sized gonioreflectometer/sun simulator with no moving parts, using an array of commodity SLR cameras and LED illumination. This design was accurate enough to extract BRDFs of planetary materials for graphics rendering while costing roughly 1/100th as much as commercial spherical gantries.
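As a toy illustration of the kind of BRDF extraction such a rig enables, one can least-squares fit a reflectance model to measured (light direction, view direction, reflectance) samples. This sketch assumes a Lambertian term plus a Phong lobe of fixed shininess; the model choice and fitting procedure are illustrative, not the instrument's actual method.

```python
import numpy as np

def fit_lambertian_phong(wi, wo, n, measured, shininess=20.0):
    """Least-squares fit of diffuse (kd) and specular (ks) weights.

    wi, wo, n: (N, 3) unit vectors for light, view, and surface normal.
    measured:  (N,) observed BRDF values.
    shininess: fixed Phong lobe width (an assumed constant here).
    """
    diffuse = np.full(len(measured), 1.0 / np.pi)   # Lambertian basis
    # Mirror-reflect the light direction about the normal.
    refl = 2.0 * (wi * n).sum(axis=1, keepdims=True) * n - wi
    spec = np.clip((refl * wo).sum(axis=1), 0.0, 1.0) ** shininess
    A = np.column_stack([diffuse, spec])
    (kd, ks), *_ = np.linalg.lstsq(A, measured, rcond=None)
    return kd, ks
```

Given enough well-distributed samples from the camera/LED array, the two weights are recovered directly by linear least squares; richer models (e.g., measured lobes per wavelength) extend the same idea.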
Model Visualization (2009-2014)
Humans are the consumers of 3D models for training, oversight, operations, and presentation. My research investigated new methods of immersive display that enhance these tasks. Approaches included non-photorealistic techniques for feature highlighting, point splatting, hole filling for imperfect data, adaptive BRDF selection, radiance estimation, and geometry-image parameterizations. I later dabbled in 3D printing of robot-made models as tools for scientific understanding.
Commercial Lunar Robotics (2008-2011)
I supported ongoing research in lunar robotics at CMU. I led the automation of RedRover, a prototype equatorial rover designed to win the Google Lunar XPRIZE, and helped develop its stereo mapping capability. More recently, I contributed to an autonomous lunar lander project, developing algorithms for terrain modeling and analysis for use in mid-flight landing site selection. A combined lander/rover team from CMU and a spinoff company intends to visit the Lacus Mortis pit on the Moon.
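The terrain-analysis step can be illustrated with a toy hazard map computed from a digital elevation model: slope from local gradients, roughness as deviation from a local mean surface, and a threshold test per cell. The method and thresholds here are hypothetical placeholders, not the flight algorithms.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def hazard_map(dem, cell=1.0, max_slope_deg=10.0, max_rough=0.1):
    """Classify each DEM cell as safe/unsafe for landing (toy version).

    dem:  (H, W) elevation grid in meters.
    cell: grid spacing in meters.
    Thresholds are illustrative, not mission values.
    """
    # Slope from central-difference gradients.
    gy, gx = np.gradient(dem, cell)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    # Roughness: deviation from the local 3x3 mean surface.
    padded = np.pad(dem, 1, mode="edge")
    local_mean = sliding_window_view(padded, (3, 3)).mean(axis=(2, 3))
    rough = np.abs(dem - local_mean)
    safe = (slope <= max_slope_deg) & (rough <= max_rough)
    return slope, rough, safe
```

A lander would evaluate such maps over candidate sites during descent and divert toward the largest connected safe region.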
Subterranean Mapping (2007-2012)
Robots are poised to proliferate in underground civil inspection and mining operations. I took over development of the CaveCrawler mobile robot, a platform for research in these areas. CaveCrawler has inspected and mapped many miles of underground mines and tunnels using LIDAR. We also demonstrated rescue scout robots for locating victims and carrying supplies in disaster situations.
Borehole Scanning and Imaging (2006-2009)
I developed robots for inspecting the most hazardous and access-constrained underground environments. The MOSAIC camera is a borehole-deployed inspection robot that generates 360-degree panoramas. MOSAIC takes long-range photographs under active illumination and generates well-exposed images throughout using HDR imaging. Ferret is an underground void inspection robot; it deploys through a 3” drill core into unlined boreholes and produces 3D models of voids with a fiber-optic LIDAR.
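The HDR step can be sketched as exposure merging: each low-dynamic-range frame contributes an estimate of scene radiance (pixel value divided by exposure time), weighted to discount under- and over-exposed pixels. This assumes a linear camera response; the hat weighting and function below are an illustrative sketch, not MOSAIC's exact pipeline.

```python
import numpy as np

def merge_hdr(images, exposures):
    """Merge a stack of linear LDR exposures into one radiance map.

    images:    list of arrays with values in [0, 1] (linear response assumed).
    exposures: matching list of exposure times in seconds.
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposures):
        # Hat weight: peaks at mid-gray, zero at pure black/white,
        # so clipped pixels contribute nothing.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * img / t          # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```

Pixels saturated in the long exposure are recovered from the short one and vice versa, which is what makes long-range active-illumination imagery usable across a whole panorama.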
Human Odometer (2005)
The Human Odometer was a wearable personal localization system for first responders and warfighters. Teams utilizing smart positioning and identification experience enhanced situational awareness and reduced friendly-fire incidents. Bluetooth accelerometers and gyroscopes woven into a suit tracked a person’s steps and orientation. This information was reported to a battalion commander, while a handheld PDA brought up context-sensitive mapping and position information. My undergraduate senior thesis investigated Kalman filtering to fuse intermittent GPS and odometry data for more accurate positioning, and learned the specific parameters and variances of the step-detection model.
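The fusion idea can be sketched as a one-dimensional Kalman filter: dead-reckon with step estimates during GPS dropouts, then correct when a fix arrives. The noise parameters below are made-up illustrative values; the thesis work learned such parameters from data.

```python
def fuse_gps_odometry(steps, gps, q=0.04, r=1.0):
    """1D Kalman filter along a path (minimal sketch).

    steps: per-interval displacement from step detection (may be biased).
    gps:   position fix for each interval, or None during dropout.
    q, r:  process / measurement variances (illustrative values).
    """
    x, p = 0.0, 1.0              # state estimate and its variance
    track = []
    for dx, z in zip(steps, gps):
        x += dx                  # predict: dead-reckoned step
        p += q                   # uncertainty grows with each step
        if z is not None:        # correct only when a GPS fix exists
            k = p / (p + r)      # Kalman gain
            x += k * (z - x)
            p *= 1.0 - k
        track.append(x)
    return track
```

With biased step lengths, pure dead reckoning drifts without bound, while intermittent fixes repeatedly pull the estimate back toward truth; a fuller filter would also estimate the step-length bias as a state.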