RIM@GT Special Seminar
Taming the Complexity of Light Transport
Robotics Institute at Carnegie Mellon University
Date: April 2, 2012 (Monday)
Location: Klaus 2456
Underlying most computer vision research is a model of how light interacts with a scene and then reaches a camera to form images. Light propagates through a scene in complex ways: inter-reflections between scene points, diffusion beneath the surface of translucent materials like skin and marble, and scattering through media like the atmosphere and murky water. Despite this complexity, the vision community has historically treated the brightness of an image pixel as due solely to light reflected from a single point in the world. Modeling this wide variety of optical phenomena is crucial for effective scene understanding in real-world environments. This talk will present computational imaging and illumination techniques to control and tame the complexity of light transport, with applications in computer vision, graphics, displays, and robotics.
Srinivasa Narasimhan is an Associate Professor in the Robotics Institute at Carnegie Mellon University. He received his master's and doctoral degrees in Computer Science from Columbia University in February 2000 and February 2004, respectively. His group focuses on novel techniques for imaging, illumination, and light transport to enable applications in vision, graphics, robotics, and medical imaging. His work has received several awards: the NSF CAREER Award (2007), the Okawa Research Grant (2009), the IEEE Best Paper Honorable Mention Award (CVPR 2000), the Adobe Best Paper Award (IEEE Workshop on Physics-Based Methods in Computer Vision, ICCV 2007), and the Best Paper Award (IEEE PROCAMS 2009). His research has been covered in the popular press, including the NY Times, PC Magazine, and IEEE Spectrum. He co-chaired the ONR International Symposium on Volumetric Scattering in Vision and Graphics in 2007, the IEEE Workshop on Projector-Camera Systems (PROCAMS) in 2010, and the IEEE International Conference on Computational Photography (ICCP) in 2011, and serves on the editorial board of the International Journal of Computer Vision.
Computer Vision Applications in Planetary Mapping and Robotics at NASA Ames
Ara Nefian and Xavier Bouyssounouse
Tuesday Feb 1, 2011, CoC 102
Generating accurate planetary maps and terrain models is becoming increasingly important as NASA plans new robotic missions to the Moon and Mars in the coming years. This talk describes research at the Intelligent Robotics Group at NASA Ames on building large-scale planetary maps using stereographic and photometric techniques from imagery captured by the Apollo missions as well as current NASA and international missions. These maps are used by planetary scientists and mission planners, and are publicly released in Google Earth/Moon/Mars as well as the Microsoft WorldWide Telescope. The second part of the talk focuses on the vision systems used by the Mars rover missions as well as Earth-bound aerial vehicles.
Dr. Ara Nefian is a Senior Scientist with Carnegie Mellon University and NASA Ames Research Center where he leads the planetary mapping and modeling team.
His current research is focused on 3D lunar mapping from Apollo-era images and the vision system of the Mars Science Lab. In the past he was with Intel Research Labs in Santa Clara, CA, where he was involved in several research projects including the OpenCV library, face and gesture recognition, audio-visual speech processing, web image clustering, and bioinformatics. In 2005, Dr. Nefian was part of the computer vision group within the Stanford racing team (Stanley) that won the DARPA Autonomous Navigation Grand Challenge. His general interests are in the areas of 3D computer vision, machine learning, and robotics. He has co-authored more than 40 research papers and holds ten US and international patents. Ara Nefian holds a BS from Politehnica University Bucharest (1993) and an MSEE and PhD from the Georgia Institute of Technology (1999).
Xavier Bouyssounouse leads the Pipeline Threat Detection Project at NASA Ames Research Center, and is currently working on 2D and 3D computer vision algorithms and robot locomotion. Xavier has a master's degree in Electrical Engineering from the University of California, and bachelor's degrees in Physics and Mathematics from the University of California at Santa Cruz.
Non-Photorealistic Rendering and the Science of Art
University of Toronto
Thursday June 3, 2010
This will be an informal talk in two parts. First, I will describe some of my previous work and background in Non-Photorealistic Rendering, which aims to create images and animations that appear hand-painted or hand-drawn, and, more generally, to combine traditional artistic styles with computer graphics. Second, I will discuss how this kind of research could be used in the future to develop scientific theories of art and aesthetics.
Aaron Hertzmann is an Associate Professor of Computer Science at the University of Toronto. He received a BA in Computer Science and Art & Art History from Rice University in 1996, and an MS and PhD in Computer Science from New York University in 1998 and 2001, respectively. In the past, he has worked at the University of Washington, Microsoft Research, Mitsubishi Electric Research Lab, Interval Research Corporation, and NEC Research Institute. His awards include the MIT TR100 (2004), an Ontario Early Researcher Award (2005), a Sloan Foundation Fellowship (2006), a Microsoft New Faculty Fellowship (2006), a UofT CS teaching award (2008), and the CACS/AIC Outstanding Young CS Researcher Prize (2010). He is currently on sabbatical at Pixar Animation Studios and the Hebrew University of Jerusalem.