RIM@GT Special Seminar
Taming the Complexity of Light Transport
Speaker: Srinivasa Narasimhan, Robotics Institute, Carnegie Mellon University
Date: April 2, 2012 (Monday)
Location: Klaus 2456
Underlying most of computer vision research is a model of how light interacts with a scene and then reaches a camera to form images. Light propagates through a scene in complex ways: inter-reflections between scene points, diffusion beneath the surface of translucent materials like skin and marble, and scattering through media like the atmosphere and murky water. Despite this complexity, the vision community has historically modeled the brightness of an image pixel as arising solely from light reflected at a single point in the world. Modeling this wide variety of optical phenomena is crucial for effective scene understanding in real-world environments. This talk will present computational imaging and illumination techniques to control and tame the complexity of light transport for many applications in the areas of computer vision, graphics, displays and robotics.
Srinivasa Narasimhan is an Associate Professor in the Robotics Institute at Carnegie Mellon University. He received his Master's and doctoral degrees in Computer Science from Columbia University in February 2000 and February 2004, respectively. His group focuses on novel techniques for imaging, illumination and light transport to enable applications in vision, graphics, robotics and medical imaging. His work has received several awards: the NSF CAREER Award (2007), the Okawa Research Grant (2009), the IEEE Best Paper Honorable Mention Award (CVPR 2000), the Adobe Best Paper Award (IEEE Workshop on Physics-Based Methods in Computer Vision, ICCV 2007) and the Best Paper Award (IEEE PROCAMS 2009). His research has been covered in the popular press, including The New York Times, PC Magazine and IEEE Spectrum. He co-chaired the ONR International Symposium on Volumetric Scattering in Vision and Graphics in 2007, the IEEE Workshop on Projector-Camera Systems (PROCAMS) in 2010, and the IEEE International Conference on Computational Photography (ICCP) in 2011, and serves on the editorial board of the International Journal of Computer Vision.
Microsoft Research Cliplets.
What Are Cliplets?
A still photograph is a limited format for capturing a moment in time. Video is the traditional method for recording durations of time, but the subjective “moment” that one desires to capture is often lost in the chaos of shaky camerawork, irrelevant background clutter, and noise that dominates most casually recorded video clips.
Microsoft Research Cliplets is an interactive app that uses semi-automated methods to give users the power to create "Cliplets" from handheld videos: a type of imagery that sits between stills and video. The app provides a creative lens one can use to focus on important aspects of a moment by mixing static and dynamic elements from a video clip.
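The core idea of mixing static and dynamic elements can be sketched in a few lines: freeze one reference frame as the background, then composite only a masked "dynamic" region from each video frame on top of it. This is a minimal illustration of the concept, not Cliplets' actual pipeline; the function name and the assumption of a single fixed mask (and of already-stabilized footage) are mine.

```python
import numpy as np

def make_cliplet(frames, mask):
    """Hold everything outside `mask` fixed at the first frame,
    so only the masked subject keeps moving.

    frames: video as an array of shape (T, H, W, C).
    mask:   boolean (H, W) array marking the dynamic region.
    Returns a new (T, H, W, C) sequence.
    """
    frames = np.asarray(frames)
    still = frames[0]                      # frozen background frame
    out = np.empty_like(frames)
    for t in range(frames.shape[0]):
        # Inside the mask take the live frame, outside take the still.
        out[t] = np.where(mask[..., None], frames[t], still)
    return out
```

A real tool would add stabilization and seamless blending at the mask boundary; this sketch only shows the static/dynamic compositing step.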
I am working on a dissertation about self-documentation and social media and have decided to take on theorizing the rise of faux-vintage photography (e.g., Hipstamatic, Instagram). From May 10-12, 2011, I posted a three-part essay. This post combines all three together:
Part I: Instagram and Hipstamatic
Part II: Grasping for Authenticity
Part III: Nostalgia for the Present
via The Faux-Vintage Photo: Full Essay Parts I, II and III » Cyborgology.
It looks like the wait for plenoptic cameras to hit the market is shorter than we thought when we reported earlier today on Adobe’s interesting demonstration on the technology. In fact, there is no wait — you can already purchase a plenoptic camera. German company Raytrix is the first to offer plenoptic cameras that allow you to choose focus points in post processing and capture 3D images with a single sensor.
Their “entry-level” R5 camera shoots at 1 megapixel, while the high-end R11 shoots at 3 megapixels. They can also convert any existing digital camera into a plenoptic camera, building a lens array for it within 6-8 weeks.
Processing the resulting images is done through proprietary software written by Raytrix. How much these cameras cost isn't published on their website; pricing is provided only on request. If you have any idea, leave a comment letting us know!
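For the curious: the "choose focus in post" trick rests on a simple idea. A plenoptic camera records a grid of sub-aperture views, and refocusing is (in its textbook form) just shift-and-sum: translate each view in proportion to its viewpoint offset, then average, so points at the chosen depth align and come out sharp while everything else blurs. Raytrix's actual software is proprietary; this is only a minimal sketch of the standard technique, with integer-pixel shifts for simplicity.

```python
import numpy as np

def refocus(light_field, shift):
    """Synthetic refocusing by shift-and-sum over sub-aperture views.

    light_field: array of shape (U, V, H, W), a grid of sub-aperture
    images indexed by viewpoint (u, v).
    shift: pixels of translation per unit of viewpoint offset; this
    picks the virtual focal plane.
    """
    U, V, H, W = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0   # center viewpoint
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view toward the center in proportion to its
            # offset, then accumulate. Real implementations would use
            # sub-pixel interpolation instead of np.roll.
            dy = int(round((u - cu) * shift))
            dx = int(round((v - cv) * shift))
            out += np.roll(light_field[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)
```

Sweeping `shift` over a range of values produces the familiar focal stack you can scrub through after the shot is taken.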
via The First Plenoptic Camera on the Market.
The BigShot, still in testing, is a super-simple digicam from the Computer Vision Lab at Columbia University. It comes in parts, ready to be assembled (by kids, but I can't wait to get my hands on one), and teaches you along the way how these things work. It's not quite the transparency you get from building an old analog camera, where you can see how everything works, but it's as close as you can get with a machine that uses circuit boards.
via BigShot Kit Camera, Like Crack for Kids | Gadget Lab | Wired.com.
That’s not true. In most cameras, lenses still form the basic image. Computers have only a toehold, controlling megapixel detectors and features like the shutter. But in research labs, the new discipline of computational photography is gaining ground, taking over jobs that were once the province of lenses.
via Computational Photography May Help Us See Around Corners – NYTimes.com.