GUEST SPEAKER: Srinivasa Narasimhan “Taming the Complexity of Light Transport”

RIM@GT Special Seminar

Taming the Complexity of Light Transport

Srinivasa Narasimhan 
Robotics Institute at Carnegie Mellon University
http://www.cs.cmu.edu/~srinivas/
Date: April 2, 2012 (Monday)
Time: 4:30pm
Location: Klaus 2456

Abstract:

Underlying most computer vision research is a model of how light interacts with a scene and then reaches a camera to form images. Light propagates through a scene in complex ways: inter-reflections between scene points, diffusion beneath the surface of translucent materials like skin and marble, and scattering through media like the atmosphere and murky water. Despite this complexity, the vision community has historically modeled the brightness of a pixel as arising solely from light reflected by a single point in the world. Modeling this wide variety of optical phenomena is crucial for effective scene understanding in real-world environments. This talk will present computational imaging and illumination techniques that control and tame the complexity of light transport, with applications in computer vision, graphics, displays and robotics.
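
One flavor of “taming” that this research community has made famous is separating direct reflection from global effects like inter-reflection and subsurface scattering. As a heavily simplified illustration (not the talk’s content, and assuming idealized checkerboard patterns that light exactly half the scene in every frame), the high-frequency illumination method of Nayar, Krishnan, Grossberg and Raskar (SIGGRAPH 2006) reduces to a per-pixel max/min:

```python
import numpy as np

def separate_direct_global(images):
    """Split each pixel into direct and global components from
    images captured under shifted high-frequency illumination
    (Nayar et al., SIGGRAPH 2006).

    images: array-like of shape (K, H, W); in each of the K frames a
    shifted checkerboard lights half the scene, so across the stack
    every pixel is observed both lit and unlit.
    """
    stack = np.asarray(images, dtype=np.float64)
    L_max = stack.max(axis=0)  # pixel lit: direct + half of global
    L_min = stack.min(axis=0)  # pixel unlit: half of global only
    direct = L_max - L_min
    global_light = 2.0 * L_min
    return direct, global_light
```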

Website: http://www.cs.cmu.edu/~ILIM

Bio:

Srinivasa Narasimhan is an Associate Professor in the Robotics Institute at Carnegie Mellon University. He received his Master’s and Doctoral degrees in Computer Science from Columbia University in February 2000 and February 2004, respectively. His group focuses on novel techniques for imaging, illumination and light transport to enable applications in vision, graphics, robotics and medical imaging. His work has received several awards: the NSF CAREER Award (2007), the Okawa Research Grant (2009), an IEEE Best Paper Honorable Mention Award (CVPR 2000), the Adobe Best Paper Award (IEEE Workshop on Physics-Based Methods in Computer Vision, ICCV 2007) and a Best Paper Award (IEEE PROCAMS 2009). His research has been covered in the popular press, including The New York Times, PC Magazine and IEEE Spectrum. He co-chaired the ONR International Symposium on Volumetric Scattering in Vision and Graphics in 2007, the IEEE Workshop on Projector-Camera Systems (PROCAMS) in 2010, and the IEEE International Conference on Computational Photography (ICCP) in 2011, and serves on the editorial board of the International Journal of Computer Vision.

Microsoft Research Cliplets

What Are Cliplets?

A still photograph is a limited format for capturing a moment in time. Video is the traditional way to record a span of time, but the subjective “moment” one wants to capture is often lost in the shaky camerawork, irrelevant background clutter, and noise that dominate most casually recorded video clips.

Microsoft Research Cliplets is an interactive app that uses semi-automated methods to give users the power to create “Cliplets” — a type of imagery, made from handheld video clips, that sits between stills and video. The app provides a creative lens one can use to focus on important aspects of a moment by mixing static and dynamic elements from a video clip.
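
The interesting part of Cliplets is the interactive, semi-automated authoring, but the underlying composite is conceptually simple. A minimal sketch of that final step (not Microsoft’s implementation; it assumes the clip is already stabilized and that a user-painted mask marks the region that should stay in motion):

```python
import numpy as np

def composite_cliplet(frames, still, mask):
    """Cinemagraph-style composite: freeze the scene to `still`
    everywhere except inside `mask`, where the video keeps playing.

    frames: (T, H, W, 3) stabilized video clip
    still:  (H, W, 3) reference frame used as the static backdrop
    mask:   (H, W) floats in [0, 1]; 1 = dynamic, 0 = frozen
    """
    m = mask[None, :, :, None]             # broadcast over time, color
    return m * frames + (1.0 - m) * still  # (T, H, W, 3) output clip
```

Looped, the result freezes everything except the masked region, which is the basic cinemagraph effect that Cliplets builds on.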

The Faux-Vintage Photo: Full Essay Parts I, II and III » Cyborgology

I am working on a dissertation about self-documentation and social media, and have decided to take on theorizing the rise of faux-vintage photography (e.g., Hipstamatic, Instagram). From May 10-12, 2011, I posted a three-part essay. This post combines all three together: Part I: Instagram and Hipstamatic; Part II: Grasping for Authenticity; Part III: Nostalgia for the Present.

via The Faux-Vintage Photo: Full Essay Parts I, II and III » Cyborgology.

The First Plenoptic Camera on the Market

It looks like the wait for plenoptic cameras to hit the market is shorter than we thought when we reported on Adobe’s interesting demonstration of the technology earlier today. In fact, there is no wait — you can already purchase a plenoptic camera. German company Raytrix is the first to offer plenoptic cameras that let you choose focus points in post-processing and capture 3D images with a single sensor.

Their “entry-level” R5 camera shoots at 1 megapixel, while the high-end R11 shoots at 3 megapixels. The company can also convert any existing digital camera into a plenoptic camera by building a lens array for it, a process that takes 6-8 weeks.

The resulting images are processed with proprietary software written by Raytrix. Pricing isn’t published on their website and is available only on request. If you have any idea what these cameras cost, leave a comment letting us know!
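
Raytrix doesn’t document its pipeline, but the textbook way to refocus after capture is shift-and-add over the light field’s sub-aperture views (Ng et al., 2005). A toy sketch, assuming the raw sensor data has already been decoded into a 4D array of views:

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic refocusing by shift-and-add (Ng et al., 2005).

    light_field: (U, V, H, W) sub-aperture views, with (u, v)
                 indexing position on the lens aperture.
    alpha: selects the virtual focal plane; 0 keeps the original
           plane, larger magnitudes move it nearer or farther.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its aperture offset,
            # then average: points on the chosen plane line up and
            # stay sharp, everything else blurs.
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

Real implementations interpolate sub-pixel shifts instead of rolling by whole pixels, but the principle is the same.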

via The First Plenoptic Camera on the Market.

BigShot Kit Camera, Like Crack for Kids | Gadget Lab | Wired.com

The BigShot, still in testing, is a super-simple digicam from the Computer Vision Lab at Columbia University. It comes in parts, ready to be assembled (by kids, though I can’t wait to get my hands on one), and teaches you how these things work along the way. It’s not quite the transparent view you get from building an old analog camera, where you can see how everything works, but it’s as close as you can get with a machine that uses circuit boards.

via BigShot Kit Camera, Like Crack for Kids | Gadget Lab | Wired.com.

Computational Photography May Help Us See Around Corners – NYTimes.com

That’s not true. In most cameras, lenses still form the basic image. Computers have only a toehold, controlling megapixel detectors and features like the shutter. But in research labs, the new discipline of computational photography is gaining ground, taking over jobs that were once the province of lenses.
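
A small, concrete example of computation taking over a lens’s job is deblurring: if the point spread function (PSF) of the optics is known, a Wiener filter can undo much of the blur in software rather than in glass. A minimal sketch, not tied to any particular camera:

```python
import numpy as np

def wiener_deblur(blurred, psf, snr=100.0):
    """Invert a known optical blur via Wiener deconvolution.

    blurred: (H, W) grayscale image blurred by `psf`
    psf:     (h, w) point spread function of the optics
    snr:     assumed signal-to-noise ratio; regularizes the
             inverse where the PSF passes little energy
    """
    # Embed the PSF in a full-size array centered on the origin so
    # its FFT lines up with the image's frequency grid.
    pad = np.zeros_like(blurred, dtype=np.float64)
    h, w = psf.shape
    pad[:h, :w] = psf
    pad = np.roll(pad, (-(h // 2), -(w // 2)), axis=(0, 1))

    P = np.fft.fft2(pad)
    B = np.fft.fft2(blurred)
    # Wiener filter: an inverse filter, damped at frequencies the
    # optics barely transmit.
    W = np.conj(P) / (np.abs(P) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(B * W))
```

Real pipelines use measured PSFs and better noise models, but the idea is the same: the lens’s shortcomings are corrected in software.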

via Computational Photography May Help Us See Around Corners – NYTimes.com.

The Space Beyond Me on Vimeo

This video shows the work “The Space Beyond Me” by Julius von Bismarck at Transmediale 2010. The video was shot by Andreas Schmelas and Julius von Bismarck and edited by Julius von Bismarck. The sound is the original sound in the exhibition space, so it is affected by other pieces in the exhibition. More information at: JuliusVonBismarck.com

via The Space Beyond Me on Vimeo.

Suggested by Blackili Rudi Milhose

Fulgurator-tech : Julius von Bismarck

Technically, the Image Fulgurator works like a classical camera, though in reverse. In a normal camera, the light reflected from an object is projected via the lens onto the film. In the Image Fulgurator, the process is exactly the opposite: instead of an unexposed film, an exposed and developed roll of slide film is loaded into the camera, with a flash behind it. When the flash goes off, the image is projected from the film via the lens onto the object.

Also see http://www.youtube.com/watch?v=EAX_3Bgel7M

Suggested by Blackili Rudi Milhose

Visualizing video at the speed of light — one trillion frames per second – YouTube

MIT Media Lab researchers have created a new imaging system that can acquire visual data at a rate of one trillion frames per second. That’s fast enough to produce a slow-motion video of light traveling through objects. Video: Melanie Gonick.
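
The headline number is easy to sanity-check with nothing but the speed of light:

```python
c = 3.0e8        # speed of light, m/s
fps = 1.0e12     # one trillion frames per second
per_frame = c / fps           # meters light travels between frames
print(per_frame * 1e3, "mm")  # -> 0.3 mm per frame
```

Light advances only about 0.3 mm between consecutive frames, which is why the footage can show a pulse creeping through a scene.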

via Visualizing video at the speed of light — one trillion frames per second – YouTube.