Depth: the Future of Imaging – YouTube.
An interesting video from the Pelican Imaging folks advocating new forms of 3D imaging.
Capturing depth is the next step in photography. Experts weigh in: Kartik Venkataraman (Pelican Imaging), Raj Talluri (Qualcomm), and Hao Li (USC). What does the future of imaging look like when we can use our phones to capture photos and video in 3D?
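Camera arrays like Pelican's recover depth from the parallax between neighboring views. As a point of reference (this is the generic pinhole-stereo relation, not Pelican's specific pipeline, and the numbers below are illustrative), depth falls out of disparity like this:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole-stereo depth relation: Z = f * B / d.
    disparity_px: pixel shift of a feature between two views,
    focal_px: focal length in pixels, baseline_m: camera spacing in meters."""
    return focal_px * baseline_m / disparity_px

# A feature shifted 10 px between cameras 10 cm apart (f = 1000 px)
# sits about 10 m away.
print(depth_from_disparity(10, 1000, 0.1))  # 10.0
```

The same relation is why tiny phone-sized baselines need sub-pixel disparity estimation to resolve depth at any useful distance.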
An interesting and practical example of computational photography from CMU: a projector/camera (PROCAM) system.
A camera senses oncoming cars, falling precipitation and other objects of interest, such as road signs. The one million light beams can then be adjusted accordingly, some dimmed to spare the eyes of oncoming drivers, while others might be brightened to highlight street signs or the traffic lane. The changes in overall illumination are minor, however, and generally not noticeable by the driver.
via Smart Headlights Spare the Eyes of Oncoming Drivers – Carnegie Mellon News, Carnegie Mellon University.
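The core idea is simple once the camera and projector-style headlight are pixel-aligned: wherever the camera sees something that shouldn't be lit at full power, the corresponding light beams are attenuated. A toy sketch of that masking step (the threshold, dim factor, and alignment assumption are mine, not from the CMU system):

```python
import numpy as np

def headlight_mask(camera_frame, glare_thresh=0.8, dim_factor=0.2):
    """Illustrative sketch: build a per-pixel intensity mask for the
    headlight from a co-located, pixel-aligned camera frame. Pixels where
    the camera sees a bright source (e.g. an oncoming car's headlights)
    are dimmed; everything else stays at full brightness."""
    mask = np.ones_like(camera_frame)      # full brightness by default
    glare = camera_frame > glare_thresh    # bright oncoming sources
    mask[glare] = dim_factor               # spare the driver's eyes
    return mask

frame = np.zeros((4, 4))
frame[1, 2] = 0.95                         # a simulated glare source
m = headlight_mask(frame)
print(m[1, 2], m[0, 0])                    # 0.2 1.0
```

In the real system the same mask logic runs in the other direction too, brightening rays that fall on road signs or lane markings, while keeping the overall illumination change small enough to go unnoticed.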
It looks like the wait for plenoptic cameras to hit the market is shorter than we thought when we reported earlier today on Adobe’s interesting demonstration on the technology. In fact, there is no wait — you can already purchase a plenoptic camera. German company Raytrix is the first to offer plenoptic cameras that allow you to choose focus points in post processing and capture 3D images with a single sensor.
Their “entry-level” R5 camera shoots at 1 megapixel, while the high-end R11 shoots at 3 megapixels. Raytrix can also convert any existing digital camera into a plenoptic camera by building a custom lens array for it, a process that takes 6–8 weeks.
Processing the resulting images is done through proprietary software written by Raytrix. How much these cameras cost isn’t published on their website, and is only provided on request. If you have any idea, leave a comment letting us know!
via The First Plenoptic Camera on the Market.
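Choosing the focus point in post is the classic plenoptic trick: the lenslet array records many slightly offset sub-aperture views, and refocusing is just shifting each view in proportion to its lenslet offset and averaging. Here is a minimal shift-and-sum sketch of that general technique (this is the textbook light-field method, not Raytrix's proprietary software; the data layout is an assumption for illustration):

```python
import numpy as np

def refocus(subaperture_imgs, alpha):
    """Shift-and-sum refocusing over a light field.
    subaperture_imgs: dict mapping (u, v) lenslet offsets to 2-D views.
    alpha sets the synthetic focal plane: each view is shifted by
    alpha * (v, u) pixels before averaging, so scene depths whose
    parallax matches that shift come into focus while others blur."""
    acc = np.zeros_like(next(iter(subaperture_imgs.values())), dtype=float)
    for (u, v), img in subaperture_imgs.items():
        shift = (int(round(alpha * v)), int(round(alpha * u)))
        acc += np.roll(img, shift, axis=(0, 1))
    return acc / len(subaperture_imgs)
```

With `alpha = 0` the views are averaged unshifted (focus at the plane of zero parallax); sweeping `alpha` sweeps the focal plane through the scene, which is exactly the "choose focus in post processing" capability the article describes.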
That’s not true. In most cameras, lenses still form the basic image. Computers have only a toehold, controlling megapixel detectors and features like the shutter. But in research labs, the new discipline of computational photography is gaining ground, taking over jobs that were once the province of lenses.
via Computational Photography May Help Us See Around Corners – NYTimes.com.
Fulgurator-tech: Julius von Bismarck.
Technically, the Image Fulgurator works like a classical camera, though in reverse. In a normal camera, the light reflected from an object is projected via the lens onto the film. In the Image Fulgurator, this process is exactly the opposite: instead of an unexposed film, an exposed and developed roll of slide film is loaded into the camera and behind it, a flash. When the flash goes off, the image is projected from the film via the lens onto the object.
Also see http://www.youtube.com/watch?v=EAX_3Bgel7M
Suggested by Blackili Rudi Milhose
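The trigger that makes the Fulgurator work is essentially an optical slave flash: a light sensor watches for another photographer's flash and fires the projection in the same instant, so the injected image appears only in their photo. A toy sketch of that detection step (the threshold and the sample-stream interface are illustrative assumptions, not von Bismarck's actual circuit):

```python
def flash_trigger(samples, threshold=0.7):
    """Scan a stream of light-sensor readings and return the index at
    which a foreign camera flash is detected -- the moment to fire the
    projection flash. Returns None if no flash occurs."""
    for i, level in enumerate(samples):
        if level > threshold:
            return i
    return None

# Ambient light, then a flash spike at sample 2:
print(flash_trigger([0.1, 0.12, 0.95, 0.3]))  # 2
```

Because the projection lasts only as long as a flash burst, bystanders see nothing; only exposures synchronized to that instant capture the injected image.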
MIT Media Lab researchers have created a new imaging system that can acquire visual data at a rate of one trillion frames per second. That’s fast enough to produce a slow-motion video of light traveling through objects. Video: Melanie Gonick.
via Visualizing video at the speed of light — one trillion frames per second – YouTube.
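To put a trillion frames per second in perspective, a quick back-of-the-envelope calculation shows how far light travels between consecutive frames:

```python
c = 299_792_458          # speed of light, m/s
fps = 1e12               # one trillion frames per second

per_frame_mm = c / fps * 1000   # meters per frame, converted to mm
print(round(per_frame_mm, 2))   # 0.3
```

At roughly 0.3 mm per frame, a light pulse crossing an ordinary bottle spans hundreds of frames, which is why the MIT footage can show the pulse visibly propagating through the scene.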