Computational Photography Class Projects Exhibition 2011.
Date: April 28, 2011
Location: Klaus 1116E
The Computational Photography class for this term will hold an exhibition to showcase its final term projects. Come join us to see the creative and technical abilities of our students as they present photography projects powered by computing technologies. See digital image and video artifacts spanning high dynamic range imaging, photomosaics, panoramas, tilt-shift photography, image morphing, color manipulation, photo and video montages, interactive video manipulation, model building, photo/video tourism, subliminal imaging, and painterly rendering. Some students are exploring technical and developmental issues; others are using existing tools to explore newer creative possibilities in photography.
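As a flavor of one of the listed techniques, here is a toy sketch of HDR-style exposure fusion: each pixel in a stack of differently exposed shots is weighted by how well-exposed it is (close to mid-gray), and the weighted average is taken. This is only an illustrative simplification; real pipelines (e.g. Mertens-style fusion) also weight by contrast and saturation and blend across scales.

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Naive exposure fusion: weight each pixel by its closeness to
    mid-gray (a "well-exposedness" Gaussian), then take the weighted
    average across the exposure stack. Inputs are same-sized
    grayscale images with values in [0, 1]."""
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)

# Two synthetic "exposures" of a flat scene: one under-, one over-exposed.
dark = np.full((4, 4), 0.1)
bright = np.full((4, 4), 0.9)
fused = fuse_exposures([dark, bright])
```

By symmetry the two exposures here get equal weight, so the fused image lands at mid-gray; with real images, shadows are pulled from the bright frame and highlights from the dark one.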
For further information on the class, topics covered, and other projects done by students during the course of the term, see the class website at https://compphotography.wordpress.com/.
The spring term offering of Computational Photography (CS 4475HP) is taught under the Georgia Tech Honors Program. Some graduate students are also participating through a special graduate offering, CS 8803CP. This term's class draws majors from across Georgia Tech, with one common interest: photography.
This showcase is hosted by the GVU Center, the RIM @ GT Center, and the School of Interactive Computing in the College of Computing at Georgia Tech.
The 175 photos that follow look like they might have been taken during the day. But it’s not day. It’s night. The photos are lying, thanks to long exposures that soak in the colorful nightlife.
via 175 Photos of Day Taken at Night. Thanks to Luis Cruz for the pointer.
Computer Vision Applications in Planetary Mapping and Robotics at NASA Ames
Ara Nefian and Xavier Bouyssounouse
Tuesday Feb 1, 2011, CoC 102
Generating accurate planetary maps and terrain models is becoming increasingly important as NASA plans new robotic missions to the Moon and Mars in the coming years. This talk describes research at the Intelligent Robotics Group at NASA Ames on building large-scale planetary maps using stereo and photometric techniques from imagery captured by the Apollo missions as well as current NASA and international missions. These maps are used by planetary scientists and mission planners, and are publicly released in Google Earth/Moon/Mars as well as the Microsoft WorldWide Telescope. The second part of the talk focuses on the vision systems used by the Mars rover missions as well as Earth-bound aerial vehicles.
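The stereo principle behind such terrain models can be sketched in a few lines: for a calibrated, rectified image pair, depth follows from the pinhole relation depth = focal_length × baseline / disparity. This is a toy illustration under those assumptions; real planetary pipelines add bundle adjustment, photometric refinement, and much more.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Standard rectified-stereo relation: depth = f * B / d.
    disparity: per-pixel horizontal offset between the image pair (pixels)
    focal_px:  camera focal length (pixels)
    baseline_m: distance between the two camera centers (meters)
    Pixels with zero or negative disparity are marked unmeasurable (inf)."""
    d = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(d, np.inf)
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# A feature with 50 px disparity, 1000 px focal length, 0.3 m baseline
# sits at depth 1000 * 0.3 / 50 = 6 m.
depths = disparity_to_depth(np.array([50.0, 0.0]), 1000.0, 0.3)
```

The numbers (focal length, baseline) are made up for illustration, not taken from any Apollo or rover camera.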
Dr. Ara Nefian is a Senior Scientist with Carnegie Mellon University and NASA Ames Research Center where he leads the planetary mapping and modeling team.
His current research is focused on 3D lunar mapping from Apollo-era images and the vision system of the Mars Science Lab. In the past he was with Intel Research Labs in Santa Clara, CA, involved in several research projects including the OpenCV library, face and gesture recognition, audio-visual speech processing, web image clustering, and bioinformatics. In 2005 Dr. Nefian was part of the computer vision group within the Stanford racing team (Stanley) that won the DARPA Autonomous Navigation Grand Challenge. His general interests are in the areas of 3D computer vision, machine learning, and robotics. He has co-authored more than 40 research papers and holds ten US and international patents. Ara Nefian holds a BS from Politehnica University Bucharest (1993) and an MSEE and PhD from the Georgia Institute of Technology (1999).
Xavier Bouyssounouse leads the Pipeline Threat Detection Project at NASA Ames Research Center and is currently working on 2D and 3D computer vision algorithms and robot locomotion. Xavier has a Master's Degree in Electrical Engineering from the University of California, and Bachelor's Degrees in Physics and Mathematics from the University of California at Santa Cruz.
YouTube – snapfactorys Channel.
Some nice video tutorials on photography.
Computational Photography May Help Us See Around Corners – NYTimes.com.
Anyone who has witnessed the megapixel one-upmanship in camera ads might think that computer chips run the show in digital photography.
That’s not true. In most cameras, lenses still form the basic image. Computers have only a toehold, controlling megapixel detectors and features like the shutter. But in research labs, the new discipline of computational photography is gaining ground, taking over jobs that were once the province of lenses.