3-D Photo Visualization and Beyond: ‘Vibrant Media’

To add pizzazz to his photos of family vacations, Horst Haussecker recently bought a GPS-enabled camera that identifies the location where each shot was taken. Instead of being excited about the latest feature, however, he found himself wanting more.

“It was the most disappointing experience, actually,” said Haussecker, who works for Intel in Santa Clara, Calif. “The camera had the capability but no way to see the photos on a map without a major effort, and even then it didn’t visualize the photos in a 3-D setting. It merely pinpointed the location on a Google map.”
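The “major effort” he describes is not hypothetical: today, putting a geotagged shot on a map means digging the coordinates out of the file yourself. Purely as an illustration of that status quo, here is a minimal sketch assuming Python with the Pillow and folium libraries and a made-up filename:

```python
# A minimal sketch (not Intel's tooling) of what "seeing geotagged photos on a
# map" involves today: pull GPS coordinates out of a JPEG's EXIF block with
# Pillow, then drop a pin on an HTML map with folium. File names are made up.
from PIL import Image

GPS_IFD = 0x8825  # EXIF tag that holds the GPS sub-directory


def gps_from_exif(path):
    """Return (latitude, longitude) in decimal degrees, or None if untagged."""
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(GPS_IFD)
    if not gps:
        return None

    def to_degrees(dms, ref):
        # EXIF stores latitude/longitude as degrees, minutes, seconds rationals.
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg

    lat = to_degrees(gps[2], gps[1])  # tags 1/2: GPSLatitudeRef / GPSLatitude
    lon = to_degrees(gps[4], gps[3])  # tags 3/4: GPSLongitudeRef / GPSLongitude
    return lat, lon


if __name__ == "__main__":
    import folium

    coords = gps_from_exif("vacation_001.jpg")  # hypothetical photo
    if coords:
        m = folium.Map(location=list(coords), zoom_start=14)
        folium.Marker(list(coords), popup="vacation_001.jpg").add_to(m)
        m.save("photo_map.html")
```

Even then, the result is exactly what disappointed Haussecker: a pin on a flat map, not a photo placed in a 3-D scene.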

“Vibrant Media” transforms flat pictures of a conventional photo album into multi-dimensional data collections.

For most people, returning the camera for a full or partial refund would be about the only recourse, and Haussecker might still make that trip back to the store. But as director of Intel’s Experience Technology Lab, he is in the rare and enviable position of actually being part of the solution.

At the Intel Developer Forum in San Francisco, Haussecker and his team demonstrated their progress toward tomorrow’s photo collections, in which flat pictures become multi-dimensional data collections. The hope for this new technology is to “open new windows into perception, memory and imagination,” according to the 10-year Intel Labs veteran.

Haussecker and Intel call it “Vibrant Media,” and within two years users will see some aspects of it on their PCs, including Ultrabooks and tablets, and eventually on their smartphones.

The prize of Vibrant Media is a captured moment the user can return to and explore. A photo of a batter taken in a baseball stadium, for example, would capture not just the hitter but also the peanut vendor in the background, still in focus, making an amazing behind-the-back toss. Today’s cameras tend to lose that kind of detail.

Technology that captures photos and builds 3-D models is already on the market in the form of plenoptic (light-field) cameras and similar devices that can fire off a sequence of images with minuscule delay. That, however, is only one aspect of what Intel and other developers are working on.
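The article does not detail the algorithms behind those devices, but one well-known way a burst of frames becomes more than a single flat picture is focus stacking: keep, for each pixel, the frame in which it is sharpest, so the batter and the peanut vendor can both end up in focus. A rough sketch of that technique, assuming aligned frames and using OpenCV and NumPy (illustrative only, not Intel’s pipeline):

```python
# Focus stacking sketch: fuse a burst of frames focused at different depths
# into one all-in-focus image by keeping, per pixel, the frame whose local
# neighborhood is sharpest. Assumes the frames are already aligned.
import cv2
import numpy as np


def focus_stack(frames):
    """frames: list of equally sized BGR images (np.uint8). Returns one image."""
    sharpness = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Laplacian magnitude ~ local contrast ~ how in-focus a pixel is.
        lap = np.abs(cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F))
        # Smooth so isolated noisy pixels don't win the per-pixel vote.
        sharpness.append(cv2.GaussianBlur(lap, (9, 9), 0))

    # For every pixel, the index of the sharpest frame.
    best = np.argmax(np.stack(sharpness, axis=0), axis=0)

    stack = np.stack(frames, axis=0)          # shape: (n, H, W, 3)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]            # all-in-focus composite


if __name__ == "__main__":
    # Hypothetical burst of five frames captured with a sweep of focus.
    burst = [cv2.imread(f"burst_{i}.jpg") for i in range(5)]
    cv2.imwrite("all_in_focus.jpg", focus_stack(burst))
```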

“There’s much more to photos than just having models – that’s only the first step,” Haussecker said. “The next step is to create something that isn’t static. There’s much more to this world than a static 3-D shape. The real world is live. Things are moving. Things are happening.”

The ultimate challenge, and one in which Intel is seeking industry collaboration, is to create media capabilities that let consumers extract more detail, and more information, from a photo.

“We need to stop chasing the perfect shot. There is no right or wrong answer in capturing a scene,” Haussecker said. “We must change the paradigm. Today, creation and consumption are considered two different things. The future paradigm will combine these two aspects.”

By virtue of the new paradigm, the user will browse media in more interactive ways. Creation and consumption will become one.

“You’re not creating an artifact that you save for eternity and do nothing else with it,” he said. “Instead, you’re capturing very rich data sets with a large number of images that tell you something about the orientation of the camera, who is in the photo and when you’re taking the photo. You’re not just passively watching.”
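What such a “rich data set” might look like in practice is still an open question. Purely as an illustration of the capture context Haussecker lists, a single record could bundle a burst of frames with the camera’s orientation, the people recognized in the shot and a timestamp. Every field name below is a hypothetical stand-in, not an Intel format:

```python
# A hypothetical sketch of one record in the kind of "rich data set" Haussecker
# describes: many frames plus sensor context, rather than a single flat image.
# All field names and values are assumptions made for illustration.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional, Tuple


@dataclass
class CaptureMoment:
    frames: List[str]                        # paths to the burst of raw frames
    captured_at: datetime                    # when the shot was taken
    gps: Optional[Tuple[float, float]]       # (lat, lon) if the camera knows it
    orientation: Tuple[float, float, float]  # camera yaw, pitch, roll in degrees
    people: List[str] = field(default_factory=list)  # recognized subjects
    depth_map: Optional[str] = None          # optional per-pixel depth estimate


# Example record with made-up values.
moment = CaptureMoment(
    frames=["burst_0.jpg", "burst_1.jpg", "burst_2.jpg"],
    captured_at=datetime(2012, 9, 11, 19, 42),
    gps=(37.77, -122.42),          # made-up San Francisco coordinates
    orientation=(152.0, -3.5, 0.8),
    people=["Horst"],
)
```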

The concept is so novel that Intel and its development partners are still figuring out how consumers might use the technology.

“We have a toolbox of brand new hardware and software capabilities. What we don’t have is full knowledge of what technologies people need,” Haussecker said. “That’s why we need to work with social scientists and ethnographers to help us understand the needs and desires of what people want in the first place, and then designers to work on the interaction and device capabilities. Next is working with the tech team to build algorithms to create the perfect photographic experience.”

Intel, which is taking on the computing side of all this, is hopeful that Vibrant Media becomes a feature people want in a new class of personal computing and a range of new devices, including Ultrabooks, tablets and smartphones.

“Intel knows that people are yearning to be more creative with their photography, and we want to deliver the goods that satisfy this need,” Haussecker said. “We’re long past the day when people only took pictures for documenting important events. Today, people are using cameras in much more playful ways. You’re in a bar and take a photo that might never be used.”

With “vibrant media,” that random picture taken at the bar could be as rich as a potent mudslide.

“A simple photo of friends having a good time wouldn’t be simple at all,” Haussecker said. “It would be more than a photo or collection of photos. It would truly capture the moment to be re-experienced in vivid ways. The day of the fleeting, random photo is coming to an end, and that’s very exciting.”

Vibrant Media: Future 3-D Photography
