Washington, Dec 8 (IANS) Researchers at the University of Washington have developed a new system that can capture the 'persona' of a real person from a vast number of images and create a digital model of that person.
The algorithms can also animate the digital model of a famous personality to deliver speeches that the real person never actually gave.
The technology relies on advances in 3D face reconstruction, tracking, alignment, multi-texture modelling and puppeteering that have been developed over the last five years.
The team's latest advances include the ability to transfer expressions and the way a particular person speaks onto the face of someone else -- for instance, mapping former President George W. Bush's mannerisms onto the faces of other politicians and celebrities.
It's one step toward a grand goal shared by the UW computer vision researchers: creating fully interactive, three-dimensional digital personas from family photo albums and videos, historic collections or other existing visuals.
"You might one day be able to put on a pair of augmented reality glasses and there is a 3-D model of your mother on the couch," said senior author Kemelmacher-Shlizerman.
One day the reconstruction technology could be taken a step further, researchers said.
"Imagine being able to have a conversation with anyone you can't actually get to meet in person -- Barack Obama, Charlie Chaplin -- and interact with them," said co-author professor Steve Seitz.
"We're trying to get there through a series of research steps. One of the true tests is can you have them say things that they didn't say but it still feels like them? This paper is demonstrating that ability," Seitz said.
The results are scheduled to be presented at the International Conference on Computer Vision in Chile on December 16.