How much do you trust what you see? How about what you hear?
Several months ago, I posted about Face2Face, a program that allows for the manipulation of video in real time to change and animate facial expressions. It's an interesting technology, but also a concerning one: all you have to do is watch this video to see how it might be used in the wrong hands.
Well, today we have something new and exciting to fuel the fires of conspiracy.
According to Engadget, this month Adobe unveiled their experimental Project VoCo tool, which allows users to insert dialogue into preexisting voiceover recordings simply by typing in the words or phrases they want to hear. The new dialogue will sound more or less just like the original voice.
The software does this by analyzing the sound of the original voice (about 20 minutes' worth of audio is enough), then synthesizing the new words to match.
In the above video, you can watch Adobe’s new tech in action, in this case using an audio sample of Keegan-Michael Key. It’s not perfect, but the real magic begins at around 4:20, when the presenter shows off the ability to add small phrases.
As usual, the first question that arrives with such a technology is: Can it be used for evil? The answer, of course, is yes. But Adobe is also apparently researching ways to detect audio forgeries, through measures like watermarking.
That said, as with any software, you can imagine that other variations will pop up in the future without said watermarks. The technology is here, one way or the other.
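Adobe hasn't said how its watermarking would work, but the general idea behind audio watermarks is easy to illustrate. Here's a toy sketch (my own illustration, not Adobe's approach) that hides a signature in the least significant bits of 16-bit PCM samples, where the change is far too small to hear:

```python
# Toy audio watermark: hide a bit pattern in the least significant bit
# of 16-bit PCM samples. Flipping an LSB changes amplitude by at most
# 1/32768 of full scale -- well below audibility.
# This is NOT Adobe's (undisclosed) scheme, just the simplest example of
# the concept: embed an inaudible signature, then check for it later.

def embed_watermark(samples, bits):
    """Return a copy of `samples` with `bits` written into the LSBs."""
    marked = list(samples)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit
    return marked

def extract_watermark(samples, n_bits):
    """Read back the first `n_bits` LSBs."""
    return [s & 1 for s in samples[:n_bits]]

signature = [1, 0, 1, 1, 0, 1, 0, 0]          # hypothetical "authentic" tag
audio = [100, -250, 32767, 0, 5, -1, 42, 7]   # stand-in PCM sample values
marked = embed_watermark(audio, signature)

print(extract_watermark(marked, 8) == signature)  # True: signature survives
print(max(abs(a - b) for a, b in zip(audio, marked)))  # 1: inaudible change
```

A real scheme would need to be far more robust than this: an LSB mark is destroyed by any compression or re-recording, so production watermarks spread the signature across the signal in ways that survive processing.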
The future might be a very confusing place.