Uh oh…perhaps there is WAY more out there than we know about challenging our “known” belief systems. If powers (we do not know about) are now capable of creating very “life-like” videos with which to fool us whenever and wherever they want…how do we determine what is real? Thanks to Galactic Connection for this one!
This is a very troubling concept for me, as it challenges our observation of reality. Really? Can we accurately determine what is “fake” and what is “real”? We are getting down to the wire here on Earth, folks, and ALL of our belief systems are going to be challenged. Life itself is being challenged now!
But…still, we can all relax, as we KNOW the outcome of this game. Although I think it important to stay aware, I DO NOT want to have current events drive my own personal belief system. I AM aware of the current situation on Earth (and it is troubling if I dwell there for too long), but I like to be an observer and let all of this just roll past, because I KNOW Love Wins!
Now…let’s find out more about AI producing “fake” videos. Please read this article, stay aware, stay calm, and…
(ANTIMEDIA) — A new artificial intelligence (AI) algorithm is capable of manufacturing simulated video imagery that is indiscernible from reality, say researchers at Nvidia, a California-based tech company. AI developers at the company have released details of a new project that allows its AI to generate fake videos using only minimal raw input data. The technology can render a flawlessly realistic sequence showing what a sunny street looks like when it’s raining, for example, as well as what a cat or dog looks like as a different breed or even a person’s face with a different facial expression. And this is video — not photo.
For their work, researchers tweaked a familiar algorithm, known as a generative adversarial network (GAN), to allow their AI to create fresh visual data. The technique involves playing two neural networks against each other, but Nvidia’s new program requires far less input and no labeled datasets. In other words, AI is getting much, much better at mimicking reality.
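To make the “two neural networks played against each other” idea concrete, here is a toy sketch of an ordinary GAN on one-dimensional data: a tiny generator tries to mimic samples from a normal distribution while a logistic-regression discriminator tries to tell real from fake. This is a minimal illustration of the adversarial principle only, not Nvidia’s unsupervised image-translation model; all parameters and learning rates here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator should learn to mimic: samples near mean 3.
def real_samples(n):
    return rng.normal(3.0, 1.0, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: an affine transform of noise, g(z) = gw * z + gb.
gw, gb = 1.0, 0.0
# Discriminator: logistic regression, d(x) = sigmoid(da * x + dc).
da, dc = 0.1, 0.0

lr = 0.01
for step in range(2000):
    z = rng.normal(size=64)
    fake = gw * z + gb
    real = real_samples(64)

    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    pr = sigmoid(da * real + dc)
    pf = sigmoid(da * fake + dc)
    # Gradients of the binary cross-entropy loss w.r.t. da and dc.
    grad_a = np.mean((pr - 1.0) * real) + np.mean(pf * fake)
    grad_c = np.mean(pr - 1.0) + np.mean(pf)
    da -= lr * grad_a
    dc -= lr * grad_c

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    pf = sigmoid(da * fake + dc)
    g_common = (pf - 1.0) * da  # chain rule through the discriminator
    gw -= lr * np.mean(g_common * z)
    gb -= lr * np.mean(g_common)

# After training, generated samples should have drifted toward the real mean.
print(np.mean(gw * rng.normal(size=10000) + gb))
```

The adversarial pressure alone moves the generator’s output toward the real distribution, with no labels involved; Nvidia’s contribution, per the article, is extending this unsupervised idea so the two domains (e.g. sunny and rainy footage) need not even be paired.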
Nvidia researcher Ming-Yu Liu says it would normally require multiple pairs of datasets for an ‘image translation’ AI to generate this kind of information. The new iteration of GAN is a massive improvement and allows for the unsupervised growth of AI functionality.
“[And] there are many applications,” says Liu. “For example, it rarely rains in California, but we’d like our self-driving cars to operate properly when it rains. We can use our method to translate sunny California driving sequences to rainy ones to train our self-driving cars.”
The researchers note that in addition to uses in self-driving cars, realtors could also use the technology to show prospective homebuyers what properties might look like in different seasons. One can imagine a myriad of similar applications that could be integrated into existing industries or spawn entirely new services.
Of course, there are also fears that the technology portends a dystopian future in which mega-corporations or governments can manipulate news media, eliminate or alter visual evidence of crimes, or even manufacture events that didn’t happen. We’re now beyond the phase of questioning whether something was photoshopped. Adding yet another wrinkle to the era of “fake news,” we may soon have to wonder whether or not video clips are AI generated.