A new artificial intelligence system can take still images and generate short videos that simulate what happens next, much as humans can visually imagine how a scene will evolve, according to a new study.
Humans intuitively understand how the world works, which makes it easier for people than for machines to envision how a scene will play out. Because the objects in a still image could move and interact in a multitude of different ways, this is a very hard feat for machines, the researchers said. Even so, the new deep-learning system produced videos that fooled human viewers 20 percent of the time when compared with real footage.
Researchers at the Massachusetts Institute of Technology (MIT) pitted two neural networks against each other, with one trying to distinguish real videos from machine-generated ones, and the other trying to create videos that were realistic enough to trick the first system.
This kind of setup is known as a "generative adversarial network" (GAN), and the competition between the two systems results in increasingly realistic videos. When the researchers asked workers on Amazon's Mechanical Turk crowdsourcing platform to pick which videos were real, the workers picked the machine-generated videos over the genuine ones 20 percent of the time, the researchers said.
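For readers curious what that adversarial setup looks like in practice, the sketch below shows the basic two-network training loop in PyTorch. It is only an illustration of the competition described above, not the MIT system itself: the tiny networks, the layer sizes, and the random placeholder "video" data are all assumptions made for brevity.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two competing networks. The real system uses deep
# convolutional video networks; these small models only illustrate the loop.
latent_dim, video_dim = 16, 64  # illustrative sizes, not from the study

generator = nn.Sequential(        # turns random noise into a fake "video"
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, video_dim), nn.Tanh(),
)
discriminator = nn.Sequential(    # scores how "real" a video looks
    nn.Linear(video_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.rand(32, video_dim) * 2 - 1   # placeholder for real clips
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator update: label real clips 1 and generated clips 0.
    d_loss = (bce(discriminator(real), torch.ones(32, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator call its output real.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The key detail is that the discriminator learns from both real and generated samples, while the generator is scored only on how convincingly it fools the discriminator; this back-and-forth is what pushes the generated videos to become increasingly realistic.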