What if you could capture images not of what is happening, but of what will happen? Researchers at MIT's Computer Science and Artificial Intelligence Laboratory in Boston have developed a learning algorithm capable of generating video that predicts the future. In practice, the technology developed at MIT can "look ahead" 1.5 seconds from a static image or a video clip, all thanks to a prediction of what will happen in the near future.

After extensive training (over 2 million videos fed into the system), the new AI can now create "predictive" video by pitting two neural networks against one another. The first network generates the actual sequence, determining which elements of the scene will move through the frames and what kind of motion they will perform. The other network acts as a "quality control", trying to determine whether the video produced by the first AI is real or fabricated: when this second AI is fooled, the experiment is considered a success.
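The article does not detail the implementation, but the two-network setup it describes is what the machine-learning literature calls a generative adversarial network. Below is a minimal sketch of that idea in PyTorch; the toy fully-connected networks, shapes, and hyperparameters are illustrative assumptions, not MIT's actual model.

```python
# Minimal sketch of the adversarial setup described above (PyTorch).
# All architectures and sizes here are toy assumptions for illustration.
import torch
import torch.nn as nn

FRAMES, H, W = 8, 32, 32  # a tiny stand-in for a ~1.5 second clip

class Generator(nn.Module):
    """Maps a single input frame to a short clip of predicted future frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * H * W, 512), nn.ReLU(),
            nn.Linear(512, FRAMES * 3 * H * W), nn.Tanh(),
        )
    def forward(self, frame):
        return self.net(frame).view(-1, FRAMES, 3, H, W)

class Discriminator(nn.Module):
    """The 'quality control' network: scores a clip as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(FRAMES * 3 * H * W, 512), nn.ReLU(),
            nn.Linear(512, 1),
        )
    def forward(self, clip):
        return self.net(clip)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(first_frame, real_clip):
    """One adversarial round: D learns to tell real clips from generated
    ones, while G learns to produce clips that fool D."""
    fake_clip = G(first_frame)
    batch = real_clip.size(0)

    # Discriminator step: label real clips 1, generated clips 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real_clip), torch.ones(batch, 1)) +
              bce(D(fake_clip.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: "success" means the discriminator is fooled,
    # i.e. it scores the generated clip as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake_clip), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random tensors standing in for video data.
frame = torch.randn(4, 3, H, W)
clip = torch.randn(4, FRAMES, 3, H, W)
print(train_step(frame, clip))
```

The key design point mirrors the article: the generator's training signal comes entirely from whether the quality-control network can distinguish its output from real footage, so the two networks improve by competing.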

Of course, the technology still operates within strict limits. First of all, the system cannot generate video that extends more than 1.5 seconds into the future, and the results look realistic only up to a point. However, the researchers at the Massachusetts Institute of Technology seem satisfied with its ability to render low-complexity future scenes, such as waves rolling onto the shore or a few people walking slowly across a lawn. One thing is certain: if the AI designed in Boston were one day to significantly extend the reach of its predictive video, it could have extraordinarily useful applications. Think, for example, of an automotive infotainment system capable of predicting, on video, the movement of pedestrians and other vehicles.
