To capture an event, all you need is a suitable device: an ordinary smartphone, a camera, or any other gadget with a built-in camera module. However, due to technical limitations and a number of other reasons, it is not always possible to film an event from beginning to end. For cases where the most interesting moments happen off camera, a software solution from scientists at the Massachusetts Institute of Technology may come to the rescue.

Specialists at MIT's CSAIL laboratory have developed a self-training algorithm that not only uses computer vision to identify objects and their surroundings in a still image, but also generates a short video based on it. After analyzing the individual components of the image and the ways they might interact, the software predicts what will happen to the objects or people in the next frames.
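The core idea of predicting the next frame from the current one can be illustrated with a deliberately tiny sketch. This is not MIT's actual model (which uses deep neural networks trained on millions of videos); it is a minimal stand-in that fits a linear next-frame predictor to a synthetic drifting wave and then rolls it forward to hallucinate a short clip. All names and data here are invented for illustration.

```python
import numpy as np

# Toy stand-in for next-frame prediction (NOT the CSAIL model):
# learn a linear map W so that frame[t] @ W approximates frame[t+1],
# then apply W repeatedly to generate new frames from a single image.

T, N = 64, 32  # number of training frames, pixels per 1-D "frame"

# Synthetic training video: a sine wave drifting one pixel per frame.
x = np.arange(N)
video = np.stack([np.sin(2 * np.pi * (x - t) / N) for t in range(T)])

# Fit W by least squares on consecutive frame pairs.
X, Y = video[:-1], video[1:]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def generate(frame, steps):
    """Roll the learned one-step predictor forward to make a clip."""
    frames = []
    for _ in range(steps):
        frame = frame @ W
        frames.append(frame)
    return np.stack(frames)

# Start from the last observed frame and predict 10 future frames.
clip = generate(video[-1], steps=10)

# The first generated frame should continue the drift of the wave.
expected_next = np.sin(2 * np.pi * (x - T) / N)
err = np.abs(clip[0] - expected_next).max()
```

The real system replaces this linear map with a deep network that has seen millions of clips, which is what lets it handle scenes far more complex than a drifting sine wave.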

The system's "training course" was built on the neural networks used for its computer vision and involved a detailed study of 2 million assorted videos. At this stage the technology still needs substantial improvement: the image-to-video conversion is far from perfect and is constrained in several ways. Generated clips are limited to about 1.5 seconds, and while the end results look plausible, they are not especially realistic. In addition, the neural network often misjudges the scale of objects. Even so, the technology has managed to recreate complex scenes, such as waves washing a shore or people walking across grass.

In the future, the MIT scientists' algorithm could improve self-driving systems, namely their ability not only to assess the current road situation in real time, but also to anticipate the possible maneuvers of other road users.
