Researchers at MIT have developed a way to recover lost detail in blurred images and videos and reconstruct a clear picture from them.
The team built a “visual deprojection” model that uses a neural network to combine data from both the clear and blurred parts of a photo or video, learning the underlying patterns well enough to produce a sharper result. For example, if a video contains a blurry moving object, the visual deprojection model can generate a version of the video in which those blurry regions appear clearly.
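The core idea is to invert a projection that collapses one spatial dimension into a single line of intensities, relying on patterns learned from data to fill the lost dimension back in. The sketch below is a deliberately simplified illustration of that idea, not the authors' method: synthetic "images" each contain one bright vertical bar, the forward projection sums away the vertical axis, and a plain linear least-squares map (standing in for the paper's convolutional network) is fit to recover the full 2-D image from its 1-D projection. All shapes and data here are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, n = 8, 8, 200

# Toy dataset: each "image" contains one full-height bright bar
# at a random column.
images = np.zeros((n, H, W))
cols = rng.integers(0, W, size=n)
images[np.arange(n), :, cols] = 1.0

# Forward projection: collapse the vertical axis by summing rows,
# producing the 1-D "line" of intensities the model observes.
lines = images.sum(axis=1)            # shape (n, W)

# Stand-in deprojection model: a single linear map with a bias
# term, fit by least squares to map each 1-D line back to the
# flattened 2-D image it came from.
X = np.hstack([lines, np.ones((n, 1))])
weights, *_ = np.linalg.lstsq(X, images.reshape(n, -1), rcond=None)

recon = (X @ weights).reshape(n, H, W)
err = np.abs(recon - images).max()
print(f"max reconstruction error: {err:.2e}")
```

Because this toy projection happens to be exactly linearly invertible, the fitted map reconstructs the images almost perfectly; real scenes are far more ambiguous, which is why the actual model is a learned neural network rather than a linear solve.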
In experiments, the model produced clear video frames from blurry footage of people walking, reconstructing them from single one-dimensional lines of pixel data. It generated 24 frames of video capturing a person’s walking pattern, along with their body size and the positions of their legs.
The researchers hope the model could someday be used to create 3D body scans from 2D medical images, benefiting medical imaging in underdeveloped nations.
Guha Balakrishnan, the paper’s lead author, said:
“If we can convert X-rays to CT scans, that would be somewhat game-changing. You could just take an X-ray and push it through our algorithm and see all the lost information.”