Combining multiple photographs to create a new image isn't uncommon, but researchers at the University of California, Santa Barbara (UCSB), in partnership with NVIDIA, have come up with a pretty wild new technique for creating entirely new compositions. It's called "computational zoom," and it promises to let photographers adjust the focal length (which basically amounts to the magnification of an image when you shoot) after the fact. In essence, the UCSB researchers have found a way to merge telephoto and wide-angle shots into entirely new compositions.
The quick video below gives an example of the kinds of results you can get with this technique:
As the research paper notes, the woman in the images didn't move at all throughout the photo shoot, yet software was able to transform the background from a wide-angle view into a close-up after the fact.
Of course, those images have to come from somewhere. To get computational zoom to work, you'll need a "stack" of images captured at a fixed focal length from different distances. In layman's terms, that means you'll need to use your feet and move through the scene; you can't cheat by using a zoom lens. So no, computational zoom can't magically create scenes without having the image data to start with -- but once it does have those images, it can do some pretty creative things.
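Why does physically moving the camera matter? Basic pinhole geometry gives the intuition: when you step toward a subject at a fixed focal length, the subject grows in the frame much faster than the distant background does, so a stack of shots taken at different distances effectively contains a range of background "zoom levels." Here's a minimal sketch of that effect (this is not the researchers' code; the focal length and distances are made-up values for illustration):

```python
F = 35.0  # fixed focal length in mm (an assumed value for illustration)

def projected_size(obj_height_m, obj_dist_m, cam_pos_m, f_mm=F):
    """Apparent size on the sensor (mm) of an object under a pinhole model."""
    return f_mm * obj_height_m / (obj_dist_m - cam_pos_m)

# A 1.7 m subject standing 2 m away, in front of a 3 m wall 12 m away.
subject = (1.7, 2.0)
wall = (3.0, 12.0)

for cam in (0.0, 1.0):  # the camera steps 1 m toward the scene
    s = projected_size(*subject, cam)
    w = projected_size(*wall, cam)
    print(f"camera at {cam} m: subject {s:.1f} mm, "
          f"wall {w:.1f} mm, wall/subject ratio {w / s:.2f}")
```

Stepping closer shrinks the wall-to-subject ratio, which is exactly the relative-magnification information a zoom lens alone can't provide.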
Once those photos are shot, they're fed into the computational zoom system and run through its algorithm, which figures out each camera's position and orientation from the rest of the stack. From there, it can build out the entire scene in 3D from a variety of viewpoints, letting the photographer create a final image that combines multiple perspectives. There's no word on when this technology might be available for photographers to try themselves, but it's easy to imagine professionals using it to give themselves a lot more flexibility in adjusting image composition after the fact.
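To get a rough feel for the compositing step, here's a heavily simplified sketch (again, not the actual system): assuming the 3D reconstruction has produced a per-pixel depth map for aligned frames, a final image could take foreground pixels from one shot and background pixels from another. The toy 4x4 "images," depth values, and the 5 m cutoff below are all invented for illustration.

```python
import numpy as np

# Toy aligned frames: near_shot renders the subject best,
# far_shot renders the background at the desired magnification.
near_shot = np.full((4, 4), 10, dtype=np.uint8)
far_shot = np.full((4, 4), 200, dtype=np.uint8)

# Hypothetical recovered per-pixel depth in metres: left half is the
# subject (2 m), right half is the background (9 m).
depth = np.array([[2, 2, 9, 9]] * 4, dtype=float)

# Pixels beyond 5 m are treated as background and pulled from far_shot.
composite = np.where(depth > 5.0, far_shot, near_shot)
print(composite)
```

A real multi-perspective composite would blend smoothly across depth rather than hard-switching at a threshold, but the per-pixel selection idea is the same.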