
June 30th 2014 1:59 pm

Why do dedicated cameras take such horrible panoramas?

Kalalau Lookout: 9 images from my Nikon D300 stitched together in Photoshop back in 2007.

Before we had this newfangled technology built into our cameras and phones to automatically create panoramas, we had to stitch such photos together in apps like Photoshop. It used to be a tedious process, but it has gotten much easier.

Once the smartphone revolution happened and everyone started developing apps, a number of interesting companies were launched that tried to make it easier to create panoramas from your mobile device. One of the first ones I can remember was Cloudburst Research, who wrote the awesome Autostitch app for iOS way back in 2009. It worked similarly to how you'd put together a panorama in Photoshop -- snap or import a bunch of overlapping photos and it cuts out the most relevant pieces and stitches them together.

Farallon Islands off of San Francisco. I stitched this together using an iPhone 3GS and the Autostitch app in 2009.
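That overlap-and-stitch idea can be sketched in a few lines. To be clear, this is a toy illustration, not what Autostitch or Photoshop actually do (real stitchers match feature points and warp frames, they don't just slide them): given two horizontally overlapping strips, find the overlap width whose columns agree best, then paste.

```python
import numpy as np

def find_overlap(left, right, max_overlap):
    """Return the overlap width (in columns) where the right edge of
    `left` best matches the left edge of `right`, i.e. the width with
    the lowest mean squared difference between the two edges."""
    best_w, best_err = 1, float("inf")
    for w in range(1, max_overlap + 1):
        err = np.mean((left[:, -w:] - right[:, :w]) ** 2)
        if err < best_err:
            best_w, best_err = w, err
    return best_w

def stitch(left, right, max_overlap):
    """Join two overlapping strips, dropping the duplicated columns."""
    w = find_overlap(left, right, max_overlap)
    return np.concatenate([left, right[:, w:]], axis=1)

# two strips cut from one synthetic "scene", sharing 8 columns
rng = np.random.default_rng(0)
scene = rng.random((20, 30))
left, right = scene[:, :20], scene[:, 12:]
pano = stitch(left, right, max_overlap=15)  # recovers the full scene
```

Real frames differ in exposure, perspective, and noise, which is why production stitchers solve a much harder matching and blending problem than this.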

When Apple released iOS 6 in 2012, they included the ability to create panoramas on your phone in real time by panning your phone left or right. Other phones had the ability to do this before, but I'd argue that this is when the popularity of panoramic photos really took off, thanks to the quality and ease of creation.

Dodger Stadium panorama: Taken in 2013 with an iPhone 5 and iOS 6's built-in panorama app.

I've consistently been impressed with the quality of panoramas that are coming straight out of my phone with no additional post-processing needed.

San Francisco skyline: Taken with an iPhone 5 and iOS 7 on a moving ferry. The ferry's movement caused some issues with generating the panorama -- you can see this in the wavy, uneven lines on the Bay Bridge.

Anyway, this brings me all the way to cameras. The RX100 II is generally considered one of the best pocketable cameras you can buy today. Great image quality and super portable too. It includes a number of interesting features, among them the ability to generate panoramas on the camera itself.

This is why I'm genuinely surprised that the RX100's ability to generate panoramas is so poor. In fact, it's so bad that I refuse to use it, so I don't have any personal photos to share. This one below is from a user in the DP Review forums:

Taken with an RX100 and stitched together using the camera's built-in panorama feature. Notice the color banding and uneven lines on the horizon between each frame.

Anyway, what gives? I really love panoramas for their ability to capture an extended scene. But even a camera as awesome as the RX100, with a great lens, sensor, and outstanding image quality, has trouble generating these sorts of images. One obvious answer is that our phones have much more powerful processors, capable of real-time analysis and creation of these types of images. Of course, if you want a serious panorama, you'll probably set up a tripod, take multiple frames, and stitch them together in post-processing (and looking at my Hawaii image at the top of this post, it's obviously the highest quality panorama of any shown here).

But bottom line, if I want to take a quick panorama, it's easier and more reliable to just pull out the phone. It's really amazing how powerful these pocket sized computers are.


8 replies

Simple answer: Your phone has an accelerometer and probably even a gyroscope. It knows its exact orientation, as well as the motion of the device as you are taking a picture. By using this data, panorama software is able to do a much more precise job of combining images.

Even camera rigs costing thousands don't have this data. The individual images from such a camera are vastly superior to anything any phone (at any price) can produce, but when it comes to combining images together, you're left doing that in software, relying on the pixel data alone.
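As a back-of-the-envelope illustration of why orientation data helps (the numbers below are made up, not from any specific phone): with a pinhole camera model, a pure yaw rotation of Δθ shifts image content horizontally by roughly f·tan(Δθ) pixels, where f is the focal length in pixels. A gyroscope reports Δθ directly, so the stitcher only has to search a narrow window around the predicted shift instead of the whole frame.

```python
import math

def predicted_shift_px(focal_px, yaw_deg):
    # pinhole model: a pure yaw rotation of the camera moves image
    # content horizontally by about f * tan(yaw) pixels
    return focal_px * math.tan(math.radians(yaw_deg))

# hypothetical values: ~3000 px focal length, 5 degrees of pan per frame
shift = predicted_shift_px(3000, 5)  # roughly 262 pixels
```

Pixel matching then only has to refine that estimate, rather than search blindly, which is both faster and far less likely to lock onto a false match.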
4 like dislike

Actually, the RX100 can detect motion: it knows when it's being panned too fast, and it detects drops (and shuts the camera off mid-fall to protect the lens).
1 like dislike

While a gyroscope might be rare, I'd imagine that most cameras these days have accelerometers...

I'm also not convinced that having a gyroscope is what makes the difference. Photoshop and ICE don't need this information to make their perfect stitches.
0 like dislike

I really do think it comes down to the processing power of the camera. In fact, the camera has to be even more powerful, because most people are less patient with their cameras than with their phones. If you do a panorama on Android, it'll do a good job, but you have to wait a little bit for the scene to render. On your camera's slower processor, you'd probably have to sit there with your camera on, unable to do anything, while you wait for it to render.

I think on-camera panoramas are mostly features the camera companies include to tick a checkbox. I really don't think they expect you to actually use them. It's like digital zoom -- they include it because they know some people are going to want it. But they know that the people who care most about the results are going to process the image on a computer. Camera manufacturers don't, for the most part, have to deal with direct sharing of photos, after all. And the more expensive the camera, the more likely that person cares about the results enough to do the post-processing.

The last reason is that sometimes different companies just have better software. For example, IMO the best Microsoft software that doesn't get any attention is the Microsoft Image Composite Editor. I used this software to combine five photos I took of a sunset in Santorini. I then submitted it to a canvas printing company, and now we have a gorgeous 70" wide panorama on canvas over our bed. The results from that software are stunning.
3 like dislike

I think some of what you guys are referring to as banding is more an exposure issue than anything. I've noticed even on my cell phone that if it re-meters as you shift position and finds a difference in light, you get exactly what you're describing.
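A crude version of the fix is the kind of gain compensation stitching software applies (this is an illustrative sketch, not any particular app's algorithm): scale the incoming frame so its overlap region matches the brightness of the frame already in the panorama, hiding the exposure jump at the seam.

```python
import numpy as np

def match_exposure(ref_overlap, new_overlap, new_frame):
    """Scale `new_frame` so the overlap region's mean brightness
    matches the reference frame's, hiding an exposure jump."""
    gain = ref_overlap.mean() / new_overlap.mean()
    return np.clip(new_frame * gain, 0, 255)

# toy example: the new frame metered one stop darker than the reference
ref_overlap = np.full((4, 4), 120.0)
new_overlap = np.full((4, 4), 60.0)
new_frame = np.full((4, 10), 60.0)
corrected = match_exposure(ref_overlap, new_overlap, new_frame)
```

A single global gain won't fix a smooth gradient across a frame (like a sunset), which may be why those shots band the worst.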
0 like dislike

I actually find it faster to pull out my camera since I have to wait for the camera app on my phone to load up (the GS3's camera app isn't... great). But I have noticed the same issue you've pointed out on my RX100 occasionally. It tends to happen when there are weird light variations.

A white sky above US Cellular Field, no banding:

The greens of New Jersey, no obvious banding:

Though this shot of the Bay Bridge is mostly blue, I haven't noticed any banding:

Cellular Field at night, here you can see some banding above the display:

But this one of Coney Island is particularly bad, probably because of the sunset:

1 like dislike

Holy cow. Maybe I just really suck at taking panoramas with the RX100. These look great!
1 like dislike

Oh, there's a discard pile. Though most of my discard pile is panoramas that cut off too soon (so you get that annoying grey strip at the end) or ones that have people who were walking really fast so they look like a film strip in the middle of my otherwise gorgeous shot.
0 like dislike