
About that iPhone touchscreen accuracy test

Recently, you may have seen a report from the automated testing company OptoFidelity regarding the accuracy of the touch panel on the iPhone 5c, the iPhone 5s and the Samsung Galaxy S3. The test was performed by having a robot finger make hundreds of taps across each phone's display. Each tap location was then compared against where the touch panel registered the tap. If the tap was registered within 1 mm of where the robot finger actually hit, it was shown on OptoFidelity's display as a green dot. You can see the results in the image above.
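To get a concrete sense of how simple the scoring is, here is a minimal sketch in Python of the pass/fail check the report describes: measure the distance between where the robot actually tapped and where the panel registered the touch, and count the tap as accurate if that distance is within 1 mm. The coordinates, function names and sample data are illustrative assumptions, not OptoFidelity's actual test code.

import math

PASS_THRESHOLD_MM = 1.0  # the report's cutoff: a tap "passes" if registered within 1 mm

def tap_error_mm(actual, registered):
    # Euclidean distance between the robot's tap and the reported touch,
    # both given as (x, y) tuples in millimeters
    dx = registered[0] - actual[0]
    dy = registered[1] - actual[1]
    return math.hypot(dx, dy)

def accuracy_rate(taps):
    # Fraction of (actual, registered) pairs within the threshold --
    # the share of "green dots" in OptoFidelity's visualization
    passes = sum(1 for actual, registered in taps
                 if tap_error_mm(actual, registered) <= PASS_THRESHOLD_MM)
    return passes / len(taps)

# Hypothetical sample: two accurate taps and one 2 mm miss
sample = [((10.0, 20.0), (10.3, 20.2)),
          ((50.0, 80.0), (50.1, 79.9)),
          ((5.0, 80.0), (7.0, 80.0))]
print(accuracy_rate(sample))  # 0.666... -> two of the three taps pass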

But that may not be the end of the story. Nick Arnott over at Neglected Potential has written a blog post questioning the roughly 75 percent inaccuracy rate OptoFidelity's testing showed for the iPhones. Someone reading the study would get the impression that the iPhone 5c and 5s have incredibly inaccurate screens, but something about the results didn't sit well with Arnott. So he took a closer look.

He noticed that the areas where the iPhone scored lowest were the left and right edges of the keyboard, areas OptoFidelity itself specifically calls out in its study. Looking more closely at exactly where the problems arose, Arnott noticed something that could easily explain the robot finger's odd results.

Looking at the displacement of taps as you move away from the green area, there's a definite pattern. The farther you move from the easily tappable area, the greater the "inaccuracy" of the tap. But the inaccuracy skews in a way that places the registered tap slightly closer to the starting position of your thumb (likely the most frequently used digit for tapping). As your thumb stretches out from your hand, which is probably positioned near the bottom of the phone, the portion of your thumb that actually comes into contact with the screen changes. Your perception of the screen also shifts slightly: the higher you reach on the screen, the less likely you are to be viewing it at exactly a 90 degree angle. These are factors the automated test does not account for. The robot doing the test views its tap target perpendicular to the screen, and it taps at a perpendicular angle every time. That isn't generally how people interact with their phones.
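One way to see Arnott's point is to imagine a panel that deliberately nudges each registered touch toward where a stretching thumb is coming from. To a robot tapping perpendicular to the glass, that nudge reads as error; to a human, it may read as the phone doing what they meant. The toy model below is purely illustrative and is not a description of Apple's actual touch processing; the thumb anchor point and the 5 percent shift are invented for the example.

import math

THUMB_ANCHOR_MM = (50.0, 100.0)  # hypothetical: roughly where a right thumb's base sits, near the lower-right corner
OFFSET_FRACTION = 0.05           # hypothetical: shift each touch 5% of the way toward that anchor

def offset_toward_thumb(tap):
    # Return a hypothetical "compensated" touch location, nudged toward the thumb anchor
    x, y = tap
    ax, ay = THUMB_ANCHOR_MM
    return (x + (ax - x) * OFFSET_FRACTION,
            y + (ay - y) * OFFSET_FRACTION)

# A robot tap near the top-left corner, far from the thumb anchor
robot_tap = (5.0, 10.0)
registered = offset_toward_thumb(robot_tap)
error_mm = math.hypot(registered[0] - robot_tap[0], registered[1] - robot_tap[1])
print(round(error_mm, 1))  # ~5.0 mm: a clear "miss" by the 1 mm standard,
                           # yet the shift points back toward where a stretching thumb tends to land short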

While OptoFidelity's test does raise some questions about the iPhone's touch panel, Arnott makes an important point: testers should always be asking questions. When OptoFidelity's researchers received the test results, did they consider how the methodology itself affected touch recognition on the panel? Did the test take into account the different sizes of the iOS devices and the Galaxy S3, and how each fits differently in users' hands?

Arnott is not saying that OptoFidelity's tests are necessarily wrong, though he takes issue with the results. He is simply reminding all of us that when we are testing a product, or even just reading a report, we should question the findings. That's how products get better, and how the questions a report raises get answered.