Twitter uses smart cropping to make image previews more interesting

The site has trained a neural network to identify the best bits of a picture.

Twitter's recent character-limit extension means we're spending more time reading tweets, but the site now wants us to spend less time looking at pictures -- or, more specifically, less time looking for the important bit of a picture. Thanks to Twitter's use of neural networks, picture previews will now be automatically cropped to their most interesting part.

In a blog post yesterday, the engineers behind the feature explained that the tool was developed from basic facial recognition software. But while that was great for pictures of people, it didn't help with images of objects, landscapes or animals. The team then turned to research into eye-tracking, which can be used to train neural networks and other algorithms to predict what people want to look at.
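Eye-tracking datasets typically record the points where viewers fixated; to use them as training targets for a saliency network, a common trick is to convert those fixations into a smooth heatmap by dropping a Gaussian bump on each one. Here's a minimal sketch of that preprocessing step -- the function name and `sigma` value are illustrative, not Twitter's actual pipeline:

```python
import numpy as np

def fixations_to_heatmap(fixations, height, width, sigma=10.0):
    """Turn (row, col) eye-tracking fixations into a normalized
    saliency heatmap by summing one Gaussian bump per fixation."""
    rows = np.arange(height)[:, None]   # column vector of row indices
    cols = np.arange(width)[None, :]    # row vector of column indices
    heatmap = np.zeros((height, width))
    for r, c in fixations:
        heatmap += np.exp(-((rows - r) ** 2 + (cols - c) ** 2)
                          / (2 * sigma ** 2))
    if heatmap.max() > 0:
        heatmap /= heatmap.max()        # scale to [0, 1] as a training target
    return heatmap

# Two viewers both fixated near the top-left subject of the image,
# so the heatmap's peak lands between their fixation points.
hm = fixations_to_heatmap([(20, 30), (25, 35)], height=100, width=160)
peak = np.unravel_index(hm.argmax(), hm.shape)
```

A network trained to regress maps like `hm` from raw pixels is what "predicting what people want to look at" amounts to in practice.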

Once a neural network was able to pick out these salient areas, the team needed to find a way to make it work in real-time on Twitter. Picture cropping on the site is fairly broad -- only a third or so of an image needs to be previewed -- so the team used a process called "knowledge distillation" to simplify the model, which made the neural network 10 times faster than its initial design. Saliency detection and optimized cropping now happen instantaneously.
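The cropping step itself can be pictured as sliding a preview-sized window over the predicted saliency map and keeping the placement that captures the most saliency. A toy version of that search, using a summed-area table so each candidate window is scored in constant time (the details are an illustration, not Twitter's code):

```python
import numpy as np

def best_crop(saliency, crop_h, crop_w):
    """Return (top, left) of the crop_h x crop_w window with the
    highest total saliency, via a summed-area (integral) image."""
    H, W = saliency.shape
    # Pad the integral image with a zero row/column for clean indexing.
    integral = np.zeros((H + 1, W + 1))
    integral[1:, 1:] = saliency.cumsum(axis=0).cumsum(axis=1)
    best_sum, best_pos = -1.0, (0, 0)
    for top in range(H - crop_h + 1):
        for left in range(W - crop_w + 1):
            bottom, right = top + crop_h, left + crop_w
            total = (integral[bottom, right] - integral[top, right]
                     - integral[bottom, left] + integral[top, left])
            if total > best_sum:
                best_sum, best_pos = total, (top, left)
    return best_pos

# A saliency map with one bright region (say, a face) near the top right:
sal = np.zeros((60, 90))
sal[10:20, 70:85] = 1.0
top, left = best_crop(sal, crop_h=20, crop_w=30)
```

The chosen window lands so that the bright region falls inside the preview. Distillation fits into the picture one step earlier: a small, fast "student" network is trained to reproduce the saliency maps of the large, accurate original, so this search can run on a much cheaper model at upload time.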

The feature is being rolled out to the iOS and Android apps and the desktop site now, so next time you upload a picture of Mittens you can be sure your followers will see his little furry face in all its adorable glory, whether they want to or not.