Computers are learning to size up neighborhoods using photos

We humans are normally good at making quick judgments about neighborhoods. We can figure out whether we're safe, or whether we're likely to find a certain store nearby. Computers haven't had such an easy time of it, but that's changing now that MIT researchers have created a deep learning algorithm that sizes up neighborhoods roughly as well as humans do. The code correlates what it sees in millions of Google Street View images with crime rates and points of interest; it can tell what a sketchy part of town looks like, or what you're likely to see near a McDonald's (taxis and police vans, apparently).
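
If you're wondering how that works under the hood, the general recipe is to take a neural network that has already learned from ordinary photos and fine-tune it to predict a neighborhood statistic, such as a crime-rate score, from a single street-level image. Here's a minimal sketch of that idea in PyTorch. To be clear, this isn't MIT's actual code, and the images and scores below are random placeholders standing in for real Street View photos and their labels.

    import torch
    import torch.nn as nn
    from torchvision import models

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Start from a network pretrained on everyday photos, then swap its
    # classifier head for a single regression output: the predicted score.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 1)
    model = model.to(device)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    # Placeholder batch: random tensors standing in for eight 224x224
    # street-level photos and their (made-up) crime-rate scores.
    images = torch.randn(8, 3, 224, 224, device=device)
    scores = torch.rand(8, 1, device=device)

    model.train()
    for step in range(5):  # a few illustrative training steps
        optimizer.zero_grad()
        loss = loss_fn(model(images), scores)
        loss.backward()
        optimizer.step()
        print(f"step {step}: loss {loss.item():.4f}")

Trained on enough real photos and labels, a model like this can then score an image it has never seen before, which is how a system can "size up" a neighborhood from a single frame.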

Once a computer has taught itself using the algorithm, it's surprisingly effective. While humans are still quicker at finding their way to a given location, machines are better at gauging how close they are from individual photos. Sadly, you won't see this technology in the real world any time soon, since it's just a proof of concept at this stage. However, it's already good enough that MIT's team believes it could help navigation apps steer you around crime-ridden areas, or give retailers a sense of where to set up shop. Eventually, you may not have to set foot in an unfamiliar neighborhood to get a feel for what it has to offer.