Crowdsourced knowledge bases like Wikipedia hold a huge amount of knowledge, but humans can only add to them so quickly. Wouldn't it be better if computers did the hard work? The University of Washington certainly believes so. Its LEVAN (Learn EVerything about ANything) program is building a visual encyclopedia by automatically searching the Google Books library for descriptive phrases and using them to find pictures that illustrate the associated concepts. Once LEVAN has seen enough examples, it can associate images with ideas simply by looking at pixel arrangements. Unlike earlier learning systems, such as Carnegie Mellon's NEIL, it's smart enough to tell the difference between two similar objects (say, a Trojan horse and a racing horse) while still grouping both under the broader category of "horse."
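To get a feel for the text-mining half of that pipeline, here's a minimal sketch of the idea in Python. This is not LEVAN's actual code: it stands in for the Google Books n-gram search with a plain string, and the function name `mine_subconcepts` is hypothetical. It simply pulls out "modifier + concept" bigrams (like "racing horse" or "Trojan horse"), which is the kind of descriptive language the real system would then use as image-search queries for training visual detectors.

```python
import re
from collections import Counter

def mine_subconcepts(concept, corpus, min_count=1):
    """Toy stand-in for LEVAN-style n-gram mining: find 'modifier + concept'
    bigrams in text (e.g. 'racing horse'), which become sub-concepts
    grouped under the broader category (here, 'horse')."""
    pattern = re.compile(r"\b(\w+)\s+" + re.escape(concept) + r"\b", re.IGNORECASE)
    counts = Counter(m.group(1).lower() for m in pattern.finditer(corpus))
    # Keep only bigrams seen often enough to look like real sub-concepts.
    return {f"{mod} {concept}": n for mod, n in counts.items() if n >= min_count}

# A tiny fake "corpus" in place of the Google Books library.
corpus = ("The racing horse won again. A Trojan horse stood at the gate. "
          "Another racing horse trailed behind the jumping horse.")
print(mine_subconcepts("horse", corpus))
# → {'racing horse': 2, 'trojan horse': 1, 'jumping horse': 1}
```

Each mined sub-concept would then get its own image classifier, letting the system distinguish a Trojan horse from a racing horse while still filing both under "horse."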