The Web Gets Smarter

Last Wednesday, with relatively little fanfare, Google introduced a new technology called Google Knowledge Graph. Type in “François Hollande,” and you are offered a capsule history (with links) of his children, partner, birthday, education, and so forth. In the short term, Knowledge Graph will not make a big difference in your world: you might get much the same information by visiting Hollande’s Wikipedia page, and a lot of people might still prefer to ask their friends. But what’s under the hood represents a significant change in engineering for the world’s largest search-engine company. More than that, in a decade or two, scientists and journalists may well look back at this moment as the dividing line between machines that dredged massive amounts of data without any clue what that data meant, and machines that started to think, just a little bit, like people.

Since its beginning, Google has used brute force as its main strategy for organizing the Internet’s knowledge, and not without reason. Google has one of the world’s largest collections of computers, wired up in parallel and housing some of the world’s largest databases. Your search queries can be answered so quickly because they are outsourced to immense data farms, which draw upon enormous amounts of precompiled data, accumulated every second by millions of virtual Google “spiders” that crawl the Web. In many ways, Google’s operation has been reminiscent of I.B.M.’s Deep Blue chess-playing machine, which conquered all human challengers not by thinking the way humans do but by computing faster. Deep Blue was all power, no finesse.

Sometimes, of course, power has its advantages. Google’s immense computing resources have allowed it to revolutionize how classical problems in artificial intelligence are solved. Take spell-checking. One of the features that first made word processing really popular was the automatic spell-checker. Engineers at places like Microsoft catalogued the most common errors that people made, such as doubled letters and transpositions (“poeple”), and built upon these patterns to make educated guesses about users’ intentions. Google solves the spelling-correction problem entirely differently, and much more efficiently, by simply looking at a huge database of users correcting their own errors. What did users most often type next after failing to find what they wanted with the word “peopple”? Aha, “people.”

Google’s algorithm doesn’t know a thing about doubled letters, transpositions, or the psychology of how humans type or spell, only what people tend to type after they make an error. The lesson, it seemed, was that with a big enough database and fast enough computers, human problems could be solved without much insight into the particulars of the human mind.
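To make the idea concrete, here is a toy sketch in Python of the log-based approach. This is only an illustration, not Google’s actual pipeline; the pair-of-queries log format, the function name, and the sample data are assumptions made for the example:

```python
from collections import defaultdict, Counter

def build_correction_table(log):
    """From (failed_query, next_query) pairs, find the reformulation
    users most often typed after each unsuccessful query."""
    follow_ups = defaultdict(Counter)
    for failed, retry in log:
        follow_ups[failed][retry] += 1
    # Keep only the single most common follow-up for each failed query.
    return {q: c.most_common(1)[0][0] for q, c in follow_ups.items()}

# Toy "log": several users retype "people" after a misspelling.
log = [("peopple", "people"), ("peopple", "people"),
       ("peopple", "peoples"), ("poeple", "people")]

corrections = build_correction_table(log)
print(corrections["peopple"])  # -> "people" (two votes beat one)
```

Note what is absent: the table never inspects the letters at all. It encodes no knowledge of spelling, only of which follow-up query was most frequent.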

For the last decade, most work in artificial intelligence has been dominated by approaches similar to Google’s: bigger and faster machines with larger and larger databases. Alas, no matter how capacious your database is, the world is complicated, and data dredging alone is not enough. Deep Blue may have conquered the chess world, but humans can still trounce computers at the ancient game of Go, which has a larger board and vastly more possible moves. Even in Web search, Google’s bread and butter, brute force is often, and annoyingly, defeated by the problem of homonyms. The word “Boston,” for instance, can refer to a city in Massachusetts or to a band; “Paris” can refer to the city or to an exhibitionist socialite.

To deal with the “Paris” problem, Google Knowledge Graph revives an idea first developed in the nineteen-fifties and sixties, known as semantic networks, which was an early guess at how the human mind might encode information in the brain. In place of simple associations between words, these networks encode relationships between unique entities. Paris the place and Paris the person get different unique I.D.s, sort of like bar codes or Social Security numbers, and simple associations are replaced by (or supplemented by) annotated taxonomies that encode relationships between entities. So, “Paris1” (the city) is connected to the Eiffel Tower by a “contains” relationship, while “Paris2” (the person) is connected to various reality shows by a “cancelled” relationship. As all the places, persons, and relationships get connected to one another, these networks start to resemble vast spiderwebs. In essence, Google is now attempting to reshape the Internet and provide its spiders with a smarter Web to crawl.
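In code, such a network amounts to little more than uniquely identified nodes joined by labelled edges. The sketch below is a toy, not Google’s actual schema; the relation names come from the example above, and “The Simple Life” stands in, hypothetically, for the cancelled reality shows:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    uid: str    # the unique I.D. that keeps Paris1 and Paris2 apart
    name: str   # the ambiguous surface word
    kind: str   # category: place, person, landmark, ...
    relations: list = field(default_factory=list)  # (label, target uid)

graph = {
    "Paris1":  Entity("Paris1", "Paris", "place"),
    "Paris2":  Entity("Paris2", "Paris", "person"),
    "Eiffel1": Entity("Eiffel1", "Eiffel Tower", "landmark"),
    "Show1":   Entity("Show1", "The Simple Life", "tv-show"),
}
# Annotated, typed links rather than bare word associations:
graph["Paris1"].relations.append(("contains", "Eiffel1"))
graph["Paris2"].relations.append(("cancelled", "Show1"))

# The same surface word resolves to two unrelated nodes:
for entity in graph.values():
    if entity.name == "Paris":
        print(entity.uid, entity.kind, entity.relations)
```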

Although semantic networks were quite popular in the nineteen-seventies, by the mid-eighties research into them had tapered off, supplanted by “neural networks” that model the brain as a set of simple statistical accumulators. Neural nets apply a blanket learning rule that treats all associations as equal, differentiated only by how often they appear in the world; instances of Paris the place get muddled together with instances of Paris the person. This battle between structured knowledge and huge databases of statistics echoes one of the longest-running debates in psychology and philosophy, between “nativists” (like Plato, Kant, and, in recent times, Noam Chomsky and Steven Pinker), who believe the mind comes equipped with important basic knowledge, and “empiricists” (like John Locke and B. F. Skinner), who believed the mind starts as a blank slate, with virtually all knowledge acquired through association and experience.
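The blank-slate approach is easy to caricature in miniature. The counter below, fed two made-up sentences, has nowhere to record which Paris each sentence is about:

```python
from collections import Counter

# A purely associative learner: count word co-occurrences, with no
# entity I.D.s, so every "paris" lands in the same statistical bucket.
cooccurrences = Counter()
sentences = [
    "paris contains the eiffel tower",   # Paris the place
    "paris reality show cancelled",      # Paris the person
]
for sentence in sentences:
    for word in sentence.split():
        if word != "paris":
            cooccurrences[("paris", word)] += 1

# Both senses are muddled into a single profile for the word "paris":
print(cooccurrences.most_common())
```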

Google used to be essentially an empiricist machine, crafted with almost no intrinsic knowledge but endowed with an enormous capacity to learn associations between individual bits of information. (This is what has led to Google’s infinite appetite for information, which has in turn led to questions about its violations of privacy.) Now Google is becoming something else, a rapprochement between nativism and empiricism: a machine that combines the great statistical power empiricists have always yearned for with an enormous built-in database of structured categories of persons, places, and things, much as nativists might have liked. Google’s search engines still track the trillions of occurrences of the word “Paris” and what it is associated with in user queries and documents, but they now try to relate those words not just to each other but to categories, like people, places, and corporations.
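One hypothetical way to picture the combination, again only a sketch and not Google’s actual method: let raw context statistics vote among the structured entities that share a surface name. The context sets below are invented for the example:

```python
# Made-up context profiles for the two "Paris" entities in the toy graph.
ENTITIES = {
    "Paris1": {"kind": "place",  "context": {"eiffel", "tower", "france"}},
    "Paris2": {"kind": "person", "context": {"reality", "show", "cancelled"}},
}

def disambiguate(query: str) -> str:
    """Pick the entity whose known context best overlaps the query."""
    words = set(query.lower().split()) - {"paris"}
    return max(ENTITIES, key=lambda uid: len(words & ENTITIES[uid]["context"]))

print(disambiguate("Paris Eiffel Tower hours"))      # -> Paris1 (the city)
print(disambiguate("Paris reality show cancelled"))  # -> Paris2 (the person)
```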

There’s very good reason for Google to move in this direction. As the pioneering developmental psychologist Elizabeth Spelke (profiled recently in the New York Times) put it: “If children are endowed [innately] with abilities to perceive objects, persons, sets, and places, then they may use their perceptual experience to learn about the properties and behaviors of such entities… It is far from clear how children could learn anything about the entities in a domain, however, if they could not single out those entities in their surroundings.” The same goes for computers.

Machines have long since become faster and more reliable than we are at everything from arithmetic to encoding and retrieving vast storehouses of information, and I have long envied the way Google and its main competitor, Microsoft’s Bing, keep so much information at their virtual fingertips. But we humans have a few tricks left, and it’s refreshing to see that computer engineers still occasionally need to steal a page from the human mind.

Illustration by Arnold Roth.