Google Books corpus

13 May 2011 - 00:00

Brigham Young University (Provo, Utah) is pleased to announce a new corpus -- the Google Books (American English) corpus:

This corpus is based on the American English portion of the Google Books data. It contains 155 billion words (155,000,000,000) in more than 1.3 million books from the 1810s-2000s (including 62 billion words from just 1980-2009).

The corpus has most of the functionality of our other corpora (e.g. COCA, COHA, and our interface to the BNC), including: searching by part of speech, wildcards, and lemma (and thus advanced syntactic searches), synonyms, collocate searches, frequency by decade (tables listing each individual string, or charts for total frequency), comparisons of two historical periods (e.g. collocates of "women" or "music" in the 1800s vs. the 1900s), and more.
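To make "frequency by decade" concrete: because the decades differ greatly in size (62 billion of the 155 billion words come from 1980-2009 alone), raw hit counts must be normalized before decades can be compared. The following sketch shows that normalization with entirely invented counts and decade sizes; it is an illustration of the general idea, not the corpus's actual data or interface.

```python
# Hypothetical illustration of per-decade frequency normalization.
# All numbers below are made up for the example; they are not real
# counts from the Google Books corpus.

# Raw hits for some word, per decade (invented)
hits = {"1900s": 1200, "1950s": 4800, "2000s": 9000}

# Total words in the corpus for each decade (invented)
decade_size = {"1900s": 4_000_000_000,
               "1950s": 6_000_000_000,
               "2000s": 9_000_000_000}

def per_million(count, size):
    """Normalized frequency: occurrences per million words."""
    return count / size * 1_000_000

rates = {decade: per_million(hits[decade], decade_size[decade])
         for decade in hits}

for decade, rate in rates.items():
    print(f"{decade}: {rate:.2f} per million words")
```

Comparing the normalized rates (rather than raw hits) is what lets charts of total frequency, and comparisons of two historical periods, be meaningful across decades of very different sizes.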

This American English corpus is just one of seven Google Books-based corpora that we hope to create in the next year or two (contingent on funding, which we are applying for in June 2011). If funded, the other corpora will include British English, English from the 1500s-1700s, and corpora of Spanish, French, and German. Each of these corpora will be based on at least 50 billion words of data, and they should be a nice addition to existing resources.

The Google Books (American English) corpus is freely available online.


Mark Davies
Professor of (Corpus) Linguistics
Brigham Young University
(phone) 801-422-9168 / (fax) 801-422-0906