In the past three years, natural language processing research has been transformed by a new family of techniques: contextual embedding models such as ELMo, GPT-2, and especially BERT.

These state-of-the-art methods can model human language significantly better than previous methods. For example, BERT can disambiguate polysemous words — distinguishing the "bank" in "bank account" from the "bank" in "river bank" — with unprecedented accuracy.
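To make this concrete, here is a minimal sketch of how one might observe this disambiguation, assuming the Hugging Face `transformers` library and the standard `bert-base-uncased` model (an illustrative choice, not this project's own tutorial code). Because BERT produces a different vector for "bank" depending on its sentence, the two vectors below are not identical:

```python
# Sketch: compare BERT's contextual embeddings for "bank" in two senses.
# Assumes the Hugging Face `transformers` library and PyTorch are installed.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def bank_embedding(sentence: str) -> torch.Tensor:
    """Return the contextual embedding of the token 'bank' in `sentence`."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Locate the position of the "bank" token in the tokenized input.
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    idx = tokens.index("bank")
    return outputs.last_hidden_state[0, idx]

money = bank_embedding("She deposited cash at the bank.")
river = bank_embedding("They picnicked on the river bank.")

# The two "bank" vectors differ because their contexts differ.
similarity = torch.cosine_similarity(money, river, dim=0).item()
print(f"cosine similarity between the two 'bank' vectors: {similarity:.2f}")
```

A static word embedding (e.g., word2vec) would assign both occurrences of "bank" the identical vector; here the cosine similarity falls below 1.0, reflecting the contextual distinction.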

What impact might BERT-like models have in the field of the digital humanities? What impact might digital humanists have on our understanding and application of BERT-like methods?

The BERT for Humanists project is developing resources to help answer these questions and to enable DH scholars to explore how BERT-like models can be used in their research and teaching. Here you will find an annotated bibliography of research papers and tools, a glossary of relevant terms, code tutorials, and information about our virtual workshop in June 2021.

The BERT for Humanists project is generously supported by the National Endowment for the Humanities.