Abstract
Language corpora are increasingly based on data from social and educational online platforms, and the size of these new corpora allows researchers to analyze language use in ways that were not previously possible. However, these platforms generally do not collect data with linguistic research in mind, so their data is often “messy” or “dirty” in various ways. Researchers must therefore develop new approaches for organizing and cleaning this type of data. Because these datasets are so large, such approaches should be scalable, relying primarily on quantitative and NLP-based techniques. Here, I present a case study based on working with a large-scale English learner database, the EFCAMDAT. I provide insights into the kinds of challenges that researchers may encounter when working with such corpora and the kinds of solutions that they may use.