Krippendorff’s Work Recognized by the Association for Education in Journalism and Mass Communication

An article by Professor Klaus Krippendorff, “Systematic and Random Disagreement and the Reliability of Nominal Data,” first published in Communication Methods and Measures (Vol. 2, Issue 4, December 2008), has been recognized as one of the top three papers published in that journal to date.

This honorable mention was bestowed upon Prof. Krippendorff, the Gregory Bateson Professor for Cybernetics, Language, and Culture, by the Theory & Methodology division of the Association for Education in Journalism and Mass Communication (AEJMC). “Starting this year the division and the journal [honor] the best articles of the previous year […] This year the committee considered all articles since the initial edition of the journal and not just the ones published in 2010. Having read the paper we must agree that it is indeed terrific and contributes to the methodological advancement of the field,” said Andrew Hayes, Editor-in-Chief of the journal and a Professor in the School of Communication at The Ohio State University; and Hernando Rojas, division head and a Professor in the School of Journalism and Mass Communication at the University of Wisconsin-Madison. The award certificate will be presented at the division's business meeting in St. Louis later this month.

More about the paper:

Abstract: Reliability is an important bottleneck for content analysis and similar methods of generating analyzable data. This is because the analysis of complex qualitative phenomena such as texts, social interactions, and media images easily escapes physical measurement and calls for human coders to describe what they read or observe. Owing to coders' inescapable individual differences in background, the data they generate for subsequent analysis are prone to errors not typically found in mechanical measuring devices. However, most agreement measures designed to indicate whether data are sufficiently reliable to warrant subsequent analysis do not differentiate among the kinds of disagreement that make data unreliable. This paper distinguishes two kinds of disagreement, systematic disagreement and random disagreement, and suggests measures of them in conjunction with the agreement coefficient α (alpha) (Krippendorff, 2004a, pp. 211–256). These measures, previously proposed for interval data (Krippendorff, 1970), are here developed for nominal data. Their importance lies in their ability not only to aid the development of reliable coding instructions but also to warn researchers about two kinds of errors they face when using imperfect data.
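
For readers unfamiliar with the coefficient the abstract refers to, the Python sketch below illustrates the standard computation of Krippendorff's α for nominal data: one minus the ratio of observed to expected disagreement, both read off a coincidence matrix. It is a minimal illustration of the coefficient itself under those standard definitions, not of the paper's systematic/random decomposition, and the function name and toy data are illustrative.

    from collections import Counter
    from itertools import permutations


    def nominal_alpha(units):
        """Krippendorff's alpha for nominal data.

        `units` is a list of units of analysis; each unit is the list of
        category labels assigned to it by the coders (missing values are
        simply left out of the list).
        """
        # Coincidence matrix: within each pairable unit (2+ values), every
        # ordered pair of values from different coders adds 1 / (m_u - 1).
        coincidences = Counter()
        for values in units:
            m = len(values)
            if m < 2:
                continue  # a unit coded only once contributes no pairs
            for a, b in permutations(values, 2):
                coincidences[(a, b)] += 1.0 / (m - 1)

        # Marginal totals n_c and grand total n of pairable values.
        totals = Counter()
        for (a, _), w in coincidences.items():
            totals[a] += w
        n = sum(totals.values())

        # Nominal difference function: 1 whenever the two categories differ,
        # so only off-diagonal cells enter the disagreement sums.
        observed = sum(w for (a, b), w in coincidences.items() if a != b)
        expected = sum(totals[a] * totals[b]
                       for a in totals for b in totals if a != b) / (n - 1)
        if expected == 0:
            raise ValueError("alpha is undefined when only one category occurs")
        return 1.0 - observed / expected


    # Hypothetical example: four units, each coded by three coders.
    ratings = [["a", "a", "a"], ["a", "b", "a"], ["b", "b", "b"], ["a", "a", "b"]]
    print(round(nominal_alpha(ratings), 3))  # ≈ 0.371 for this toy data

The paper's contribution, per the abstract, is to separate the total disagreement that α registers into systematic and random components for nominal data; that decomposition is not reproduced in this sketch.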