Abstract

Content analysis is a systematic research technique that provides a method for the qualitative and quantitative analysis of a corpus of information, generally text. This section introduces content analysis, describes applications of the technique and the types of content measured, and discusses sampling considerations. The reliability and validity of a study and its results are discussed, particularly as they apply to human coding and computer-aided analysis. The similarities and differences between quantitative and qualitative content analysis are then explored. Finally, the section concludes with a methodological assessment of two peer-reviewed articles that used the content analysis method to answer specific research questions.

Citation

Ward, J.H. (2012). Managing Data: Content Analysis Methodology. Unpublished manuscript, University of North Carolina at Chapel Hill. (pdf)

Creative Commons License
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.

 

Table of Contents

Abstract

Introduction

Applications of Content Analysis

Content Types—Manifest Versus Latent

Population, Sampling and the Unit of Analysis

Reliability and Validity

Human Versus Computer Coding—Reliability and Validity Examined

Quantitative Versus Qualitative Content Analysis

The Steps: Quantitative Data Analysis
The Steps: Qualitative Data Analysis

Examples

Example 1: “Cataloging Professionals in the Digital Environment: A Content Analysis of Job Descriptions”
Example 2: “Research Anxiety and Students’ Perceptions of Research: An Experiment. Part II. Content Analysis of Their Writings on Two Experiences”

Conclusion

References

 

Introduction

Content analysis is a research technique that involves the systematic analysis of text, including images and symbolic matter, in order to make replicable and valid inferences from the material examined (Krippendorff, 2004; Weber, 1990). The method may be used in qualitative, quantitative, or mixed-methods studies with a multitude of research objectives and questions. It “is the study of recorded human communications” (Babbie, 2001), carried out through the “systematic, objective, quantitative analysis of message characteristics” (Neuendorf, 2002). The flexibility and objectives of this process make it particularly suitable for Information Science research, given that the domain is the “study of gathering, organizing, storing, retrieving, and dissemination of information” (Bates, 1999).

A researcher applying content analysis methods would be interested in the “aboutness” of the content, more so than the content itself. For example, how often is a particular word used or not used? What can one infer from the text that is not directly stated? What themes or trends do the data indicate? How does the sample population feel about X, Y, or Z based on an analysis of the text? Thus, an Information Science researcher may utilize content analysis to answer questions about the underlying structure, form, and organization of the information contained in survey responses, books, transcribed interviews, journal articles, newspapers, web content, recorded conversations, etc.

While it is primarily a product of the 20th Century, content analysis has long historical roots. Precursors range from the Catholic Church’s analysis of texts in the 1600s to monitor and enforce orthodoxy, to the dissection of hymns in 18th Century Sweden, to the statistical evaluation of news and novels in the late 1800s and early 1900s.

The rise of mass communication during the 1920s in the form of radio and, later, in the 1950s in the form of television, combined with the 1929 economic crash, the Depression, World War II, and the start of the Cold War, created conditions ripe for the evolution of content analysis from a journalism-driven quantitative analysis into an established and codified research method with both qualitative and quantitative variants (Krippendorff, 2004). The public and researchers wanted answers to questions ranging from the buying trends of a particular demographic to the analysis of Soviet propaganda. Berelson (1952) provided the first consolidated text on content analysis, and, as a result, its use spread beyond newspapers, espionage, and sociology to disciplines and fields as diverse as psychology, anthropology, and history.

The development of computers in the mid-20th century and the rise of computer-aided text analysis (CATA) further integrated content analysis into mainstream human communications research. Over the past half-century, researchers have repeatedly demonstrated that computers using a variety of software may be used to reliably process large tracts of text much faster than humans. Computer software is available to support both quantitative (deductive) and qualitative (inductive) content analysis.

Computer-aided text analysis works by providing a standard dictionary against which the software processes the text. Alternately, a researcher may create a custom dictionary based on variables relevant to the study (Neuendorf, 2002). The computer may perform a quantitative analysis of word counts, for example, or a more nuanced “analysis” of textual patterns (Evans, 1996). One example of pattern analysis is predicting stock market fluctuations by analyzing Twitter posts (Bollen, Mao & Zeng, 2010). However, in spite of more than 50 years of computer-aided text analysis, comparisons of human and computer coding of the same text have produced markedly different findings (Spurgin & Wildemuth, 2009). While some studies have concluded that computers and humans may code and analyze text equally badly or well (Nacos, et al., 1991), at this point in time computers are viewed as aids to the human process of coding and analysis, not a substitute (Krippendorff, 2004; Spurgin & Wildemuth, 2009).
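To make the dictionary-based approach concrete, the following minimal Python sketch counts how often terms from a custom dictionary appear in a text. The two categories, their terms, and the sample posting are hypothetical and are not drawn from any of the cited studies.

```python
from collections import Counter
import re

# Hypothetical custom dictionary mapping study variables to indicator terms.
CUSTOM_DICTIONARY = {
    "technology_skills": {"metadata", "xml", "database", "digital"},
    "traditional_skills": {"cataloging", "marc", "aacr2", "classification"},
}

def code_text(text, dictionary):
    """Count how often terms from each dictionary category appear in the text."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    counts = Counter()
    for category, terms in dictionary.items():
        counts[category] = sum(1 for token in tokens if token in terms)
    return counts

# Usage: code a single (invented) job posting against the dictionary.
posting = "Experience with MARC and XML metadata required; digital cataloging preferred."
print(code_text(posting, CUSTOM_DICTIONARY))
# Counter({'technology_skills': 3, 'traditional_skills': 2})
```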

Whether or not a researcher or research team chooses to use computer software to aid in the analysis of text, they must follow the scientific method and be systematic in their approach. The quantitative approach to content analysis relies on a deductive method in which a hypothesis is formed and valid, replicable inferences may be made from the text (White & Marsh, 2006; Krippendorff, 2004). The investigator chooses the data via random or other systematic sampling, and all data are gathered prior to coding. The researcher develops the coding scheme a priori and may re-use existing coding schemes. The coding objective is to test for reliability and validity using statistical analysis (White & Marsh, 2006).

A researcher who applies the qualitative content analysis method will use an inductive, grounded theory approach in which the research questions guide the iterative data gathering and analysis. The investigator uses purposive sampling and may continue to gather data after coding has begun. As themes arise in the course of coding and analyzing the data, the researcher determines the important patterns and concepts and may add coding categories as needed. This is a subjective method that still requires the systematic application of techniques to ensure the credibility, transferability, dependability, and confirmability of the eventual results (Lincoln & Guba, 1985; White & Marsh, 2006). Thus, the results of a qualitative content analysis are subjective and descriptive, but they are systematically grounded in the themes and concepts that emerge from the data. Weber (1990) writes that the best content analyses use both quantitative and qualitative operations, while Krippendorff (2004) states that both methods are indispensable to the analysis of texts.

Applications of Content Analysis

A common application of content analysis in ILS research is the study of position announcements. White (1999) analyzed electronic resource position announcements posted between 1990 and 1998 to determine whether or not position requirements had changed with the rise of the World Wide Web in the mid-1990s. He quantitatively examined the words that appeared in the postings to produce tables of salaries offered, position titles, job responsibilities, required skills and qualifications, and educational requirements. The results indicated that technology-related skills were increasingly important, and that salaries had increased above inflation and were higher than average. Over the long term, this type of study may inform LIS curricula, as well as provide information to practitioners on the skills they need to develop and/or maintain in order to remain relevant.

A similar LIS study by Park, Lu, & Marion (2009) ten years later examined cataloging professionals’ job descriptions to re-assess the current skill set requirements. The researchers applied the quantitative content analysis method, but added a layer of statistical analysis to check the results. That is, in addition to the straight “count” of terms, Park, Lu & Marion (2009) converted the category counts to co-occurrence similarity values to “compensate for large differences in counts for commonly occurring terms”. Similar to White (1999), the results of this study indicate that technological advances in the 2000s have influenced job responsibilities, position titles, job descriptions, skills, and the qualifications required for a cataloging position.

Researchers may also use content analysis to gauge users’ perceptions of a phenomenon of interest or to predict how those sentiments may drive changes in other indicators. Kracker and Wang (2002) qualitatively compared participants’ recollections of a past research experience with their perceptions of a current research project. The investigators examined students’ emotional states, perceptions, and like or dislike of the various research stages, and cross-referenced feelings and thoughts against demographic factors. The results confirmed Kuhlthau’s Information Search Process (ISP) model; the study is discussed in more depth later in this paper.

One application of sentiment analysis is to examine Twitter posts to determine whether the emotions expressed predict the direction of the Dow Jones Industrial Average (DJIA) (Bollen, Mao, & Zeng, 2010). The researchers in this study used two software programs, plus a Granger causality analysis and a Self-Organizing Fuzzy Neural Network, to determine the collective mood of Twitter users. They then compared the results to the up and down movement of the DJIA. The data indicate that the collective mood as determined via Twitter can predict the direction of the stock market. The team cross-validated the results by examining users’ moods on Twitter prior to the 2008 Presidential Election and Thanksgiving 2008, and again found that communal Twitter sentiment tracked those events. The results of the content analysis indicate that public mood may be correlated with, and predictive of, economic events.

Content Types—Manifest Versus Latent

Initially, content analysis operations focused on manifest content, that is, content that can be described in an objective, systematic, and quantitative manner (Berelson, 1952). Researchers focused on those facets of the text that were present, easily observable, and countable. For example, White’s (1999) study of electronic resource position announcements considered primarily manifest content. Either a word or phrase appeared and was counted, or it did not and, therefore, could not be counted.

As the method evolved, content analysis researchers began to examine the latent meanings held within the text, not just the manifest content. For example, during World War II, Allied intelligence agents were able to predict Axis military campaigns by examining the underlying meanings of manifest communications that Axis governments designed to build popular support for a forthcoming political or military campaign (Krippendorff, 2004). The agents determined that these campaigns were impending by reading between the lines of seemingly innocuous news stories and announcements.

Two modern applications of latent content analysis are the analysis of sentiment, mentioned previously in Kracker and Wang’s (2002) analysis of students’ perceptions of research, and Bollen, Mao, and Zeng’s (2010) analysis of Twitter posts to predict the DJIA. In both studies, manifest content was examined to determine latent content. Strictly speaking, content analysis should only consider manifest content (Berelson, 1952), but leading methodologists such as Neuendorf (2002) and Krippendorff (2004) agree that latent analysis of manifest content can produce results that are both reliable and valid. However, the researcher who examines latent content must pay strict attention to the issues of reliability and validity to ensure a solid study design (Spurgin & Wildemuth, 2009).

Population, Sampling and the Unit of Analysis

When a researcher is designing a content analysis study, she must first determine the sample population from which she will draw her data. Then, she must determine the unit to be examined. The unit of analysis, sometimes referred to as the recording or coding unit, is “distinguished for separate description, transcription, recording, or coding” (Krippendorff, 2004) so that the population may be identified, the variables measured, or the analysis reported. The unit itself may be physical, temporal, or conceptual (Spurgin & Wildemuth, 2009). An example of a physical unit of analysis is a word, sentence, or paragraph. If the unit of analysis is temporal, then an investigator counts some amount of time (e.g., a minute or an hour) of an audio or video recording. If the unit is conceptual, the investigator examines every instance of an argument or statement.
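As a simple illustration of physical recording units, the sketch below splits raw text into words, sentences, or paragraphs. The splitting rules are deliberately naive assumptions for the example; a real study would rely on a tested segmentation procedure.

```python
import re

def unitize(text, unit="paragraph"):
    """Split raw text into physical recording units: words, sentences, or paragraphs."""
    if unit == "word":
        return re.findall(r"\S+", text)
    if unit == "sentence":
        # Naive splitter: break after sentence-ending punctuation followed by whitespace.
        return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if unit == "paragraph":
        return [p.strip() for p in text.split("\n\n") if p.strip()]
    raise ValueError(f"unknown unit: {unit}")

article = "First paragraph. It has two sentences.\n\nSecond paragraph."
print(len(unitize(article, "paragraph")))  # 2
print(len(unitize(article, "sentence")))   # 3
```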

For example, when Nacos, et al. (1991) chose to compare human versus computer coding of content, they chose a sample that consisted of articles about the invasion of Grenada from The New York Times and The Washington Post. The articles ranged in date from January 1, 1983, to November 25, 1983. The team examined only articles in the first sections of the newspapers, and excluded Op-Ed pieces. Within each article, the unit of analysis they chose to examine was the paragraph. Park, Lu, & Marion (2009) used as their sample cataloging job descriptions posted on a listserv between January 2005 and December 2006. As part of a pilot study, they coded several dozen job descriptions and determined the units of analysis to be the most frequently occurring categories, such as responsibilities, job titles, required job qualifications and skills, and preferred job qualifications and skills. An alternate unit of analysis within the latter study might have been the job posting itself.

There are as many as nine sampling methods that may be applied to the texts examined in a content analysis: random, systematic, stratified, varying probability, cluster, snowball, relevance, census, and convenience (Krippendorff, 2004). The sampling technique the investigator uses depends on the type of content analysis to be performed on the material chosen. If a researcher applies the quantitative method, then a systematic form of sampling is used so that the results may be generalized to a larger population; in this instance, random sampling is preferred (White & Marsh, 2006). If a researcher applies the qualitative content analysis method, then purposive sampling is applied, and data may continue to be gathered throughout the project as themes and patterns emerge.
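As a rough illustration of two of these techniques, the sketch below draws a simple random sample and a systematic (every k-th unit) sample from a hypothetical sampling frame of job postings; the frame size and sample size are invented for the example.

```python
import random

def random_sample(frame, n, seed=42):
    """Simple random sample of n units from the sampling frame."""
    return random.Random(seed).sample(frame, n)

def systematic_sample(frame, n, seed=42):
    """Systematic sample: every k-th unit after a random starting point."""
    k = len(frame) // n
    start = random.Random(seed).randrange(k)
    return frame[start::k][:n]

# Usage: draw 5 postings from a hypothetical frame of 349 job announcements.
frame = [f"posting_{i:03d}" for i in range(349)]
print(random_sample(frame, 5))
print(systematic_sample(frame, 5))
```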

Reliability and Validity

The reliability of a content analysis depends on whether, and to what extent, agreement can be achieved among coders, judges, observers, or measuring instruments (Krippendorff, 2004). Inter-coder reliability implies, for example, that all coders have consistently and repeatedly coded material the same way, regardless of which texts they examined. Reliability provides empirical grounding for the confidence that the interpretation of the data will mean the same thing to anyone who analyzes it, and that as much bias as possible has been removed from the interpretation. Reliability ensures that the results of a study may be replicated when the same research procedure is applied; it ensures that a measurement remains consistent throughout a study. A researcher may check the reliability of a variable by using Spearman’s rho, Scott’s pi, or Pearson’s r (Neuendorf, 2002). Krippendorff (2004) has also developed an alpha coefficient to aid reliability testing.
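Several of these coefficients share the same chance-corrected form. As one example, Scott’s pi for two coders assigning units to nominal categories may be written as follows; the notation is a standard presentation rather than a formula reproduced from the cited texts.

```latex
% Scott's pi for two coders and nominal categories:
%   A_o = observed proportion of units on which the coders agree
%   A_e = agreement expected by chance, from the pooled category proportions p_k
\pi = \frac{A_o - A_e}{1 - A_e}, \qquad A_e = \sum_{k} p_k^{2}
```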

Validity ensures that evidence independent of the study itself, and available for scrutiny, may corroborate the research results. Validity also gauges the accuracy of the measurement: it must measure what the researcher intends to measure (Neuendorf, 2002). The corroborating evidence may take the form of new observations, other available texts, open data, or competing theories and interpretations. The study results must be “true”; they must be what the researcher states they are.

If a measure is not reliable, then it cannot be considered valid (Neuendorf, 2002). It can be challenging for a researcher to balance reliability and validity; however, if the measurement is not accurate (valid), then it is less important that it has been consistently measured (reliable). Thus, it is better for an investigator to aim for high validity rather than high reliability.

Human Versus Computer Coding—Reliability and Validity Examined

One question researchers have considered as part of CATA is whether or not human coding is more reliable and valid than computer coding. After all, in spite of easy access to computers and the Internet, human coders often perform content analysis. This makes it a labor-intensive, costly, time-consuming, and tedious operation. Software that aids in content analysis while providing high validity and reliability would be highly desirable as a way to cut costs and increase the speed at which a corpus may be measured. Evans (1996) examined the available tools and techniques for computer-supported content analysis, but he did not evaluate the effectiveness of the tools against human coders.

When Nacos, et al. (1991) took an existing corpus that had already been examined by human coders and compared it to the results of a computer analysis of the same data set, they concluded that computers have the advantage when it comes to processing large volumes of text consistently, accurately, and quickly, especially when the goal of the study is a combined measure of content. However, they found that the advantages of using human coders over computers are not trivial. For example, when coding text, computers cannot recognize a problem, such as ambiguity, when the rules and data dictionaries are not as precise or comprehensive as needed. Nor can a computer determine when a particular paragraph being coded does not make sense within the context of the preceding paragraphs, and adjust accordingly. The study results indicate that the computer provides high reliability at the expense of validity, while the human coders provide high validity at some expense to reliability.

However, a previous study by Rosenberg, Schnurr, & Oxman (1990) concluded that human-scored methods provided less validity than the computerized method when used to make inferences about the psychological states and traits of a writer or speaker. These researchers compared one simple and one sophisticated computerized approach with a context-sensitive, human-scored system. Their final recommendation is that a simple, computerized content analysis should be the first procedure of any content analysis study design. These conflicting results are reinforced by Morris’s (1994) comparison of human and computer coding results in the management research domain: she found no significant difference overall between the results of human and computer coding, regardless of the unit of analysis.

Whereas Nacos, et al. (1991) compared human and computer coding at the paragraph level, and Rosenberg, Schnurr, & Oxman (1990) compared human and computer coding of speech samples five minutes in length, Morris compared human and computer coding at the sentence, word or phrase, paragraph, sentence density, paragraph density, and hit density units of analysis. She designed the study to compare not just human and computer coding, but also to determine whether or not the unit of analysis affects the results. She drew her sample population from the mission statements and letters to shareholders of Fortune 500 firms, and found no significant difference between the results obtained by human coders and those of the computer analysis for any unit of analysis.

Where differences between the results of human and computer coding did occur, they were attributable to two possible sources of error: either the human coders did not receive accurate training and coding instructions, or there was an error in the computer’s coding instructions. In either case, coding errors may be minimized by revising the computer analysis program during the study, in the same way that human coders sometimes receive additional training and experience during the course of an investigation.

There are several advantages to using CATA over human coders, among them high reliability, quantitative results (word counts, etc.) that would be time-consuming to produce manually, and the ability to process large volumes of data quickly and inexpensively (Morris, 1994). However, Morris also recognizes that computers have limitations that may impact validity, such as:

  • an inability to recognize ambiguous language or the intent of the communication within context;
  • an inability to resolve references to words appearing elsewhere, such as pronouns referring to nouns in other sentences;
  • word counts that produce quantitative data but may yield spurious results; and
  • the need to validate the reliability and validity of the computer results through a human-versus-computer inter-coder reliability pilot test.

Her final conclusion is that although there was no significant difference between the results of human and computer coding in her study, if a researcher wishes to use machine coding over human coders, the study design and research question must be appropriate for CATA.

In a more recent study, King & Lowe (2003) examined the results of human versus machine coding of international conflict events data, including automatically generated events data. Like Morris (1994), they found no significant difference between the results obtained by human coders and those obtained by computer. Unlike Morris (1994), however, King & Lowe (2003) recommend using computers over humans in all studies because of the reduced expense; they do not recommend making the choice on a case-by-case basis.

In conclusion, while there are advantages to using computers over humans because of their high reliability, Spurgin and Wildemuth (2009) caution that if the rule sets are not consistent, there may be questions about internal consistency (that is, reliability). Therefore, in order to use CATA for a content analysis, a researcher must have appropriate research questions and study design, must understand the software she is using, and must choose the right software for the job. If a sample is small enough, it may be faster for two people to code the data than to set up the software and data dictionary to process it. Again, the decision to use human or computer coders should be made on a case-by-case basis.

Quantitative Versus Qualitative Content Analysis

The most basic form of text analysis is the quantification of text, yet doing so reduces text analysis to a simple tallying activity. The value of a content analysis lies in discovering the context and meaning that may be hidden within the categorized message. While the best content analyses apply both quantitative and qualitative methods (Krippendorff, 2004; Weber, 1990), each method is based on a slightly different process: a quantitative content analysis is based on the deductive, scientific method, while the qualitative approach is based on an inductive, grounded theory process.

The Steps: Quantitative Data Analysis

The core steps of the scientific method applied to any study in any domain are: theory, operationalization, and observation. The scientific method operationalizes deductive logic, which goes from the more general to the more specific in the following order: theory, hypothesis, observation, and empirical generalization (Babbie, 2001). As applied in ILS, Crawford and Stucki (1990) identified eight steps:

  1. Establish a question.
  2. Devise a hypothesis or question to be tested.
  3. Design the study methodology.
  4. Create a research team, write a proposal, and receive funds.
  5. Set up the research team.
  6. Gather the data, code the data, and test the hypothesis.
  7. Analyze the data to determine if it supports the hypotheses or provides an answer to the research question.
  8. Report the results to the larger community for peer-review and to contribute to the field.

In theory, a quantitative content analysis follows the general outline of the scientific method. According to Neuendorf (2002), a typical content analysis process comprises nine steps.

  1. Theory and rationale: What are the questions? The hypotheses? What body of work will be examined, and why? Why is this important? Does the current literature address these questions?
  2. Conceptualization: What dictionary-type definitions will you use with what variables? What will you sample, and what sample will you gather and why?
  3. Operationalization: What type of a priori coding scheme will the researcher use? Do the measures match the conceptualizations? What units will be sampled? How do you determine and verify validity and reliability?
  4. Develop the Coding Scheme: If human coders are used, what codebook and coding form will be used? If a computer is used, then what dictionary will be created or re-used?
  5. Sample: What sample size does the researcher need to be valid? How will the researcher randomly sample the data?
  6. Run a Pilot Test and Check Inter-coder Reliability: How much do the coders agree during a pilot test? Are the variables reliable? Have the codebook and form been revised as needed? Has the researcher run a spot test of humans versus the computer to check for human-computer reliability?
  7. Code the Data: If human coders are used, are there at least two coders, and do their coding assignments overlap by at least 10% so that reliability can be checked (see the sketch following this list)? If a computer is used, has the researcher spot-checked for validity?
  8. Calculate the Reliability: What reliability figure is used for each variable? Pearson’s r, Spearman’s rho, Krippendorf’s alpha, Cohen’s kappa, or Scott’s pi? And why?
  9. Tabulate and Report the Results: What statistical operation is appropriate for the data? Univariate? Cross-tabulation? Are there other bivariate and multivariate techniques that may be run on the data? Why were these techniques used?

Content analysis as a method examines not just the mere count of some unit within a corpus; a researcher applying the method must also be concerned with latent meaning within the text. The study design itself must be rigorous and follow a logical design. It is entirely possible to design a study that examines both latent and manifest content, yet also follows the reasoning of the scientific method.

The Steps: Qualitative Data Analysis

There are multiple approaches to qualitative data analysis. Miles & Huberman (1994) identified three: interpretivism, social anthropology, and collaborative social research. An investigator applying a qualitative content analysis design would be applying the second, because she is examining both manifest and latent content for patterns. A researcher applying a qualitative, inductive content analysis uses the same four steps as with the scientific method, except that instead of moving from the general to the specific, the process flows from the specific to the general (Babbie, 2001).

The steps involved are almost the reverse of the deductive method: the researcher begins with an observation, discovers patterns, creates a hypothesis, and then proposes a theory. Glaser (1965) named this approach the constant comparative method, and it became the foundation of Glaser & Strauss’ (1967) Grounded Theory, which is a systematic, iterative method for developing a theory from raw data. The research questions guide the data gathering and analysis, but as patterns and themes arise from the data analysis, additional questions may be proposed (White & Marsh, 2006) and new categories may be coded until the saturation point is reached (Glaser, 1965). If an investigator is attempting to develop a theory, then the coding scheme develops from the data, but a priori schemes may be used if the researcher is verifying an existing theory or describing a particular phenomenon (Zhang & Wildemuth, 2009; White & Marsh, 2006). The use of a qualitative design for a content analysis study does not preclude the use of deductive reasoning, or the re-use of concepts or variables from previous studies (Zhang & Wildemuth, 2009).

A qualitative content analysis follows a systematic series of steps, some of which overlap with quantitative content analysis. Krippendorff (2004) writes that both quantitative and qualitative content analysis sample text, unitize text, contextualize the text, and have specific research questions in mind. Zhang & Wildemuth (2009) outline the process of qualitative content analysis as a series of eight steps, once the initial research question has been developed.

  1. Prepare the data: Can your data be transformed into written text? Is the choice of content justified by what the researcher wants to know?
  2. Define the Unit of Analysis: What theme is the coding unit? How large is the instance of that theme? Is the theme reflected in a paragraph or within an entire document?
  3. Develop Categories and a Coding Scheme: Will the coding scheme be developed as patterns and themes emerge, or will it be developed from previous studies or theories?
  4. Test Your Coding Scheme on a Sample of Text: How consistent is the inter-coder agreement in the pilot test?
  5. Code All the Text: Has the researcher repeatedly checked the consistency of the inter-coder agreement?
  6. Assess Your Coding Consistency: As new coding categories are added, are the coders still in agreement for the entire corpus?
  7. Draw Conclusions from the Coded Data: What themes and patterns have emerged from the data? What sense can you make of these patterns?
  8. Report Your Methods and Findings: How well can the study be replicated? Has the researcher presented all of the necessary information to replicate the study? Are the results important? If so, why?

Similar to validity, the study design, data gathering, and results of a qualitative content analysis must have a degree of “truth” so that a peer review of the results will give other researchers and students confidence that the study results are accurate. Lincoln & Guba (1985) describe this “truth” as having four dimensions: credibility, transferability, dependability, and confirmability. Credibility is similar to internal validity, in that the data gathered accurately reflect the research question; that is, the study data measure what the research questions seek to measure. Transferability is similar to external validity, wherein the results of a study are applicable from one frame of reference to another. Dependability ensures that a study may be replicated, and confirmability is assessed by examining intercoder reliability; it ensures the objectivity of the researchers such that there is “conceptual consistency between observation and conclusion” (White & Marsh, 2006).

Examples

Two examples of content analysis are discussed in this section. The first study is a job description analysis, which is a fairly common application of content analysis in ILS. Park, Lu, & Marion (2009) examined job descriptions for catalogers over a two-year period to determine what skills and competencies are desired by employers. The authors provided an analysis that used both straight frequency counts and statistical analysis to examine the data. The second study (Kracker & Wang, 2002) analyzed students’ perceptions of research and research anxiety by using a mixed-methods (qualitative and quantitative) design. The study results confirmed Kuhlthau’s Information Search Process (ISP) model.

Example 1: “Cataloging Professionals in the Digital Environment: A Content Analysis of Job Descriptions”

As noted previously, Park, Lu, & Marion (2009) applied a quantitative content analysis to assess the current skill requirements for catalogers. The authors identified emerging technology-related roles and competencies and discussed how these new requirements relate to traditional cataloging skills.

Park, Lu, & Marion (2009) gathered 349 distinct cataloging job descriptions from an established online listserv over a two-year period. The researchers followed procedures for data analysis used by Marion in previous peer-reviewed publications, which included co-term and co-citation analysis. The investigators determined the coding scheme a priori, and no additional categories were added once the data gathering phase began after the initial pilot study. They initially achieved intercoder agreement by manually coding 55 job descriptions. The authors used content-analysis software and created the dictionary based on a combination of sources, including counts of the most frequently occurring terms, a literature review, and their own combined professional knowledge.

The research team entered all complete job descriptions into the content-analysis software. The initial output of the software was a frequency count of terms. The researchers then converted this count of terms to a matrix of co-occurrence similarity in order to offset any large differences in commonly occurring terms. More importantly, the co-occurrence similarity provided more useful information about the structure of the cataloging profession. Finally, the team created a visual graph of the data and used cluster analysis to explore a co-occurrence profile for each category term. The researchers also used hierarchical cluster analysis and multi-dimensional scaling to identify clusters of categories. Using these clusters, they generated a map to determine patterns in the data.
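The sketch below illustrates the general technique of converting a document-term matrix into a term co-occurrence similarity matrix and clustering the terms hierarchically. The tiny binary matrix, the term list, and the normalization and linkage choices are illustrative assumptions, not the exact procedure or data used by Park, Lu, & Marion (2009).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical binary document-term matrix: rows are job descriptions,
# columns are category terms (1 = the term appears in the posting).
terms = ["metadata", "MARC", "XML", "supervision", "training"]
X = np.array([
    [1, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [1, 1, 1, 1, 0],
])

# Term-by-term co-occurrence counts, then cosine-normalized similarity.
cooc = X.T @ X
norms = np.sqrt(np.diag(cooc))
similarity = cooc / np.outer(norms, norms)

# Hierarchical (average-linkage) clustering on the corresponding distances.
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)
condensed = squareform(distance, checks=False)
clusters = fcluster(linkage(condensed, method="average"), t=2, criterion="maxclust")
print(dict(zip(terms, clusters)))  # cluster labels for each (invented) term
```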

The authors presented the results around four categories: job titles, required qualifications and skills, preferred qualifications and skills, and responsibilities. The tables detailing the most frequently occurring job titles, qualifications and skills, and responsibilities, with frequencies, percentages, and representative terms and phrases, are clearly rendered. The dendrograms and the map of the same information provide an easy visual cue as to where the clusters lie in the data.

There are four areas where the study could be improved. First, the authors mention manually coding 55 job descriptions, but they do not describe how the results of this manual coding compared to the output of the content analysis software during the pilot phase. Did they pilot test the software results against human coders? The researchers do not mention spot-checking the software’s results against a human coder during the main study, either. Second, the coding scheme consisted of eight categories based on commonly used terms in job descriptions (e.g., “background information”, “job responsibilities”), and the dictionary for the content analysis software was also custom built. Are there existing schemes from psychology, business, or human resources that could have been used in place of these custom schemes? The authors do not say whether or not they looked for existing job-related categorizations prior to building the manual coding scheme and dictionary.

Third, do the results of the study correlate with any existing theories, for example, on how jobs change over time? The authors do not state whether they were trying to support an existing theory, or whether they could have done so. Fourth, the authors did not provide any indication that they statistically determined the sample size. How do we know that 349 is a valid sample of the population? The authors did cite this as a limitation in the conclusion of the paper.

In conclusion, Park, Lu, & Marion (2009) used quantitative techniques to perform a content analysis of cataloging job positions in order to inform current catalogers and LIS curricula developers of evolving skill sets. The authors performed a basic term frequency count (which provided for a manifest content analysis) supplemented by established statistical techniques such as co-occurrence similarity values (which provided for latent content analysis). They used content analysis software to analyze the full texts, thus aiding reliability. The investigators manually coded the text during the pilot phase. The sample population was chosen from a publicly available listserv, so the study may be replicated. The authors provided a sample job description and a list of digital environment job titles in the appendices. While this content analysis has some limitations, overall, the authors achieved their goal of assessing the then-current state of cataloging skill sets and responsibilities.

Example 2: “Research Anxiety and Students’ Perceptions of Research: An Experiment. Part II. Content Analysis of Their Writings on Two Experiences”

Kracker and Wang (2002) conducted a two-part experiment that examined both quantitative and qualitative data. The results of the quantitative study were presented in a separate first paper and will not be discussed in this section. The second paper, which is described here, presented the results of the qualitative analysis. That content analysis examined study participants’ descriptions of both a past memorable research experience and a current research paper in order to determine students’ perceptions of research.

The researchers’ sample consisted of 90 students from a technical and professional writing course. Each student was assigned either to a control group or to an experimental group. Each person in both groups completed a pre-test questionnaire that asked the students to recall their most memorable research experience to date and to write a paragraph describing their thoughts and feelings as they worked through that assignment from start to completion. The students in the experimental group then attended a lecture on Kuhlthau’s ISP model; the control group attended a placebo lecture. The students in the class were required to complete a research paper as part of the course. Once the research paper was turned in at the end of the term, the students from both the experimental and control groups were asked to describe their thoughts and feelings about this recent research experience.

Content analysis techniques were used initially to assign as categories the 16 feelings identified in Kuhlthau’s ISP model. The researchers added categories as themes emerged from the data and classified feelings into three meta-groups: emotional states related to the process, perceptions of the process, and affinity to research. The units of text were coded at the subcategory level, and the authors provided examples of the coding schemes and classifications in the appendices. The two coders cross-checked their coding and achieved 90% intercoder agreement within two rounds for eight of the thirteen categories.

However, methodologists such as Krippendorff (2004) and Neuendorf (2002) are firm that percentage agreement is a misleading measure that overstates the real value of the intercoder agreement. In addition, the authors determined intercoder agreement for affective and cognitive coding by using Holsti’s (1969) method. This method, similar to percent agreement, does not take chance into account and is not as useful a measure as other intercoder agreement statistics (Spurgin & Wildemuth, 2009). The study design could therefore have been improved by using Cohen’s kappa, which is often used in behavioral research and is a modification of Scott’s pi; Scott’s pi is applicable to nominal data with two coders as well.
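A small numerical example makes the point about chance correction concrete: when one category dominates the coding, raw percent agreement looks high while a chance-corrected coefficient such as Cohen’s kappa is much lower. The coder labels below are hypothetical and are not drawn from Kracker and Wang’s data.

```python
from collections import Counter

def percent_agreement(a, b):
    """Raw proportion of units on which two coders agree (no chance correction)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for the agreement expected
    by chance, estimated from each coder's own category proportions."""
    n = len(a)
    observed = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(a) | set(b))
    return (observed - expected) / (1 - expected)

# Hypothetical codes for ten units; heavy use of one category inflates raw agreement.
coder_1 = ["anxiety"] * 8 + ["relief", "confidence"]
coder_2 = ["anxiety"] * 8 + ["confidence", "relief"]
print(percent_agreement(coder_1, coder_2))          # 0.8
print(round(cohens_kappa(coder_1, coder_2), 2))     # 0.41
```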

The numbers and percentages for feelings and thoughts, as well as the relationship between thoughts and feelings by participant, were clearly presented in tables. The authors used the content analysis data to examine feelings in relation to demographic factors and groups, and broke negative emotional states down into clusters. Kracker and Wang (2002) also examined thoughts across groups in relation to Kuhlthau’s ISP model, and the results of the analysis confirmed that model. While this study is a simple content analysis, the authors examined manifest content, discovered latent content, and integrated both qualitative and quantitative content analysis methods into their study.

This experiment is an example of using both qualitative and quantitative content analysis methods to measure perceptions, that is, the feelings and thoughts of study participants about a particular topic. The authors performed a basic quantitative content analysis to count words related to the study participants’ thoughts and feelings. The researchers began with a defined coding scheme (quantitative content analysis), but added to it as themes emerged (qualitative content analysis). They then mapped the emergent themes to an existing theory (Kuhlthau’s ISP model), which is an example of using deductive reasoning to support an existing theory and add to the current body of knowledge. The authors provided coding words, classifications, and other schemes in the appendices, which adds to the validity and reliability, as well as the replicability and generalizability, of the results. This example included both quantitative results and description, providing the reader with information about the factors that affect students’ perceptions of research.

Conclusion

Content analysis is a systematic approach to the analysis of a corpus of information categorized as data. The approach offers a deductive, quantitative path for researchers who wish to test an existing theory, yet it is flexible enough to be used by an investigator who wishes to establish a new theory grounded in the data. The qualitative and quantitative content analysis methods overlap somewhat in their operationalization, but each is grounded in established theory: the quantitative approach is based on the deductive scientific method, and the qualitative approach is based on the inductive grounded theory model. Both sample texts, unitize the text, contextualize what is being read, and seek answers to defined research questions (Krippendorff, 2004). Both approaches require the evaluation of reliability and validity, i.e., trustworthiness, and may use human and/or computer coding and analysis. Through careful study design, data gathering, coding, analysis, and reporting, content analysis can provide valuable insight into both manifest and latent content.

 

References

Babbie, E. (2001). The Practice of Social Research (9th Edition). Belmont, CA: Wadsworth/Thomson Learning.

Bates, M.J. (1999). The Invisible Substrate of Information Science. Journal of the American Society for Information Science, 50(12), 1043-1050.

Berelson, B. (1952). Content analysis in communications research. New York, NY: Free Press.

Bollen, J., Mao, H., & Zeng, X. (2010). Twitter mood predicts the stock market. Journal of Computational Science, 2(1), 1-8.

Crawford, S. & Stucki, L. (1990). Peer Review and the Changing Research Record. Journal of the American Society for Information Science, 41(3), 223-228.

Evans, W. (1996). Computer-Supported Content Analysis: Trends, Tools, and Techniques. Social Science Computer Review, 14(3), 269-279.

Glaser, B.G. (1965). The Constant Comparative Method of Qualitative Analysis. Social Problems, 12(4), 436-445.

Glaser, B.G. & Strauss, A.L. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago, IL: Aldine Publishing Company.

Holsti, O.R. (1969). Content Analysis for the Social Sciences and Humanities. Reading, MA: Addison-Wesley.

King, G. & Lowe, W. (2003). An Automated Information Extraction Tool for International Conflict Data with Performance as Good as Human Coders: A Rare Events Evaluation Design. International Organization, 57(3), 617-642.

Kracker, J. & Wang, P. (2002). Research Anxiety and Students’ Perceptions of Research: An Experiment. Part II. Content Analysis of Their Writings on Two Experiences. Journal of the American Society for Information Science, 53(4), 295-307.

Krippendorff, K. (2004). Content Analysis: An Introduction to Its Methodology (2nd ed.). Thousand Oaks, CA: Sage Publications.

Lincoln, Y.S. & Guba, E.G. (1985). Naturalistic Inquiry. Beverly Hills, CA: Sage Publications.

Miles, M.B. & Huberman, A.M. (1994). Qualitative Data Analysis (2nd ed.). Thousand Oaks, CA: Sage Publications.

Morris, R. (1994). Computerized Content Analysis in Management Research: A Demonstration of Advantages and Limitations. Journal of Management, 20(4), 903-931.

Nacos, B.L., Shapiro, R.Y., Young, J.T., Fan, D.P., Kjellstrand, T., & McCaa, C. (1991). Content Analysis of News Reports: Comparing Human Coding and a Computer-Assisted Method. Communication, 12(2), 111-128.

Neuendorf, K.A. (2002). The Content Analysis Guidebook. Thousand Oaks, CA: Sage Publications.

Park, J., Lu, C. & Marion, L. (2009). Cataloging Professionals in the Digital Environment: A Content Analysis of Job Descriptions. Journal of the American Society for Information Science, 60(4), 844-857.

Rosenberg, S. D., Schnurr, P. P., & Oxman, T. E. (1990). Content Analysis: A Comparison of Manual and Computerized Systems. Journal of Personality Assessment, 54(1/2), 298-310.

Spurgin, K.M. & Wildemuth, B.M. (2009). Content Analysis. In Applications of Social Research Methods to Questions in Information and Library Science (pp. 297-307). Westport, CT: Libraries Unlimited.

Weber, R.P. (1990). Basic Content Analysis (2nd ed.). Newbury Park, CA: Sage Publications.

White, G.W. (1999). Academic Subject Specialist Positions in the United States: A Content Analysis of Announcements from 1990 through 1998. The Journal of Academic Librarianship, 25(5), 372-382.

White, M.D. & Marsh, E.E. (2006). Content Analysis: A Flexible Methodology. Library Trends, 55(1), 22-45.

Zhang, Y. & Wildemuth, B.M. (2009). Qualitative Analysis of Content. In Applications of Social Research Methods to Questions in Information and Library Science (pp. 308-319). Westport, CT: Libraries Unlimited.
