Improving Comprehension of Science Content: Generating Self-Explanation Questions
and Creating Explanatory Answers
Peggie Clelland and Kay Camperell
Learning in many content area classrooms depends heavily on remembering facts and memorizing definitions. As a result, students often achieve only shallow levels of comprehension and lack the skills necessary for deeper understanding. Teaching students to generate self-explanation questions and answers can improve their comprehension of teacher lectures and of text they read. Translating such instructional strategies from the research literature into classroom practice, however, is problematic and challenging. The authors share lessons learned about working with teachers and students, as well as about the time it takes to implement a self-explanation strategy in heterogeneously grouped classrooms using a gradual release instructional model.
Ms. Johnson (pseudonym) begins her science class by asking her students to write an explanation in response to the science question she has posted on the board: “Explain how commensalism and parasitism relationships may affect a population.” Her students read the question and then discuss it with their peer group while Ms. Johnson takes roll. She quickly walks around the classroom to monitor student engagement. Her students then write their explanatory answer, following a rubric they have been given. They are encouraged to include evidence, to use their science vocabulary, and to use reasoning to link the evidence to their claim (Moje, Peek-Brown, Sutherland, Marx, Blumenfeld, & Krajcik, 2004). Once students finish writing their explanations, Ms. Johnson invites two or three volunteers to read their explanations to the class. She follows each explanation with a brief discussion of the primary points and/or misconceptions. Students save their work in their science notebooks.
Next, Ms. Johnson quickly reviews students’ homework assignments, answering their questions. She has taught her students to use a self-explanation strategy as a way of helping themselves better understand reading assignments from their textbook (Chi, de Leeuw, Chiu, & LaVancher, 1994). Ms. Johnson encourages her students to write down their questions and to attempt explanatory answers. Sam shares one of his questions and explanations with the class: “Explain how a prey and a host are similar. Prey and hosts are alike in the fact that both give food to another organism. Both are harmed, but the prey is killed and the host usually is not. A tapeworm takes digested food from a host but does not kill it. The only way the host organism is harmed is in its lack of food because the tapeworm takes it. In parasitism the parasite lives off the host for its entire life. In predation, the predator lives off many prey. Both give food to another.” Ms. Johnson recognizes that Sam understands the basic premise of each relationship, but he is not able to clearly identify the fact that in predation, the prey does not give itself to its predator but is involuntarily taken.
Ms. Johnson uses students’ questions and explanations such as Sam’s to introduce the day’s science lesson on the energy pyramid and to connect it to previous discussions. Students are asked to read a short section in their science text and to discuss it briefly with a peer group (3-5 minutes). Ms. Johnson then presents new material and follows it with a short demonstration of an energy pyramid. She concludes her lesson with a scenario: for homework, students are asked to search the Internet for examples, write questions, and answer at least one of their questions with an explanation.
During a recent dissertation study (Clelland, 2006), we worked with middle school science teachers to integrate the type of self-explanation instruction described above into their own lessons. We had high expectations that the teachers would see the value and importance of integrating a self-explanation strategy into their daily instruction. However, even though we worked with the teachers for over six weeks, they never seemed to completely invest in the strategy. Moreover, we rarely saw instruction like Ms. Johnson’s in the classrooms we observed when we were trying to identify teachers who would be willing to work with us on the study. Emphasis in most of the classes was on rote learning, which leads to shallow knowledge. Shallow knowledge results from classroom practices in which teacher lectures are the primary delivery system and memory tests are the primary assessment tool (Graesser, Person, & Hu, 2002). In most cases, the teachers’ questions required only single word or short phrase responses that tapped factual recall (e.g., multiple-choice, true-false, fill-in-the-blank) rather than deeper conceptual understanding that involves students in generating inferences, problem solving, reasoning, and connecting ideas they were learning with their background knowledge (Graesser, Swamer, Baggett, & Sell, 1996; Graesser, Person, et al., 2002; Graesser & Olde, 2003).
At our 2005 ARF session, we set out to present the results of Peggie’s dissertation research. Interactions with the audience led to an expanded discussion of translating theory and research into practice, and the majority of our session focused on the difficulties of conducting research in classrooms. The major purpose of this article, therefore, is to reflect on that discussion and present what we learned about implementing self-explanation question/answer instruction in middle school classrooms. We begin, however, by briefly reviewing the comprehension model that served as a theoretical framework for the study and past research on self-explanation and self-questioning.
To frame our research, we used the theory of comprehension developed by Graesser and his colleagues (Graesser & Olde, 2003; Graesser, León, & Otero, 2002; Graesser, Swamer, et al., 1996) because of its emphasis on the importance of why-questions, or higher-order questions, in comprehension. The theory corresponds with an extant group of hybrid processing models that incorporate both symbolic and connectionist computational approaches to simulating comprehension processes (Graesser, León, et al., 2002; Graesser, Swamer, et al., 1996; Kintsch & Kintsch, 2005; McNamara & Kintsch, 1996; Singer & Kintsch, 2002). In these theories, readers construct the meaning of a text at multiple levels that are used to differentiate remembering from learning. Graesser and his colleagues stress the importance of readers asking and answering why-questions to get at the meaning of a text beyond the words on the page (i.e., the surface code) and the literal-level meaning of the text (i.e., the text base). They explain that asking students low-level, factual questions induces superficial processing because it focuses students’ efforts on remembering facts and ideas (i.e., the surface code and text base) rather than on learning new ideas.
Students who achieve deep levels of comprehension, on the other hand, are able to link multiple ideas within and across sections of text with their background knowledge, particularly prior knowledge related to the subject matter of the text (Graesser, McNamara, & Louwerse, 2003). To help students achieve deeper understanding, Graesser, Person, et al. (2002) argue that teachers need to ask and students need to learn how to ask themselves why-questions that induce them to actively explain text ideas to themselves in order to form situation models. Students who engage in this type of processing learn from reading rather than simply remembering what they read: They are able to analyze and synthesize text information and make inferences that help them integrate what they read with other related ideas.
Connecting this back to our classroom scenario, students who focus their comprehension efforts on remembering what they read may simply memorize definitions for the words predator and prey. In contrast, students who learn conceptually are able to decontextualize the information, link it to relevant concepts they have learned previously, and apply their new knowledge. An example would be a student who can explore the predator/prey relationship more broadly, explaining that when a food source within a predator/prey relationship decreases, other predators within the food chain lose their food sources, thus creating new problems.
Self-Explanation and Self-Questioning
Recently, the National Reading Panel (NRP, 2000) report suggested that self-questioning is one approach that promotes deeper understanding. Our review of the self-questioning literature led us to research by Chi (2000) concerning the effects of self-explanation on learning and to King’s research (1989, 1990, 1992, 1994) on the effects of teaching students to ask themselves questions as they read.
Chi, Bassok, Lewis, Reimann, and Glaser (1989) and Chi, de Leeuw, et al. (1994) found that students who generated self-explanations learned more from reading. These researchers examined the effects of having undergraduate students think aloud as they worked out examples of physics problems and having junior high students think aloud as they read a biology text. In both instances, the researchers found that students who explained how text ideas connected with prior knowledge learned more because they stopped and attempted to reconcile what they already knew with what they read. Chi (2000) suggests that self-explanations mediate learning if the explanations lead students to interpret what they read so that they process the information as new concepts and principles rather than as facts to be remembered.
King (1990, 1992), on the other hand, studied the effects of having elementary and middle school students generate self-questions over content presented via lectures. Results of her research demonstrate that an explicit instructional model can be employed successfully in classrooms to teach students how to ask and answer explanatory questions. She taught students how to generate questions that integrated new information with prior knowledge, such as how and why questions, and how to explain new ideas to each other. Students asked and answered each others’ questions in peer discussion groups and were provided question stems to help them generate questions. King found that students who were prompted to explain new ideas and to ask and answer questions requiring them to integrate new ideas with prior knowledge achieved higher scores than untrained students on both literal and complex understanding (i.e., inferential, explanatory, and knowledge-extending) questions.
Together, results of this research (Chi, Bassok, et al., 1989; Chi, de Leeuw, et al., 1994; King 1989, 1990, 1992, 1994) suggest that students who generate self-explanations achieve deeper levels of understanding and learning. Instruction that involves this form of self-explanation creates opportunities for students to (a) interact independently with text (Graesser, Person, et al., 2002; Chi, 2000; King, 1989), (b) collaborate with peers (King, 1990; King, Staffieri, & Adelgais, 1998), and (c) ask and answer their own questions (Graesser & Olde, 2003; King, 1989, 1992).
Self-explanation is a self-directed cognitive process that requires learners to consider what they hear or read and to connect it to their background knowledge – forming a deeper level of understanding (Graesser, Person, et al., 2002). Answers to self-explanation questions include a statement that illustrates students’ understanding of a concept or principle, evidence to support their claim, and reasoning that links the evidence to the claim (King, 1994; Moje, et al., 2004). When students use this strategy, they build their comprehension of text at a deeper level.
In many of the middle schools that we observed, teachers primarily lectured and gave memory tests. We were interested in an approach that would engage students more actively in their own learning and that promoted more conceptual learning. Thus, we attempted to combine Chi’s (2000) work with self-explanation and King’s (1990, 1992, 1994) self-questioning approach to examine their effects on students’ comprehension.
Examining Self-Explanation in Classrooms
Our purpose is not to present a full report of Peggie’s study (Clelland, 2006). We will, however, present a brief overview of the study and results. We began with a pilot study in 4 eighth-grade science classrooms in which students were taught how to generate self-explanation questions for reading assignments about the rock cycle. We found that students answered essay questions with one or two words and rarely explained enough to demonstrate they could apply information that they had studied about rocks and minerals. In the subsequent formal quasi-experimental study (Clelland, 2006), we worked with 3 groups of heterogeneously grouped eighth-grade science students who studied an ecology unit. Teachers of each group presented some of the information via lectures and by having students read from their science textbooks. Students also engaged in other classroom activities such as creating posters, watching teacher demonstrations, and viewing videos related to the science content. One group was taught only to generate self-explanation questions. A second treatment group was taught how to generate self-explanation questions and how to create answers using a rubric as a guide. Our third group served as a comparison group. They received the same science instruction as the other 2 groups, but they were not taught how to generate self-explanation questions or answers. They were, however, instructed by their teacher to write some questions (without instruction on how to write them) that they could use in a class review at the end of the unit. Students in each class worked in small groups to generate and answer each other’s questions. The study was conducted over a 6-week period, during which four quizzes were administered that included both multiple-choice memory questions and short essay questions that required students to apply ideas they had studied. A maintenance test over different content was administered two weeks following the study.
A pretest was used to assess students’ prior knowledge of science content, and these scores served as a covariate in analyses of the data. Here, we focus on the analyses of the essay quiz scores. A MANCOVA on the essay quiz scores revealed a significant difference among the groups (Wilks’s Lambda = .671, F(8,120) = 3.308, p = .05). Thus, individual ANCOVA tests were performed and revealed statistically significant differences on the third quiz (F(2,63) = 4.453, MS = 102.384, p < .05) and the fourth quiz (F(2,63) = 6.876, MS = 120.790, p < .05). The individual ANCOVA for the maintenance essay test was also significant (F(2,63) = 4.853, MS = 193.028, p < .05). Bonferroni post hoc tests revealed that students in the question/answer group achieved higher scores on the third quiz and that students in both the self-explanation question-only group and the question/answer group outperformed the comparison group on the fourth quiz and the maintenance essay quiz. By the third quiz, we began to see consistent improvement on our essay tests in both the question/answer and question-only groups. Although our results were mixed, we were encouraged by them because they suggest that teaching students to ask self-explanation questions improves their comprehension of science content learned via teacher lectures and from reading their science text. Nevertheless, we experienced many challenges implementing a model such as this in heterogeneously grouped science classrooms. The remainder of this article focuses on these challenges.
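For readers less familiar with ANCOVA, the logic of the analyses above can be illustrated with a small sketch: regress essay scores on dummy-coded group membership plus the pretest covariate, then compare that model against a covariate-only model with an F test. This is a hypothetical illustration only, using invented data and our own function names; it is not the study’s actual analysis, data, or software.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols_sse(X, y):
    """Residual sum of squares from an ordinary least squares fit."""
    n, k = len(y), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    beta = solve(XtX, Xty)
    return sum((y[i] - sum(X[i][a] * beta[a] for a in range(k))) ** 2 for i in range(n))

def ancova_f(groups, pretest, essay):
    """F statistic for the group effect, adjusting for the pretest covariate."""
    n = len(essay)
    levels = sorted(set(groups))
    # Full model: intercept + group dummies (reference = first level) + covariate.
    X_full = [[1.0] + [1.0 if g == lv else 0.0 for lv in levels[1:]] + [p]
              for g, p in zip(groups, pretest)]
    # Reduced model: intercept + covariate only.
    X_red = [[1.0, p] for p in pretest]
    sse_f, sse_r = ols_sse(X_full, essay), ols_sse(X_red, essay)
    df1 = len(levels) - 1            # numerator df: number of group dummies
    df2 = n - len(X_full[0])         # denominator df: n minus full-model parameters
    return ((sse_r - sse_f) / df1) / (sse_f / df2), df1, df2
```

With three groups (as in the study) and, say, 15 hypothetical students, the numerator degrees of freedom would be 2 and the resulting F would be compared with the critical value at the chosen alpha level, just as the reported F(2,63) values were.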
Translating instructional strategies described in the research literature (e.g., King, 1992, 1994; Chi, 2000) into classroom practice is problematic and challenging. Between our preliminary studies and our formal study, we worked with 10 middle-school science classrooms in both Utah and Washington, and we encountered similar problems in all of them. Teachers were unprepared to implement questioning approaches that required more than simple recall. Students resisted doing the work involved in asking and answering conceptual questions, and it took more time than anticipated before the effects of using the strategy were reflected in students’ scores on essay tests.
What We Learned About Working with Teachers
The most encouraging aspect of both our pilot efforts and the formal study was how willing the teachers were to be part of our research. We worked with 3 wonderful teachers who were excited about collaborating with university faculty and expressed how pleased they were to be able to offer us their help and support. Still, the teachers we worked with were not well prepared to teach students how to generate self-explanation questions. Furthermore, we were not sure whether the teachers themselves knew the difference between memory-level questions (i.e., recall or simple knowledge questions) and Bloom’s (1956) higher-order questions (i.e., application, analysis, synthesis, and evaluation), even though we had spent several hours showing them examples of different question types and talking about differences among them. The teachers struggled with asking higher-level questions. This became strikingly apparent when they had difficulty using student-generated questions to model examples and provide feedback about questions at different levels. The teachers also often presented examples of supposedly higher-level questions that actually could be answered easily from explicit statements in the text, lectures, or videos.
We also spent several hours over 3 days explaining and demonstrating strategy instruction and talking about a gradual release model, which involves informing students of a particular strategy, including how to employ the strategy and in what cases it might be useful (Duffy, 2002; Pressley et al., 1992). Nevertheless, the teachers we worked with seemed uncomfortable employing strategy instruction using a gradual release model. They instead told students to ask higher-level questions without showing them how to do so. Indeed, the teachers were not used to modeling effective questions, guiding student practice, and monitoring student progress while providing feedback to help students improve their questions. Clearly, the length of instruction we provided on asking higher-level questions and the training we gave on strategy instruction were not sufficient. Once we realized that the teachers were unable to provide this instruction effectively, Peggie made the decision to provide the instruction related to the self-explanation strategy herself, and the classroom teachers focused on delivering content. Surprisingly, in the studies we reviewed, classroom teachers were rarely provided with more than a few hours of training.
What We Learned about Working with Students
According to Wigfield and Tonks (2004), intrinsically motivated students possess a desire to learn and prefer to be challenged and engaged. We found, however, that students in Peggie’s study (Clelland, 2006) resisted the extra effort required to ask and answer questions that involved explanations. They were more familiar and comfortable with lectures and teachers telling them what they needed to know (e.g., science facts and vocabulary definitions). The students we worked with in both our preliminary efforts and Peggie’s formal study wanted to know only what they were expected to know for the tests. They often commented that what we were asking them to do was too hard, and the novelty of participating in a study kept them on task for only the first week of the formal study. One student, for example, withdrew from the study thinking that he would not have to write and answer any more questions. He failed to realize that he would still be responsible for learning the science content covered in the remainder of the study. Another student claimed that he could not believe a university would “let someone do research like this”! Eventually, we had to give students participation points for the questions they generated, with memory questions receiving one point and self-explanation questions higher points. Indeed, besides the participation points, the only leverage we had was that students knew they had to pass tests over the content as part of their science grade. Perhaps researchers in the studies we read did not encounter this resistance, but we find it difficult to believe that this strategy was more effortful than the think-aloud procedure students were taught in Chi’s studies. Indeed, in one of these studies (Chi, de Leeuw, et al., 1994), eighth-grade students had to stop and think aloud after every sentence they read!
In addition to the resistance we encountered from students, we also found that it took four weeks before students’ comments in the treatment groups indicated that they no longer required prompt cards to help them generate questions. Moreover, it took two weeks longer to see improvement in students’ answers to essay questions. As previously noted, one treatment group was taught how to generate self-explanation questions only, and the other treatment group was taught how to generate self-explanation questions and how to write explanatory answers. In both instances, students essentially were required to write in a new genre (an explanatory answer), and it took the entire six weeks of the study before we saw consistent improvements on students’ essay tests. Again, we could not have anticipated this from the studies we reviewed (Chi, 2000; King, 1990, 1994), in which the longest intervention lasted only four weeks. Thus, students in Peggie’s study (Clelland, 2006) required more time to learn how to produce effective written explanatory responses.
Discussion and Implications
Conducting research in classrooms appears straightforward and even simple in published articles on classroom research. We found just the opposite to be the case. Teachers were unprepared to implement a new instructional approach without more extensive opportunities to practice than we provided. Once the novelty of participating in a study wore off, students reverted to previous learning routines, and it took more time than in other studies before the effects of the self-explanation strategy were reflected in student performance on essay tests. Of course, not all obstacles can be anticipated when conducting classroom research. Nevertheless, researchers, particularly doctoral students, need to be prepared for problems like the ones we encountered.
We selected Graesser’s theory of comprehension (Graesser & Olde, 2003; Graesser, León, et al., 2002; Graesser, Swamer, et al., 1996) as a theoretical framework because his research highlights the importance of students’ asking themselves explanatory questions. Although there are important distinctions among the various comprehension theories (e.g., Kintsch, 1989; Van den Broek, Virtue, Everson, Tzeng, & Sung, 2002), most distinguish between understanding as remembering text information, a shallow level of comprehension, and understanding as learning from text, a deep level of comprehension. This distinction was another reason for drawing on Graesser’s theory. His research, as well as other psychologists’ and cognitive scientists’ research, is designed to examine whether effects they have obtained occur across different subjects and contexts. Their applied research is not designed to present and assess fully fleshed-out methods for classroom instruction. Thus, such theories are inherently limited as a resource for addressing motivational issues that may be central to sustaining student engagement over lengthy time periods.
Designing research that addresses student motivational issues, however, is still problematic. Concept Oriented Reading Instruction (CORI), for example, is based on the premise that hands-on activities and other types of complex and engaging instruction will motivate students to use specific learning strategies meant to improve reading comprehension (Guthrie, Wigfield, & Perencevich, 2004). Although Guthrie and his colleagues (Guthrie, et al., 2004) have provided numerous descriptions of the CORI framework for enhancing student engagement and motivation for reading, the time and money required to implement such an approach in secondary classrooms is probably prohibitive for many teachers and for doctoral students. The teachers we worked with were concerned about student motivation and interest, and they included a variety of science demonstrations and short videos during the course of the study. They also allowed students to work collaboratively in small groups to ask and answer questions. But the resources to purchase the wide variety of material needed to conduct the kinds of student-centered inquiry described in the research on CORI were simply not available. If doctoral students or other classroom researchers are not involved with funded research programs that finance the time and material it takes to teach strategies within a CORI framework (Guthrie, 2003), then they need to pinpoint other ways they can foster student engagement when they design instructional research. As noted by Wigfield and Tonks (2004), extrinsic rewards may not undermine intrinsic motivation if the extrinsic rewards inform students about how well they are doing on assigned tasks. Thus, point systems like the one Peggie employed might be useful as long as students do not view the points as only a method for controlling their behavior.
We are still frustrated that the science teachers we worked with experienced so much difficulty in asking higher-level questions and providing comprehension strategy instruction. The teachers Peggie worked with needed much more preparation before the study began. Their struggle, however, is further evidence that many students are not being taught comprehension strategies to help themselves learn from content area materials because teachers do not know how to implement such instruction in their classrooms (e.g., Kamil & Bernhardt, 2004; Sweet & Snow, 2003). Thus, we need either to intensify our staff-development efforts with content area teachers or to recruit more of them into graduate reading programs that will provide them with more in-depth preparation for teaching students strategies for learning content area material.
References
Bloom, B. (1956). Taxonomy of educational objectives, handbook 1: Cognitive domain. New York: David McKay.
Chi, M. T. H. (2000). Self-explaining expository texts: The dual processes of generating inferences and repairing mental models. In R. Glaser (Ed.), Advances in instructional psychology (Vol. 5). Mahwah, NJ: Erlbaum.
Chi, M. T. H., de Leeuw, N., Chiu, M. H., & LaVancher, C. (1994). Eliciting self-explanations improves understanding. Cognitive Science, 18, 439-477.
Chi, M. T. H., Bassok, M., Lewis, M. W., Reimann, P., & Glaser, R. (1989). Self-explanations: How students study and use examples in learning to solve problems. Cognitive Science, 13, 145-182.
Clelland, P. L. (2006). Improving comprehension of science content: Generating self-explanation questions and creating explanatory answers. Unpublished doctoral dissertation, Utah State University, Logan, Utah.
Duffy, G. G. (2002). The case for direct explanation of strategies. In C. C. Block, & M. Pressley (Eds.), Comprehension instruction: Research-based best practices (pp.28-41). New York, NY: The Guilford Press.
Graesser, A. C., León, J. A., & Otero, J. (2002). Introduction to the psychology of science text comprehension. In J. Otero, J. A. León, & A. C. Graesser (Eds.), The psychology of science text comprehension (pp. 1-15). Mahwah, NJ: Erlbaum.
Graesser, A. C., McNamara, D., & Louwerse, M. M., (2003). What do readers need to learn in order to process coherence relations in narrative and expository text? In A. P. Sweet & C. E. Snow (Eds.), Rethinking reading comprehension (pp. 82-98). New York: Guilford Press.
Graesser, A. C., & Olde, B. A. (2003). How does one know whether a person understands a device? The quality of the questions the person asks when the device breaks down. Journal of Educational Psychology, 95(3), 524-536.
Graesser, A. C., Person, N. K., & Hu, X. (2002). Improving comprehension through discourse processing. New Directions for Teaching and Learning, 89, 33-44.
Graesser, A. C., Swamer, S. S., Baggett, W. B., & Sell, M. A. (1996). New models of deep comprehension. In B. K. Britton & A. C. Graesser (Eds.), Models of understanding text (pp. 1-27). Mahwah, NJ: Erlbaum.
Guthrie, J. (2003). Concept-oriented reading instruction: Practices of teaching reading for understanding. In A. P. Sweet & C.E. Snow (Eds.), Rethinking reading comprehension (pp.115-140). New York: The Guilford Press.
Guthrie, J.T., Wigfield, A., & Perencevich, K. (2004). Motivating reading comprehension: Concept-oriented reading instruction. Mahwah, NJ: Erlbaum.
Kamil, M.L., & Bernhardt, E.B. (2004). The science of reading and the reading of science: Successes, failures, and promises in search for prerequisite reading skills. In W. Saul (Ed.), Crossing borders in literacy and science instruction: Perspectives on theory and practice (pp.123-139). Newark, DE: International Reading Association.
King, A., Staffieri, A., & Adelgais, A. (1998). Mutual peer tutoring: Effects of structuring tutorial interaction to scaffold peer learning. Journal of Educational Psychology, 90(1), 134-152.
King, A. (1989). Effects of self-questioning training on college students’ comprehension of lectures. Contemporary Educational Psychology, 14, 366-381.
King, A. (1990). Enhancing peer interaction and learning in the classroom through reciprocal questioning. American Educational Research Journal, 27(4), 664-687.
King, A. (1992). Comparison of self-questioning, summarizing, and notetaking-review as strategies for learning from lectures. American Educational Research Journal, 29(2), 303-323.
King, A. (1994). Guiding knowledge construction in the classroom: Effects of teaching children how to question and how to explain. American Educational Research Journal, 31(2), 338-368.
Kintsch, W. (1989). Learning from text. In L. B. Resnick (Ed.), Knowing, learning, and instruction (pp. 25-46). Hillsdale, NJ: Erlbaum.
Kintsch, W. (1998). Comprehension: A paradigm for cognition. New York: Cambridge University Press.
Kintsch, W., & Kintsch, E. (2005). Comprehension. In S.G. Paris & S. A. Stahl (Eds.), Children’s reading comprehension and assessment (pp. 71-92). Mahwah, NJ: Erlbaum.
McNamara, D. S., & Kintsch, W. (1996). Learning from texts: Effects of prior knowledge and text coherence. Discourse Processes, 22, 247-288.
Moje, E. B., Peek-Brown, D., Sutherland, L.M., Marx, R.W., Blumenfeld, P., & Krajcik, J. (2004). Explaining explanations: Developing scientific literacy in middle school project-based science reforms. In D. S. Strickland & D. E. Alvermann (Eds.), Bridging the literacy achievement gap grades 4-12 (pp. 239-247). NY: Teachers College Press.
National Reading Panel. (2000). Teaching children to read: An evidence based assessment of the scientific research literature and its implications for reading instruction. Washington, DC: National Institute of Child Health and Human Development.
Pressley, M., El-Dinary, P. B., Gaskins, I., Schuder, T., Bergman, J. L., Almasi, J., & Brown, R. (1992). Beyond direct explanation: Transactional instruction of reading comprehension strategies. The Elementary School Journal, 92(5), 513-555.
Singer, M., & Kintsch, W. (2002). Text retrieval: A theoretical exploration. Discourse Processes, 31, 27-59.
Sweet, A. P., & Snow, C. E. (Eds.). (2003). Rethinking reading comprehension. NY: Teachers College Press.
Van den Broek, P., Virtue, S., Everson, M. G., Tzeng, Y., & Sung, Y. (2002). Comprehension processes and the construction of a mental representation. In J. Otero, J. A. León, & A. C. Graesser (Eds.), The psychology of science text comprehension. Mahwah, NJ: Erlbaum.
Wigfield, A., & Tonks, S. (2004). The development of motivation for reading and how it is influenced by CORI. In J. T. Guthrie, A. Wigfield, & K. C. Perencevich (Eds.), Motivating reading comprehension (pp. 254-257). Mahwah, NJ: Erlbaum.