Citation
Identifying the relationship between feedback provided in computer-assisted instructional modules, science self-efficacy, and academic achievement

Material Information

Title:
Identifying the relationship between feedback provided in computer-assisted instructional modules, science self-efficacy, and academic achievement
Creator:
Mazingo, Diann Etsuko
Publication Date:
2006
Language:
English
Physical Description:
xv, 179 leaves ; 28 cm

Subjects

Subjects / Keywords:
Self-efficacy ( lcsh )
Computer-assisted instruction ( lcsh )
Academic achievement ( lcsh )
Feedback (Psychology) ( lcsh )
Academic achievement ( fast )
Computer-assisted instruction ( fast )
Feedback (Psychology) ( fast )
Self-efficacy ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 169-179).
General Note:
School of Education and Human Development
Statement of Responsibility:
by Diann Etsuko Mazingo.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
263685211 ( OCLC )
ocn263685211
Classification:
LD1193.E3 2006d M39 ( lcc )

Full Text
IDENTIFYING THE RELATIONSHIP BETWEEN FEEDBACK PROVIDED IN
COMPUTER-ASSISTED INSTRUCTIONAL MODULES, SCIENCE SELF-
EFFICACY, AND ACADEMIC ACHIEVEMENT
by
Diann Etsuko Mazingo
B.A., University of Colorado at Boulder, 1999
M.A., University of Colorado at Denver, 2001
A thesis submitted to the
University of Colorado at Denver and Health Sciences Center
in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Educational Leadership and Innovation
2006


© 2006 by Diann Etsuko Mazingo
All rights reserved.


This thesis for the Doctor of Philosophy
degree by
Diann Etsuko Mazingo
has been approved
Joanna Dunlap
Laura Goodwin
Rodney Muth


Mazingo, Diann E. (Ph.D., Educational Leadership and Innovation)
Determining the Relationship between Computer-Assisted Feedback, Self-Efficacy,
and Academic Achievement
Thesis directed by Associate Professor Joanna Dunlap
ABSTRACT
Feedback has been identified as a key variable in developing academic self-
efficacy. The types of feedback can vary from a traditional, objectivist approach that
focuses on minimizing learner errors to a more constructivist approach, focusing on
facilitating understanding. The influx of computer-based courses, whether online or
through a series of computer-assisted instruction (CAI) modules, requires that the
current research on effective feedback techniques in the classroom be extended to
computer environments in order to impact their instructional design. In this study,
exposure to different types of feedback during a chemistry CAI module was studied
in relation to science self-efficacy (SSE) and performance on an objective-driven
assessment (ODA) of the chemistry concepts covered in the unit. The quantitative
analysis consisted of two separate ANCOVAs on the dependent variables, using
pretest as the covariate and group as the fixed factor. No significant differences were
found for either variable between the three groups on adjusted posttest means for the
ODA and SSE measures (F(2, 106) = 1.311, p = 0.274 and F(2, 106) = 1.080,
p = 0.344, respectively). However, a mixed methods approach yielded valuable
qualitative insights into why only one overall quantitative effect was observed. These
findings are discussed in relation to the need to further refine the instruments and
methods used in order to more fully explore the possibility that type of feedback
might play a role in developing SSE, and consequently, improve academic
performance in science. Future research building on this study may reveal
significance that could impact instructional design practices for developing online and
computer-based instruction.
This abstract accurately represents the content of the candidate's thesis. I recommend
its publication.
Signed
Joanna Dunlap


DEDICATION PAGE
I would like to dedicate this dissertation to my husband. He tirelessly and lovingly
supported me through three degrees spanning eleven consecutive years of attending
college.


ACKNOWLEDGMENT
I would like to acknowledge the School of Education programs at the University of
Colorado on both the Boulder and Denver campuses for the multitude of ways they
offer full-time working K-12 teachers to continue their education. I would also
like to thank my Master's and Doctoral advisor, Joanna Dunlap. Her superior teaching
and careful mentoring over the last seven years have been essential to the completion
of both degrees. Finally, I have the utmost love and respect for my family, many of
whom have dedicated their lives to teaching, for encouraging me to follow their
example to pursue a career in teaching and to continue my own education.


TABLE OF CONTENTS
Figures .......................................................... xii
Tables ........................................................... xiv
CHAPTER
1. INTRODUCTION, OVERVIEW, AND REVIEW OF THE
LITERATURE .......................................................... 1
Conceptual Definitions ........................................ 4
Computer Assisted Instruction (CAI) ...................... 4
Feedback ................................................. 5
Social Cognitive Theory .................................. 5
Self-Efficacy ............................................ 5
Science Self-Efficacy (SSE) .............................. 5
Background and Significance ................................... 6
Research Problems ............................................. 9
Theoretical Framework ........................................ 12
A Review of Selected Literature .............................. 14
Self-Efficacy Defined and Explained .......................... 16
The Development of Academic Self-Efficacy ............... 18
Factors Affecting Academic Self-Efficacy ................ 21
Connecting Self-Efficacy Development to Feedback in CAI .... 24
Early Research Leading to Programmed Instruction ............. 25
Feedback as Reinforcement ............................... 25
Doubts Concerning Feedback's Ability to Promote
Understanding ................................................ 27
Bangert-Drowns: Response Certitude and Mindfully
Processed Feedback ...................................... 27
Feedback Approaches and Connections to Computer-Assisted
Instruction ............................................... 33
Classification and Research on the Effects of
Feedback Types and CAI ............................... 34
Relative Effects of Feedback Complexity on
Academic Achievement ................................. 41
Conclusion ................................................ 42
Structure of the Dissertation ............................. 42
2. METHODOLOGY ..................................................... 44
Study Participants ........................................ 47
Quantitative Analysis ..................................... 48
Design ............................................... 49
Data Collection and Analysis Procedures .............. 65
Limitations .......................................... 67
Qualitative Analysis ...................................... 68
Overall Approach and Rationale ....................... 71
Data Collection Methods .............................. 75
Data Management Procedures ........................... 76
Data Analysis Procedures ............................. 77
Summary ................................................... 78
3. A SUMMARY OF THE QUANTITATIVE ANALYSES AND
PRESENTATION OF THE FINDINGS .................................... 80
A Review of the Quantitative Study Design, Method, and
Hypotheses ................................................ 80
Analysis of Covariance ............................... 82
First Dependent Variable, Posttest Scores on an Objective-
Driven Assessment .......................................... 83
Testing the Assumptions of the ANCOVA Models for
the ODA Measure ....................................... 84
Second Dependent Variable, Posttest Scores on a Measure of
Science Self-Efficacy ...................................... 89
Testing the Assumptions of the ANCOVA Models for
the SSE Measure ....................................... 90
Overall Summary of the Quantitative Results ............. 94
4. QUALITATIVE ANALYSIS AND INTERPRETATION .......................... 96
Identification of Themes ................................... 97
Extraction of Significant Statements for Each Theme ...... 105
Developing Meanings from the Significant Statements ..... 108
Textual Description of Learners who Liked the CAI
Module Learning Experience ........................... 109
Textual Description of Learners who Disliked the CAI
Module Learning Experience ........................... 110
Triangulation Evidence from the Follow-up Interviews ...... 113
Selected Statements from the Interviews .............. 113
Discussion ................................................ 118
Conclusion ................................................ 121
5. SUMMARY OF THE RESEARCH FINDINGS AND
INTERPRETATIONS ................................................. 123
Summary of the Research Design and Findings ............... 123
Treatment ............................................ 125
Qualitative and Quantitative Data .................... 126
Connecting the Results to the Theoretical Framework ....... 129
Limitations and Design Flaws ................. 132
Analyzing the Non-Guessing Learners .......... 134
Design Recommendations for CAI ............... 135
Suggestions for Future Research .............. 138
APPENDIX
A. PARENT/GUARDIAN CONSENT FORM ................ 141
B. STUDENT CONSENT FORM ........................ 143
C. BUILDING CONSENT FOR STUDY SITE ............ 144
D. HUMAN SUBJECTS REVIEW COMMITTEE APPROVAL
DOCUMENTATION ................................ 145
E. OBJECTIVE-DRIVEN ASSESSMENT ................. 146
F. SCIENCE SELF-EFFICACY MEASURE ............... 150
G. JOURNAL 1 QUESTIONS ......................... 154
H. JOURNAL 2 QUESTIONS ......................... 157
I. JOURNAL 3 QUESTIONS ......................... 160
J. JOURNAL 4 QUESTIONS ......................... 163
K. INTERVIEW QUESTIONS ......................... 166
L. CHEMISTRY UNIT OBJECTIVES ................... 167
M. THREE-WEEK SYLLABUS ......................... 168
BIBLIOGRAPHY ......................................... 169


LIST OF FIGURES
1.1 The State of the Learner Receiving Feedback ......................... 28
2.1 Sample Question from the First CAI Module ........................... 53
2.2 Example of KOR Feedback ............................................. 54
2.3 Example of KCR Feedback ............................................. 54
2.4 Example of KCR+ Feedback ............................................ 55
2.5 Example of TC/RC Feedback, 1 of 4 ............................... 56
2.6 Example of TC/RC Feedback, 2 of 4 ............................... 57
2.7 Example of TC/RC Feedback, 3 of 4 ............................... 58
2.8 Example of TC/RC Feedback, 4 of 4 ............................... 59
2.9 Split-Half Reliability Plot for the ODA ............................. 64
3.1 Residual Plot for Group C, ODA Measure ............................ 85
3.2 Residual Plot for Group D, ODA Measure ............................ 85
3.3 Residual Plot for Group E, ODA Measure ............................ 85
3.4 Normal Quartile Plot for Group C, ODA Measure .................... 86
3.5 Normal Quartile Plot for Group D, ODA Measure .................... 87
3.6 Normal Quartile Plot for Group E, ODA Measure .................... 87
3.7 Correlation of the Pretest and Posttest Scores by Group ............. 88
3.8 Residual Plot for Group C, SSE Measure ............................ 91
3.9 Residual Plot for Group D, SSE Measure ............................ 91
3.10 Residual Plot for Group E, SSE Measure ............................ 91
3.11 Normal Quartile Plot for Group C, SSE Measure ....................... 92
3.12 Normal Quartile Plot for Group D, SSE Measure ....................... 92
3.13 Normal Quartile Plot for Group E, SSE Measure ....................... 92
3.14 Correlation of the Pretest and Posttest Scores by Group, SSE
Measure ....................................................... 93
4.1 Open-Ended Question Sample, Journal 1 ...................... 99
4.2 Specific Question Sample, Journal 3 ........................... 99


LIST OF TABLES
1.1 Example of KOR feedback ............................................. 36
1.2 Example of AUC feedback ............................................. 36
1.3 Example of KCR feedback ............................................. 36
1.4 Example of KCR+ feedback ............................................ 37
1.5 Example of TC feedback (cognitivist approach) ....................... 39
1.6 Example of RC feedback (cognitivist approach) ....................... 40
2.1 Demographic breakdown of the experimental groups .................... 48
2.2 Feedback provided according to response and group ................... 50
2.3 Sample questions from the objective-driven assessment ............... 60
2.4 Breakdown of items according to textbook objective .................. 62
2.5 A table of specifications for the ODA ............................... 63
2.6 Graphic depiction of the experimental design ........................ 67
2.7 Sample journal questions from the SSE measure ....................... 70
2.8 Sample journal questions about the CAI modules ...................... 71
3.1 Summary of unadjusted posttest means for the ODA .................... 84
3.2 Summary of the adjusted posttest means for the ODA .................. 84
3.3 ANCOVA summary table for the ODA .................................... 84
3.4 ANCOVA with group by pretest interaction ............................ 88
3.5 Summary of unadjusted posttest means for the SSE measure ............ 89
3.6 Summary of adjusted posttest means for the SSE measure .............. 89
3.7 ANCOVA summary table for the SSE measure ............................ 90
3.8 Group by SSE pretest interaction ................................. 94
4.1 Preference for CAI modules .......................................... 98
4.2 Seven commonly provided reasons for why participants expressed a
positive preference for using the CAI modules ..................... 100
4.3 Percent of learners admitting to guessing by group .................... 103
4.4 Categories for the learner guessed theme .............................. 104
4.5 Significant statements of positive comments ........................... 106
4.6 Significant statements of negative comments ........................... 106
4.7 Significant statements pertaining to guessing ........................ 106
4.8 Significant statements pertaining to not guessing ..................... 107
4.9 Significant statements pertaining to confidence, positive ............. 107
4.10 Significant statements pertaining to confidence, negative ............. 107
4.11 Significant statements of feedback not affecting confidence ....... 108
5.1 Summary of non-guessers' unadjusted posttest means
for the SSE ........................................................ 135
5.2 Summary of non-guessers' adjusted posttest means for the SSE ... 135
5.3 ANCOVA summary table for the SSE measure .......................... 135


CHAPTER 1
INTRODUCTION, OVERVIEW, AND REVIEW OF THE LITERATURE
Since the 1980s, the decrease in the percentage of college graduates with
science, math, engineering, and technology (SMET) majors has fueled a number of
educational initiatives directed at increasing these numbers (Seymour, 2002). These
proposed changes focused not only on increasing the overall number of SMET
degrees earned by undergraduate students in the United States but also on increasing
the representation of women and minorities among the SMET graduates. Sims (1992)
noted that the National Science Foundation and the National Institute of Health
allocated over two-billion dollars towards increasing the participation of women and
minorities in the sciences. These early programs had a positive effect on the number
of women and minorities entering higher education with a SMET major declared.
However, even with these deliberate interventions, the number of graduates from
these programs continued to decline, regardless of gender or race. Over the next two
decades, educational researchers continued to identify the underlying reasons for the
observed attrition of SMET graduates.
Seymour (2002) outlined the various research endeavors undertaken in the
1990s that strove to identify the reasons why so few high school graduates in the United
States continued on to complete a SMET degree. These reasons ranged from a lack of
quality SMET education, elementary through high school, to the traditional lecture
approach of SMET education in undergraduate programs. When Bandura's (1986)
social cognitive theory was applied to the problem, the construct of self-efficacy, or
an individual's perception about her or his capability to complete a given task,
highlighted the role that an individual's personal identity played in determining her or
his likelihood to pursue a SMET major. Thus, many of the proposed solutions to this
problem include both pedagogical and psychological recommendations for changes in
the classroom to increase the number of students who choose to enter an
undergraduate SMET program and continue on to graduate successfully (Seymour,
2002).
As a high school chemistry teacher, I have a personal interest in ensuring that
my students receive a high quality education that leaves them feeling empowered to
continue with their science education. I would even argue that it is my responsibility
to keep open (or force open) this door to a future in science and that, if I fail, I am
guilty of perpetuating the decades-old problem of disproportionately few SMET
graduates from United States higher education institutions. Thus, throughout my
career in the classroom, I have actively sought innovative ideas for increasing
students' interest and understanding in and about chemistry. To this end, I use many
of the best practices of science teaching recommended in Seymour's (2002) summary
of the multitude of activities and methods designed to improve both the access to and
the quality of SMET education. For example, I employ clearly stated learning
objectives for each unit of study, and I use assessments that facilitate students'
engaging in their own learning. Additionally, my efforts to stay abreast of the best
practices for teaching science have kept me on the cutting edge of educational
technology, so I am constantly looking for new ways to integrate technology into
learning and assessment processes.
Unfortunately, I continually have been unimpressed with the quality of
computer-assisted instructional (CAI) materials available for my content area. The
department where I teach recently reviewed many new textbooks from various
publishers. The problem was that, while each textbook was marketed as having an
interactive CD-ROM for the students, the learning experiences available on these
CDs lacked quality and depth and were only moderately engaging.
Notably deficient in these examples were learner-feedback prompts. In each software
title, the program used feedback for multiple-choice questions that was limited to
correct or incorrect. Occasionally, incorrect responses were followed by a page
reference for students to use to determine the correct response. In addition, some
programs explained the right answer after the learner selected correctly. This banal
approach to feedback does not encourage learners to engage in higher-order thinking
skills, even though chemistry is a highly complex subject with many interrelating
concepts and ideas.
My experiences in the classroom have taught me that appropriate feedback is
essential for establishing an engaging and successful learning environment. Further,
as a teacher I believe that many learners need positive, specific, and constructive
feedback to improve self-efficacy (i.e., gain confidence in their abilities as
chemistry students). These instincts and experiences combined to ignite my passion for
understanding how to design better CAI so that learners receive a more engaging
experience, fueled by the type of feedback that is successful in face-to-face
environments. Thus began my journey that led to this dissertation and the research
surrounding my initial interests in feedback, self-efficacy, and CAI.
Conceptual Definitions
I based the following conceptual definitions on those proposed by prominent
researchers as they laid the foundation of knowledge for the constructs and ideas
presented in this dissertation.
Computer Assisted Instruction (CAI)
Computer assisted instruction is defined as any computer-based learning
application that supplements a classroom environment. These applications can be
delivered via CD-ROM, the World Wide Web, or other electronic sources. Typically,
learners interact with the computer alone, without a teacher available to answer
questions.
Feedback
Feedback is defined as information presented to the learner after any input
with the purpose of shaping the perceptions of the learner (Sales, 1993). For example,
when the learner chooses a response in a multiple-choice style question, the computer
program automatically provides information to the learner that will somehow inform
him or her of the correctness of her or his response for the purpose of helping the
learner to better understand a particular problem or concept.
Social Cognitive Theory
Social cognitive theory is the theoretical framework initially proposed by Albert
Bandura. This theory hypothesizes that achievement depends on how a person's
behaviors, thoughts, beliefs, and environmental conditions interact with each other
(Bandura, 2001b; Schunk & Pajares, 2001).
Self-Efficacy
Self-efficacy is a person's beliefs about her or his ability to successfully
complete a task. These beliefs can impact how a person feels, thinks, motivates
herself or himself, and behaves (Bandura, 1994).
Science Self-Efficacy (SSE)
Self-efficacy, or an individual's beliefs about her or his ability to successfully
complete a task, is content-specific (Bandura, 1994; Bong, 1997). Thus, because this
study was performed using science students, science self-efficacy is the specific
construct of interest. Science self-efficacy is the specific set of beliefs that learners
hold regarding their abilities to successfully complete science-
related tasks.
Background and Significance
The evolution of classrooms to include more CAI materials raises many issues
about what constitutes best practices of teaching in this environment (Torrisi &
Davis, 2000). Modern textbooks are commonly marketed with attention to the type
and quality of CAI ancillary materials, which can be Web-based or stand-alone
applications running from a CD-ROM. However, this drive towards including more
instruction that is computer-based tends to ignore many of the established best
practices of face-to-face teaching in terms of the quality and quantity of feedback
provided to learners during instruction (Papanastasiou, Zembylas, & Vrasidas, 2003;
Steinweg, Williams, & Warren, 2006).
The full potential of CAI to offer individualized, engaging, and effective
learning experiences is rarely realized, particularly in how feedback can be used to
enhance the learning experiences and achievement outcomes. Multiple levels of
feedback can be programmed into CAI to enhance learner understanding and
performance, separated by increasing complexity, or how much and what type of
information is contained in feedback messages. The simplest forms, knowledge of
response (KOR) and knowledge of correct response (KCR), both emphasize the
correct response and do not provide further information to the learner about why the
response chosen is correct or incorrect. A slightly more complex form of feedback,
often termed KCR+, combines both KOR and KCR with additional elaborative
information on why the correct answer is appropriate. Some KCR+ strategies also
provide information about why a chosen answer is incorrect. All three of these
feedback levels are designed primarily to reinforce the correct answer and do not
necessarily challenge the learner to think independently to generate meaning and
understanding.
Because feedback in CAI is often limited to KOR, KCR, and KCR+, the
applications using these feedback styles do not directly facilitate how learners
increase their knowledge and understanding from the feedback provided.
Consequently, it is easy for learners to become disengaged. More complex forms of
feedback may include (a) topic contingent (TC), containing KCR and topic-specific
elaborative feedback; and (b) response contingent (RC), containing KCR and item-
specific elaborative feedback (Jang, Kim, & Baek, 2001). These levels of feedback
provide a form of coaching that requires more learner participation to process the feedback
mindfully to increase understanding instead of simply reinforcing the correct answer
(Jonassen, 1991).
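To make these distinctions concrete, the following minimal Python sketch shows how a single multiple-choice CAI item might select a feedback message at each level of complexity. It is purely illustrative: the class and field names (FeedbackLevel, Item, choice_remediation) are assumptions made for this sketch, not part of any software used in this study.

```python
from dataclasses import dataclass, field
from enum import Enum


class FeedbackLevel(Enum):
    KOR = "knowledge of response"          # correct/incorrect only
    KCR = "knowledge of correct response"  # reveals the correct answer
    KCR_PLUS = "KCR with elaboration"      # adds why the answer is correct
    TC = "topic contingent"                # adds topic-level review material
    RC = "response contingent"             # adds choice-specific remediation


@dataclass
class Item:
    stem: str                    # the question text
    answer: str                  # key of the correct choice, e.g., "B"
    elaboration: str             # why the correct answer is appropriate
    topic_review: str            # tutorial text for the underlying topic
    choice_remediation: dict = field(default_factory=dict)  # per-choice text

    def feedback(self, response: str, level: FeedbackLevel) -> str:
        correct = response == self.answer
        if level is FeedbackLevel.KOR:
            return "Correct." if correct else "Incorrect."
        kcr = f"The correct answer is {self.answer}."
        if level is FeedbackLevel.KCR:
            return kcr
        if level is FeedbackLevel.KCR_PLUS:
            return f"{kcr} {self.elaboration}"
        if level is FeedbackLevel.TC:
            return f"{kcr} Review: {self.topic_review}"
        # RC: address the specific choice the learner actually made
        return f"{kcr} {self.choice_remediation.get(response, self.elaboration)}"
```

The point of the sketch is only structural: KOR, KCR, and KCR+ converge on the correct answer, while TC and RC route the learner to additional material that must be processed mindfully to be of any use.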
Another factor that often contributes to the perception that CAI is
uninteresting and boring is the removal of a teacher. In traditional classroom settings,
the teacher influences learners' academic self-efficacy (ASE) levels. Self-efficacy is
the set of beliefs about one's capabilities to learn or perform at designated levels
(Bandura, 1994). Levels of self-efficacy contribute to a person's choices, effort,
persistence, resilience, and achievement (Bandura, 1997). Self-efficacy has been
studied in detail for a wide range of behaviors and situations, and ASE has been
identified as a key factor in academic success (Marsh, Byrne, & Shavelson, 1988;
Pajares & Miller, 1994; Schunk, 1982, 1984; Schunk & Pajares, 2001; Vrugt,
Langereis, & Hoogstraten, 1997).
Academic self-efficacy is developed in many ways, and in the realm of CAI,
mastery experience and feedback play major roles. Mastery experience is the most
dominant source of ASE (Pajares & Schunk, 2001b), and learners who have answered
questions correctly in the past have greater confidence in their ability to answer future
questions correctly.
Feedback can be used to inform the learner of goal progress. Further, feedback
has the potential to strengthen a learners self-efficacy while sustaining motivation
(Schunk & Pajares, 2001). Thus, well-designed CAI with meaningful feedback has
the potential to affect learners' self-efficacy by providing opportunities for mastery
experiences and motivating learners to further their individual understanding by
increasing confidence in their abilities.
Though the areas of feedback and self-efficacy are extensively documented,
what remains to be explored is how CAI feedback complexity affects levels of
academic self-efficacy and achievement. Academic achievement can be measured in
many ways. For example, a commonly used method of measuring achievement uses
learner performance on objective-driven assessments that test the learner's ability to
recall knowledge, solve problems, and apply old knowledge to new problems. Thus,
in an effort to create CAI that provides an engaging learning environment that
maximally increases academic achievement, it is necessary to explore further the
interrelatedness of feedback in CAI, self-efficacy beliefs, and achievement.
Research Problems
The primary interest of this research is to investigate how different levels of
feedback complexity in CAI affect both science self-efficacy and academic
achievement. In theory, both feedback and high levels of self-efficacy have been
linked to increased academic achievement. In addition, feedback and self-efficacy
have been shown to interact with each other within a learner's cognitive state.
However, the body of research concerning CAI feedback lacks a clear investigation
into how the three interact overall. Additionally, due to conflicting results, previous
research in the area of CAI feedback has not yielded any generalizable statements
about how feedback complexity and achievement are related. Finally, a deficiency
exists in the body of research due to the lack of investigations studying the effects of
TC and RC feedback.
I prepared for this research by conducting two pilot studies using similar
testing conditions. These studies helped me refine my research methods and forced
me to narrow the focus of my research questions. After the first pilot, I discovered
that I needed more than just numbers to understand the effects of the different
treatment groups. I determined that some sort of follow-up investigation with
participants was essential to explain the quantitative results. Thus, the second pilot
utilized a mixed-methods design in an attempt to understand the effects of the
different types of feedback on the learner. The combination of quantitative and
qualitative methods facilitated a richer analysis of the data. So, I decided that the
actual dissertation study should also follow a mixed-methods approach.
The overarching questions I address in this study are (a) How does feedback
in chemistry CAI affect students' levels of science self-efficacy? and (b) How does
feedback in science CAI affect student achievement on an objective-driven
assessment? I narrowed these questions for the sake of clarity, specificity, feasibility,
and importance to include: (a) Do different types of feedback in science CAI, namely
KOR, KCR, KCR+, topic contingent, and response contingent, affect learners' levels
of science self-efficacy? (b) Do different types of feedback in science CAI, namely
KOR, KCR, KCR+, topic contingent, and response contingent, affect learners' scores
on an objective-driven science assessment? (c) How do learners use different levels of
feedback provided in science CAI modules? and (d) How do different types of
feedback affect how confident a learner is in her or his ability to understand science?
Overview of Methodology
For this study, I used a mixed-methods approach, and I collected the
quantitative and qualitative data concurrently. For the quantitative approach, I used a
true-experimental design. I drew participants from students at the suburban high
school in southeast Denver where I worked at the time of the study. All of the
students were enrolled in general chemistry, but none of the dissertation participants
were currently enrolled in one of the classes that I taught. Participation in the study
was optional and required informed consent from both the student and her or his legal
guardian.
The data I collected for the quantitative portion of the study included two
measures, administered as a pretest and as a posttest. The first measure assessed the
participants' level of science self-efficacy using a Likert-type response format on an
established measure developed by Britner and Pajares (2001a). The second measure
addressed academic achievement, as measured using a multiple-choice, objective-
driven assessment of the chemistry concepts covered in the unit.
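For readers who want the quantitative analysis in concrete terms, the sketch below shows one conventional way to run the ANCOVA described in this study (posttest as the dependent variable, pretest as the covariate, and feedback group as the fixed factor) using the Python statsmodels library. The file and column names are illustrative assumptions, not the study's actual data files.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical layout: one row per participant, with the feedback group
# and the pretest/posttest scores for one measure (ODA or SSE).
df = pd.read_csv("scores.csv")  # columns assumed: group, pretest, posttest

# ANCOVA as a linear model: posttest adjusted for pretest, with group as
# the fixed factor; the F test for C(group) compares adjusted means.
ancova = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))

# Homogeneity-of-slopes check: the group-by-pretest interaction should be
# non-significant for the standard ANCOVA interpretation to hold.
slopes = smf.ols("posttest ~ pretest * C(group)", data=df).fit()
print(sm.stats.anova_lm(slopes, typ=2))
```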
At the same time, I collected qualitative data in the form of journal responses
and follow-up interviews. All learners participating in the study completed each
journal response. Based on these responses and the quantitative data, I purposefully
selected several students for a follow-up interview. The purpose of the interview was
to triangulate evidence for the journal responses as well as provide an opportunity to
ask more in-depth questions about the participants' learning experiences and what
contributes to developing their science self-efficacy.
It is difficult to quantify the construct of science self-efficacy. Further,
because of my experiences with high school science students, I argue that it is also
very difficult to describe how different learners use feedback presented in CAI. Thus,
by collecting data both quantitatively and qualitatively, I was able to develop a better
understanding of the research phenomenon. By employing a concurrent triangulation
strategy (Creswell, 2003), I was able to confirm, cross-validate, and corroborate
findings within a single study. Also, it allowed me to gain a broader perspective of
how feedback is used by learners in CAI and how it may influence learners' levels of
self-efficacy and achievement.
Theoretical Framework
The three main concepts that this study attempts to interrelate are the
development of ASE, feedback levels in CAI, and academic achievement. The body
of literature for each of these topics individually is extremely extensive, but a logical
connection exists between them via Albert Bandura's social cognitive theory (1986)
and the five-stage model of feedback processing proposed by Bangert-Drowns, Kulik,
Kulik, and Morgan (1991).
Bandura's Social Cognitive Theory (1986) details how individuals' self-
efficacy (i.e., beliefs about their ability to complete tasks) can influence their control
and management of learning. Of the various sources of self-efficacy (i.e., mastery
experiences, vicarious experiences, verbal persuasion, and individuals' psychological
and emotional states), mastery experiences and verbal persuasion are two facets that
feedback within CAI has the potential to influence. Computer-assisted instruction can
provide a potentially infinite number of questions to promote the positive effects of
mastery experiences through learners engaging in CAI that offers multiple
opportunities for success. Bandura's model also specifically targets verbal persuasion
as a source of self-efficacy beliefs, and well-programmed CAI can deliver feedback as
elaborate as a human voice giving encouragement to the learners to help them avoid
focusing on personal deficiencies.
The Bangert-Drowns et al. (1991) model focuses on the mindful processing of
feedback by the learner. They posited that learners not only respond to questions with
a particular level of certitude, but also their mindful evaluation of the feedback
provided to the response given can affect several of the learner's states, namely self-
efficacy, interests, and goals. These changes to the learner's states can affect further
learning experiences by altering the initial states of the learners in subsequent, similar
environments.
Additionally, according to these established theories, a learner's level of self-
efficacy for a given task can be directly affected by the evaluation of feedback
provided to him or her in a learning environment. Moreover, the learners ability to
evaluate her or his response depends on the feedback provided. It is reasonable, then,
to expect that this feedback must also be of a quality that can encourage the reflective
practices necessary for the learners evaluation of her or his response to promote
positive gains to the various states.
Finally, multiple connections between ASE and academic success have been
widely researched throughout the last two decades (e.g., Pintrich & DeGroot, 1990;
Schunk, 1991). Studies have shown that a student's beliefs about her or his ability to
complete specific academic tasks directly affect her or his potential for realizing
academic successes (Bong, 2002, 2004; Pajares & Miller, 1995; Pajares & Schunk,
2001a).
A Review of Selected Literature
A vast body of research focuses on feedback in instruction, computer-assisted
instruction, and self-efficacy. Notable educational psychologists such as Skinner,
Bangert-Drowns, Bandura, Pajares, and Schunk have contributed decades of
quantitative and qualitative research aimed at better understanding these constructs.
Because this dissertation focuses on the interaction between feedback in computer-
assisted instruction and learners' academic self-efficacy, the following review of the
literature is an attempt to narrow the emphasis of these wide-ranging topics to the
most relevant information related to this dissertation.
I begin with a brief description of Albert Banduras (1986) social cognitive
theory and the construct of self-efficacy, how it is developed, and the importance of
self-efficacy for academic success. Many reviews of the literature focused on
academic self-efficacy research exist (e.g., Albion, 2001; Bandura, 1994; Britner &
Pajares, 2001a; Gecas, 1989; Maddux, Norton, & Stoltenberg, 1986); therefore, I only
highlight the essential conclusions of various individual studies and reference the
existing meta-analyses of the larger body of research.
Next, I provide a brief history of the evolution of feedback research.
Following this summary, I present a more thorough discussion of the feedback
processing model proposed by Bangert-Drowns et al. (1991) and its relationship to
the learners cognitive state. Then, I discuss the specific connection to computer-
assisted instruction and the types of feedback provided in these self-regulated
learning environments. Past research has typically focused on the relative effects of
different types of feedback organized according to the complexity of the feedback
response on academic achievement. I provide an example of each of the six types of
feedback (knowledge of response, answer until correct, knowledge of correct
response, knowledge of correct response with elaboration, topic contingent, and
response contingent).
Finally, the conclusion of this review guides the reader through the overall
progression of thought leading to my specific interest in connecting feedback to self-
efficacy in computer-assisted instruction. I end by identifying the links between these
three topics and how they relate to the research questions and define the study's
design.
Self-Efficacy Defined and Explained
Bandura's (1986) social cognitive theory posited that humans have "the
capacity to exercise control over the nature and quality of one's life" (Bandura,
2001b, p. 1). This theory is grounded in the ability of one to express personal agency;
therefore, Bandura posits that one must consider people's beliefs about their own
capabilities when investigating differences between those people. Thus, self-efficacy,
or the set of beliefs about one's capabilities to learn or perform at designated levels,
plays a pivotal role in social cognitive theory (Bandura, 1994). These beliefs directly
affect a person's ability to persevere and ultimately succeed at a given task.
Many factors influence the development of self-efficacy. First, personal
mastery experiences positively affect an individual's self-efficacy because previous
success at a given task raises the individual's perception of her or his ability to
accomplish the task again. Even if the two tasks are not directly related, it is possible
that success at something the individual determined was difficult would encourage
the individual to tackle other perceived difficult tasks. Second, vicarious experiences
play a role in the development of self-efficacy. If someone whom an individual
identifies as being similar to herself or himself is successful at a given task, then the
individual is more likely to determine that he or she has a likelihood of success as
well. Third, social persuasion in the form of verbal or written communication
increases an individual's self-efficacy, especially if the persuasion is realistic to the
individual's abilities and talents. Finally, somatic and emotional states, or how
emotional and physical reactions to certain activities are interpreted, can positively or
negatively influence self-efficacy perception.
Levels of self-efficacy contribute to a person's choices, effort, persistence,
resilience, and achievement (Bandura, 1997). Numerous examples show how people
in the face of rejection continue trying and eventually succeed. For example, Thomas
Edison failed 1000 times before successfully inventing the light bulb. Another
example of perseverance is that football coaches Tom Landry, Chuck Noll, Bill
Walsh, and Jimmy Johnson accounted for 11 of the 19 Super Bowl victories from
1974 to 1993. They also share the distinction of having the worst records of first-
season head coaches in NFL history: They did not win a single game (Pajares, 2001).
These are just a couple of testaments to the idea that people with high self-efficacy
will choose to continue exerting effort towards a particular achievement and
ultimately succeed because of their persistence and resilience.
The Development of Academic Self-Efficacy
While self-efficacy has been studied in detail for a wide range of behaviors
and situations, it is especially important when exploring academic success. The
development of academic self-efficacy (ASE) is complex in the sense that many
different people and situations influence its development. Academic self-efficacy is
first developed during childhood. The first environment that a child encounters that
affects the development of her or his self-efficacy is the home (Schunk & Pajares,
2001). Familial influences are responsible for a wide range of possible self-efficacy
effects. Households that encourage a childs curiosity through parental interaction and
supplemental materials accelerate the childs development of positive self-efficacy
for various tasks (Meece, 1997). Additionally, when parents provide a wide range of
mastery experiences, the child is more efficacious than other children who did not
receive the same type of varied experiences (Bandura, 1997).
The Role of Parents
Parents play an important role in providing vicarious experiences by modeling
coping strategies and persistence for their child. A child who is witness to the
communication and troubleshooting processes that are used to solve various
household troubles learns vicariously how to approach other problem-solving
ventures on her or his own (Bandura, Barbaranelli, Caprara, & Pastorelli, 2001).
Finally, as sources of persuasive information, parents can steer their child positively
towards higher self-efficacy. For example, if parents encourage their child to meet
many different challenges by guiding him or her towards multiple and varied
activities, then they will increase the child's self-efficacy towards approaching
different tasks (Schunk & Pajares, 2001).
The Role of Peers
Outside of the home, peers play an important role in the development of
children's self-efficacy (Schunk, 1987). First, self-efficacy is greatly impacted by the
vicarious experiences of a child's peers. When a child witnesses a peer succeed or fail
at a particular activity, then the other child's success or failure influences the child's
individual perceptions of her or his likelihood of success or failure.
Peers are also responsible for the probability of academic success for an
individual (Steinberg, Brown, & Dombusch, 1996). In a study monitoring students
from high school entrance through their senior year, researchers observed that
students are greatly influenced in their academic success by the peers with whom they
associate. Upon entering high school, if one student with similar grades to a second
student chooses to associate with academically motivated peers, then he or she will
have higher academic success than a student who chooses a less academically
motivated crowd.
The Role of Schools
The role of the school in the development of ASE changes from early
childhood through adolescence in a manner inversely related to the role of peers. The
school's ability to increase self-efficacy declines throughout this time, most likely due
to less individualized attention, more norm-referenced tests, greater competition, and
the impact of school transitions (Pintrich & Schunk, 1996). When students are in
elementary school, teachers typically have 22 children each. In middle school, these
numbers increase to around 80 children per teacher. During high school, each teacher
has around 140 students. Consequently, as a person progresses through the traditional
school system, her or his chance for individual attention from a teacher decreases as
the student-teacher ratio increases. Less personalized time with the teacher negatively
affects the development of academic self-efficacy because teachers provide verbal
persuasion that may affect the individual student's level of ASE (Hattie, 2002).
Further, as students advance through traditional schooling, they are exposed to more
and more norm-referenced tests. The comparison of an individual to peers can have a
negative effect on self-efficacy development if the student scores below average on
the various measures. As the number of students in the classes and schools increases,
so does the amount of competition each student must face. A greater probability of
failure is likely when one student is compared to another, which may lead to the diminishment or
underdevelopment of academic self-efficacy (Schunk & Pajares, 2001). Finally, the
process of moving from the small, safe environment of an elementary classroom to
the large bustle of high school creates uncertainty. Both the environment and the
number of peers that a student knows change; and in high school, students are forced
to change their expectations for assessment as well as navigate a highly expanded
social group. Thus, when reevaluating their academic abilities, many students
reduce their personal expectations given their new surroundings (Harter, 1996).
Factors Affecting Academic Self-Efficacy
Another area of focus for ASE research investigates the different factors that
affect levels of ASE. Most of this research appears to be at the post-secondary level
and often focuses on how different instructional strategies influence self-efficacy. A
study at Indiana-Purdue University at Fort Wayne investigated the effects of a
communication designed specifically to enhance the self-efficacy of introductory
psychology students (Jackson, 2002). By email, the instructor provided students with
efficacy-enhancing messages or neutral replies to student inquiries and monitored the
effect of these communications on test performance. Self-efficacy beliefs were both
significantly related to exam scores and significantly affected by the efficacy-
enhancing communication. Another study at the collegiate level analyzed the effects
of reciprocal peer tutoring (RPT) on self-efficacy and exam performance (Griffin &
Griffin, 1997). While previous research indicated that RPT positively influences
achievement and reduces participants' levels of stress and anxiety, the Griffin and
Griffin study showed no significant differences between RPT and non-RPT group
performances on academic measures or academic self-efficacy.
Self-Efficacy as a Context-Specific Construct
Self-efficacy is known as a context-specific construct (Kiamanesh, Hejazi, &
Esfahani, 2004; Zimmerman, 1995). Numerous studies have investigated the
differences in ASE within specific content areas (Bong, 2002; Joo, Bong, & Choi,
2000; Marsh, 1992; Marsh, Walker, & Debus, 1991; Pajares & Miller, 1994, 1995,
These studies suggest not only that the predictive abilities of ASE measures
increase when the measure focuses on one content area and performance within that
area, but also that the self-efficacy-outcome links are stronger within the
same domain than across different domains (Joo et al., 2000).
Bong (2002, 2004) took this facet of ASE measurement a step further and
investigated three different levels of specificity in two different subjects to analyze
any cross-domain interactions. This study allowed Bong to add additional support to
the argument for specificity within a context domain, and it allowed her to test
whether self-efficacy actually demonstrates stronger relationships with performance
measures in the same subject area than with performance measures in a different area.
Bong concluded that self-efficacy perceptions in a specific school subject were
content specific to achievement. In other words, English self-efficacy predicted only
English achievements, and math self-efficacy predicted only math achievements.
Cross-domain predictions were weak and not statistically significant.
Relationship Between Levels of ASE and Academic Achievement
Finally, ASE has a direct influence on levels of academic achievement.
Linnenbrink and Pintrich (2002) stated, "Experimental and correlational research in
schools suggests that self-efficacy is positively related to a host of positive outcomes
of schooling such as choice, persistence, cognitive engagement, use of self-regulatory
strategies, and actual achievement" (p. 315). Further, low academic self-efficacy has
been connected to higher incidences of academic cheating, especially among high-
achieving students (Finn & Frone, 2004).
Numerous studies investigate the correlation between ASE and exam
performance. House (2000a, 2000b) focused his research on how academic
background and self-beliefs can serve as predictors for performance in science,
engineering, mathematics, and health science majors. He found that self-beliefs
accounted for 20% of the variance in students' cumulative grade point averages.
Research on self-regulated learning is also closely tied to academic self-
efficacy and suggests that students with high efficacy are more apt to be successful in
self-regulated learning environments (Miller, 2000; Pajares, 2002; Zimmerman,
2002). This area of research also connects to differences in gender and academic self-
efficacy because, in general, girls have more goal-setting and planning strategies,
keep records, and self-monitor more frequently than boys, lending them a higher self-
efficacy for those tasks (Pajares, 2002). Research on the malleability of self-efficacy
beliefs and grade goals as predictors of exam performance (Vrugt et al., 1997; Wood
& Locke, 1987) continues to confirm other bodies of research that positively correlate
levels of self-efficacy to levels of achievement.
Connecting Self-Efficacy Development to Feedback in CAI
While many factors influence the development of self-efficacy, because CAI
modules are generally completed in isolation from other learners, most of the self-
efficacy changes in an individual result from mastery experiences. Mastery
experiences, in the form of question practice, integrate the use of feedback to the
learner. Another possible source of self-efficacy enhancement in CAI resides in the
social persuasion influences that come from efficacy-enhancing statements and praise
for correct answers. Therefore, even though CAI removes the human teacher from the
learning environment, it may still be possible to effect changes to an individual
learner's level of self-efficacy. Computer-assisted instruction has its roots in the
theories and practices of programmed instruction. Thus, to understand how feedback
in CAI is structured, it is necessary to review the history of programmed instruction
and its behaviorist connections in psychological research.
Early Research Leading to Programmed Instruction
Thorndike's (1933) Law of Effect has often been cited as one of the most
influential contributions to early behavioral and academic research (e.g., Herrnstein,
1970; Kulhavy & Wagner, 1993; Mory, 2004). Thorndike was one of the first
researchers to recognize the interaction of biology with learned behavior. The
foundation of his law lies in a Darwinian perspective that the connections of neural
synapses are strengthened when a behavior results in a positive reward
while these same connections are weakened when behaviors are punished.
This early biological approach to learned behavior became widely accepted as
a foundational premise of psychology and education, as evidenced by a quote in a
letter from B. F. Skinner to Thorndike in 1939, cited in Cummings (1999),
apologizing for not acknowledging Thorndike in the publication of The Behavior of
Organisms: "I seem to have identified your view with the modern psychological view
taken as a whole" (p. 429). Thorndike's research on instrumental conditioning, or
providing positive and negative feedback to elicit a desired response, fits neatly into a
Skinnerian perspective on behavior modification and learning theory (Salamone &
Correa, 2002).
Feedback as Reinforcement
Many of B. F. Skinner's (1960) contributions to modern psychology were
encouraged by Thorndike's (1933) Law of Effect. These contributions eventually
paved the way for the founding principles of programmed instruction. Programmed
instruction originated as a series of predetermined linear steps for the learner to
progress through for the purpose of learning a particular task or concept (Mory,
2004). Feedback's primary purpose within the programmed instruction context is to
reinforce answers. Skinner's work with rats and pigeons gave evidence that animals
learn behaviors when exposed to various stimuli to elicit a desired response. If that
desired response was given, then a positive reinforcement (e.g., a food treat) was
awarded (Gilbert & Gilbert, 1991). While certain researchers criticized Skinner for
attempting to connect his work with animals to humans, further research showed that
this behaviorist approach has merits for influencing certain behaviors and types of
learning (Mory, 2004).
By the mid-1970s, various researchers began to express doubts as to the
efficacy of feedback as an appropriate and effective reinforcer of correct responses
(Kulhavy, 1977). In his review of the literature on feedback in programmed
instruction, Kulhavy defined feedback as any number of ways used to inform a
learner of the correctness of her or his response. This comprehensive review of the
literature regarding programmed instruction found no significant and repeatable
evidence to suggest that increasing feedback complexity results in corresponding
increases in learning. Further, Kulhavy and Anderson (1972) concluded that feedback
does not act as a reinforcer based on evidence of immediate and delayed feedback
comparisons.
Doubts Concerning Feedback's Ability to Promote Understanding
With previous claims as to the ability of feedback as a reinforcer in
behaviorist approaches to education refuted (Kulhavy, 1977), Kulhavy and Stock
(1989) sought to understand the model of how feedback processing occurs within an
individual's mind, in hopes of gaining a better understanding of why the results from
so many studies conflicted. Presearch availability, or the ability for a learner to find
the answer to a given question without processing the information provided (e.g.,
copying the answers from the back of the book), was blamed for many of the
conflicting results on the efficacy of feedback to serve as a positive reinforcer
(Kulhavy, 1977; Mory, 2004). Bangert-Drowns et al. (1991) furthered the
investigation into the confounding results of previous feedback research by
introducing the idea that feedback is most effective in promoting learning if it is
provided in a context that encourages the learner to mindfully process the feedback
information.
Bangert-Drowns: Response Certitude and Mindfully Processed Feedback
The five-stage model of learning posited by Bangert-Drowns et al. (1991)
acknowledges the importance of mindfully processing feedback to effect a change in
the learner's cognitive state (see Figure 1.1). To describe each of the stages, a
learner's thought processes and responses to the feedback are depicted in terms of his
behaviors and actions. In the first stage, the authors acknowledge that the learner's
initial state, in terms of his previous experiences, knowledge, individual goals, and
self-efficacy, sets the tone for whether or not feedback is likely to positively affect his
cognitive state in the form of increased understanding. The initial state also
acknowledges that if the learner has a certain amount of apathy for the type of
instruction, then he may not even attempt to mindfully process the feedback, from
either boredom or general disinterest.
[Figure 1.1 depicts a five-stage cycle surrounding the learner's cognitive state:
1. Initial state (experience affected by prior knowledge, interests, goals, and
self-efficacy); 2. Search and retrieval strategies (information stored in a rich
context of elaboration is easier to locate); 3. Response (degree of certitude
affects expectancy); 4. Evaluation (depends on expectancy and the nature of the
feedback); and 5. Adjustment (error correction affects relevant knowledge,
interests, goals, and self-efficacy).]
Figure 1.1 The State of the Learner Receiving Feedback
Based on Bangert-Drowns et al. (1991; from Dempsey, Driscoll, &
Swindell, 1993). From Interactive Instruction and Feedback (p. 40), by
J. V. Dempsey and G. C. Sales (Eds.), (1993), Englewood Cliffs, NJ:
Educational Technology.
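As a reading aid only, the sketch below restates the model's central claim in code: feedback adjusts the learner's cognitive state only when it is mindfully processed, and surprising outcomes invite the largest adjustments. The class, fields, and numeric weights are arbitrary illustrative assumptions, not part of the Bangert-Drowns et al. (1991) model.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class LearnerState:
    # Stage 1 (initial state): what the learner brings to the next item.
    knowledge: float      # strength of the relevant retrieval pathway, 0..1
    self_efficacy: float  # confidence in answering this kind of question, 0..1


def evaluate_feedback(state: LearnerState, certitude: float,
                      correct: bool, mindful: bool) -> LearnerState:
    """Stages 3-5: respond with some certitude, evaluate the feedback,
    and adjust the state; unprocessed feedback leaves the state unchanged."""
    if not mindful:
        return state  # no evaluation, hence no adjustment
    # Surprise is greatest when certitude and outcome disagree (e.g., a
    # low-confidence answer turns out to be correct); weights are arbitrary.
    surprise = (1.0 - certitude) if correct else certitude
    return replace(
        state,
        knowledge=min(1.0, state.knowledge + 0.2 * surprise),
        self_efficacy=max(0.0, min(1.0, state.self_efficacy
                                   + (0.1 if correct else -0.1))),
    )
```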
Assuming that a learner advances to the second stage, his search and retrieval
strategies are activated by the question posed. How well he is able to access
information related to the question depends on many factors; however, the model
assumes that information that has been previously stored within elaborate contexts is
recalled more easily during this stage than information that was not stored in
elaborate contexts. For example, if a learner had previously learned the order of the planets from the sun using only a mnemonic (My-Very-Educated-Mother-Just-Served-Us-Nine-Pizzas to recall Mercury-Venus-Earth-Mars-Jupiter-Saturn-Uranus-Neptune-Pluto), then he would probably be able to retrieve the correct order. Nevertheless, he may not have access to more elaborate details such as the planets' relative sizes and distances from the sun.
However, if the mnemonic were combined with an activity that involved
building a scale model of the solar system (an activity which, when done correctly,
involves objects ranging in size from a pea to a beach ball, and a large parking lot to
simulate the sizes and distances), then he would be more likely to be able to retrieve
information with more details intact.
The learner's response to the question constitutes the third stage in this feedback-processing model. At this point, the learner's level of certitude about his response plays an important role in how his cognitive state is affected. If he is very
certain that his response is correct, then he has a preconceived notion as to what type
of feedback he will receive as a result of answering the question. After giving his
response, he is provided with feedback as to the correctness of his response.
In the final stages, evaluation and adjustment, a learner's response certitude can affect his new initial state for future questions. These final two stages depend on how the learner responds to the feedback provided. If a learner responds to the question with a high degree of certitude and has his answer validated as correct by the feedback, then two outcomes are possible. If the feedback is mindfully processed and it agrees with the learner's expectations, then this match between expectation and actual outcome strengthens the retrieval pathway used to determine the initial response (Kulhavy, 1977; Kulhavy & Stock, 1989). This argument is reminiscent of the early theories of Thorndike and his Law of Effect (1933) in that a reward for a correct response acts as a positive reinforcer of the learned behavior. Also possible is an increase in the learner's self-efficacy for answering that type of question. However, even if the retrieval pathway and the learner's self-efficacy are strengthened, there is no net gain in actual knowledge. In contrast,
when feedback validates a correct response but the learner fails to process the
feedback information meaningfully, he fails to use the feedback in any way to
strengthen his future ability to retrieve similar information. Thus, there is no net gain
from the feedback in the form of strengthening the retrieval pathway or by increasing
actual knowledge.
If the learner's response is correct, but he was only marginally certain of the answer's correctness, there are again two possible outcomes depending on how the
feedback is processed. A response in which the learner has little confidence may prompt mindful feedback processing once he learns that he was, in fact, correct. This mindful processing may lead to a greater understanding
and overall net knowledge gain. Additionally, the learner may benefit from an
increase in self-efficacy because of this mastery experience. On the other hand, the
learner may not process the feedback mindfully. Unless the learner is actually interested in gaining the knowledge, he is not likely to devote energy to mindfully processing the feedback about a correct response for which he did not have a high degree of certitude. However, he may still benefit from a gain in self-efficacy
due to the mastery experience of getting a correct answer.
In contrast, when the learner's response is incorrect, he encounters feedback that could be discouraging and could inhibit his willingness to mindfully process the
feedback with the ultimate goal of increasing knowledge. In the situation where the
learner has a high degree of certitude for the correctness of his response, the learner is
forced to realize that he, though highly confident in his response, was actually
incorrect. The cognitive dissonance from this scenario can result in extremely
meaningful reflection, assuming the feedback is mindfully processed. However, the
feedback may not be mindfully processed in this scenario if the learner is inhibited by
frustration or anger after learning that the answer he was highly confident about was,
in fact, incorrect. Thus, no net gain in knowledge may result, and it is even possible
that the learner loses self-efficacy for answering similar questions.
Finally, the learner may not have much confidence in the correctness of his answer; thus, learning that he was incorrect does not cause any true cognitive dissonance. The learner's willingness to mindfully process the incorrect feedback for
a low certitude response depends strongly on his interest in the question and in his
desire to gain understanding and increase his knowledge. A genuinely interested
learner may approach feedback about an uncertain response with the intent to better
understand the gap in his knowledge and strive to use the feedback to fill the
preexisting hole in his understanding. When approached mindfully, an uncertain,
incorrect response can also result in highly beneficial reflective practices, thereby increasing the individual's understanding and self-efficacy to answer similar
questions correctly in the future. In contrast, if the learner has no stake in gaining
understanding and knowledge related to the question and feedback, he is not likely to
exert the mental energy necessary to mindfully process the feedback thoroughly.
Thus, the feedback has little or no effect on the learner's cognitive state.
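The outcomes just described form a three-way contingency: whether the response was correct, whether certitude was high, and whether the feedback was mindfully processed. Restated compactly as a lookup table (my own summary of the model in illustrative Python, not the authors' notation):

```python
# Summary of the certitude-by-correctness outcomes discussed above,
# paraphrased from Bangert-Drowns et al. (1991) as described in the text.
# Keys: (answer_correct, high_certitude, mindfully_processed).
OUTCOMES = {
    (True,  True,  True):  "retrieval pathway and self-efficacy strengthened; no new knowledge",
    (True,  True,  False): "no net gain from the feedback",
    (True,  False, True):  "net knowledge gain; self-efficacy may rise (mastery experience)",
    (True,  False, False): "little gain beyond the mastery experience itself",
    (False, True,  True):  "cognitive dissonance can prompt meaningful reflection",
    (False, True,  False): "frustration or anger blocks processing; self-efficacy may drop",
    (False, False, True):  "reflection can fill the preexisting knowledge gap",
    (False, False, False): "little or no effect on the learner's cognitive state",
}

# Example lookup: a highly confident learner whose wrong answer is mindfully processed.
print(OUTCOMES[(False, True, True)])
```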
The cyclical nature of the Bangert-Drowns et al. (1991) feedback processing
model acknowledges that the development of knowledge and understanding is not
only a multi-faceted process, but also affects future learning interactions. How a
learner responds to a particular question, and how he or she evaluates and adjusts that understanding based on certitude and correctness, in turn alters his or her initial state for future questions. Thus, it is essential to encourage learners to use any feedback provided in a meaningful way and to facilitate reflective practice during the evaluation and adjustment stages. Only by mindfully processing feedback
is it possible to increase understanding and gain knowledge. Furthermore, self-
efficacy may be influenced as a result of the feedback provided. Because self-efficacy
and academic performance are inextricably linked, it is also beneficial to the learner
to have opportunities to enhance her or his individual confidence for answering
future, similar questions correctly (Pajares, 1996; Schunk, 1991; Schunk & Pajares,
2001; Schunk & Swartz, 1993; Walker, Greene, & Mansell, 2006).
Feedback Approaches and Connections to Computer-Assisted Instruction
Feedback is defined as information presented to the learner after any input
with the purpose of shaping the perceptions of the learner (Sales, 1993). This
definition closely resembles a behaviorist or programmed instructional approach to
the purpose of feedback as a reinforcement of a desired response (Mory, 2004). The
cognitivist definition emphasizes more than simple reinforcement of correct answers
in that the purpose of feedback is to act as a source of information designed
to provide insight and understanding about the question posed (Narciss, 2002).
These two approaches to defining feedback (i.e., behaviorist and cognitivist
theories) have driven the research surrounding feedback in computer-assisted
instruction (CAI). The focus of the cognitivist approach is on the information-
processing connection between feedback and learners. Because this type of feedback
must provide a source of information about the question instead of only identifying
the correct response, it requires more knowledge and effort on the part of the CAI
developer.
Classification and Research on the Effects of Feedback Types in CAI
The relative effects on academic achievement of different types of immediate feedback interventions in CAI, classified according to the amount of information each provides, are a commonly investigated topic in educational research. From the simplest level, which contains the least information, to the most complex, feedback research focuses on (a) knowledge-of-response
(KOR), (b) answer-until-correct (AUC), (c) knowledge-of-correct-response (KCR),
(d) knowledge-of-correct-response plus elaboration (KCR+), (e) topic-contingent
(TC), and (f) response-contingent (RC) (summarized from the works of Catania,
1999; Clariana, 2001; Clark & Dwyer, 1998; Gordijn & Nijhof, 2002; Mason &
Bruning, 1999).
Feedback from the Behaviorist Perspective
The types of feedback that are most closely related to the behaviorist approach
for the function of feedback are knowledge-of-response (KOR), answer-until-correct
(AUC), knowledge-of-correct-response (KCR), and knowledge-of-correct-response
plus elaboration (KCR+). These various levels of CAI feedback are often classified
together because they use very straightforward feedback prompts to inform the
learner of her or his accuracy after answering a question. The simplest of these feedback types is KOR, in which learners are provided with prompts such as "correct" or "incorrect" immediately after answering a question. Knowledge-of-response is
sometimes combined with AUC and allows the learner to select additional choices
until he or she answers correctly. Knowledge-of-correct-response feedback provides
the learner with the identity of the correct response immediately after he or she inputs
an answer, whether correct or incorrect, without allowing him or her to try again.
Knowledge-of-correct-response plus elaboration feedback includes additional
information for the learner to process, which often takes the form of a hint to help
guide him or her to the correct answer and includes AUC directions. Tables 1.1-1.4
compare and contrast these four types of feedback as to how the feedback is presented
to the learner and as to how many chances the learner has to answer the question
correctly.
Table 1.1 Example of KOR feedback
The learner has one chance to get the question correct.
Question: What is the chemical formula for the ionic compound made from the elements oxygen and aluminum?
Choice  Response text  Feedback displayed when the response is selected
A       OAl            Incorrect
B       Al2O3          Correct
C       Al3O2          Incorrect
D       O3Al2          Incorrect
E       AlO            Incorrect
Table 1.2 Example of AUC feedback
The learner has multiple tries to get the answer correct.
Question: What is the chemical formula for the ionic compound made from the elements oxygen and aluminum?
Choice  Response text  Feedback displayed when the response is selected
A       OAl            Incorrect. Try again.
B       Al2O3          Correct.
C       Al3O2          Incorrect. Try again.
D       O3Al2          Incorrect. Try again.
E       AlO            Incorrect. Try again.
Table 1.3 Example of KCR feedback
The learner has one chance to get the question correct.
Question: What is the chemical formula for the ionic compound made from the elements oxygen and aluminum?
Choice  Response text  Feedback displayed when the response is selected
A       OAl            Incorrect. The correct response is B.
B       Al2O3          Correct
C       Al3O2          Incorrect. The correct response is B.
D       O3Al2          Incorrect. The correct response is B.
E       AlO            Incorrect. The correct response is B.
Table 1.4 Example of KCR+ feedback
The learner has multiple tries to get the answer correct.
Question: What is the chemical formula for the ionic compound made from the elements oxygen and aluminum?
Choice  Response text  Feedback displayed when the response is selected
A       OAl            Incorrect. See page 293 in your chemistry textbook for help. Try again.
B       Al2O3          Correct
C       Al3O2          Incorrect. See page 293 in your chemistry textbook for help. Try again.
D       O3Al2          Incorrect. See page 293 in your chemistry textbook for help. Try again.
E       AlO            Incorrect. See page 293 in your chemistry textbook for help. Try again.
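Read together, Tables 1.1-1.4 amount to a small dispatch rule: the feedback text and the number of attempts allowed depend only on the feedback level and on whether the chosen response is correct. The following is a minimal sketch of that rule in Python; it is my own illustration using the hypothetical question data from the tables, not the software used in any study discussed here.

```python
from dataclasses import dataclass

@dataclass
class Question:
    prompt: str
    choices: dict[str, str]  # choice letter -> response text
    correct: str             # letter of the correct choice
    hint: str                # elaboration used by KCR+ feedback

def feedback(q: Question, choice: str, level: str) -> tuple[str, bool]:
    """Return (feedback text, whether the learner may try again)."""
    if choice == q.correct:
        return "Correct.", False
    if level == "KOR":    # verification only; one attempt (Table 1.1)
        return "Incorrect.", False
    if level == "AUC":    # answer-until-correct (Table 1.2)
        return "Incorrect. Try again.", True
    if level == "KCR":    # identify the correct response; one attempt (Table 1.3)
        return f"Incorrect. The correct response is {q.correct}.", False
    if level == "KCR+":   # correct response plus elaboration, with AUC (Table 1.4)
        return f"Incorrect. {q.hint} Try again.", True
    raise ValueError(f"unknown feedback level: {level}")

q = Question(
    prompt="What is the chemical formula for the ionic compound "
           "made from the elements oxygen and aluminum?",
    choices={"A": "OAl", "B": "Al2O3", "C": "Al3O2", "D": "O3Al2", "E": "AlO"},
    correct="B",
    hint="See page 293 in your chemistry textbook for help.",
)
print(feedback(q, "C", "KCR+"))
```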
Feedback from the Cognitive Perspective
The levels of feedback complexity that relate most closely to the cognitivist
approach all facilitate more complex interactions between the learner and the
feedback provider by focusing on the nature of the response provided by the learner.
Topic-contingent (TC) and response-contingent (RC) feedback interventions contain
specific information to help the learner determine the correct answer and are tailored
to the type of question and the response given. Feedback designed to provide specific
information about a particular topic or concept (TC) is a more elaborate form of
KCR+ because it increases the amount of information provided to the learner during
the feedback interaction. While KCR+ may provide the learner with additional
information (e.g., a page number in the textbook where information about the
question can be located), TC provides a specific feedback prompt designed to address
the focus of that particular question. For example, a CAI module may ask a question
about writing a chemical formula from its constituent elements. If the learner selects
an incorrect answer, the KCR+ feedback would prompt the student to review her or
his notes and read page 293 of the chemistry textbook for help on writing the correct
formula. In contrast, TC feedback would prompt the student to use the periodic table
to determine the charge of each of the elements in the ion form and provide a hint on
how to combine the elements together.
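As a worked illustration of the reasoning this TC hint is meant to prompt (my addition; the arithmetic is standard chemistry rather than text drawn from the modules), the charge balance dictates the subscripts:

```latex
% Al forms Al^{3+} ions and O forms O^{2-} ions; the formula unit must be neutral.
\[
  2(+3) + 3(-2) = 0
  \quad\Longrightarrow\quad
  \mathrm{Al_2O_3}
\]
```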
Response-contingent feedback adds one more level of elaboration to the
feedback provided when an incorrect response is selected. Instead of giving feedback
that is specific to the topic of the question, RC assumes that the learner made some
sort of cognitive error when he or she selected a particular answer and the feedback
provided is designed to address the error he or she made. Tables 1.5-1.6 compare and
contrast these two types of feedback as to how the feedback is presented to the learner
and as to how many chances the learner has to answer the question correctly.
Table 1.5 Example of TC feedback (cognitivist approach)
The learner has multiple tries to get the answer correct.
Question: What is the chemical formula for the ionic compound made from the elements oxygen and aluminum?
Choice  Response text  Feedback displayed when the response is selected
A       OAl            Incorrect. Remember, when writing formulas, the charge of the ionic compound must add up to zero and the cation should be written before the anion. Try again.
B       Al2O3          Correct
C       Al3O2          Incorrect. Remember, when writing formulas, the charge of the ionic compound must add up to zero and the cation should be written before the anion. Try again.
D       O3Al2          Incorrect. Remember, when writing formulas, the charge of the ionic compound must add up to zero and the cation should be written before the anion. Try again.
E       AlO            Incorrect. Remember, when writing formulas, the charge of the ionic compound must add up to zero and the cation should be written before the anion. Try again.
Table 1.6 Example of RC feedback (cognitivist approach)
The learner has multiple tries to get the answer correct.
Question: What is the chemical formula for the ionic compound made from the elements oxygen and aluminum?
Choice  Response text  Feedback displayed when the response is selected
A       OAl            Incorrect. When writing ionic formulas, which element always goes first? What are the charges of the two ions? Think about these hints and try again.
B       Al2O3          Correct. Aluminum, the metal, assumes a 3+ charge in its ionic form. Oxygen, the nonmetal, takes on two additional electrons to form O2- ions. Thus, to make the overall compound neutral, 2 Al and 3 O ions are required, resulting in the correct ionic formula unit Al2O3. Great job!
C       Al3O2          Incorrect. You're really close... but I think you got confused with the charges of each ion. Aluminum will lose three electrons; that makes it what charge? Oxygen gains two electrons. Now, put the elements together so that the net charge of the ionic compound is zero. Try again, you can do it!
D       O3Al2          Incorrect. Which of these elements is the metal? What order should the elements in an ionic formula be given? Use your periodic table to identify the metal and the nonmetal in this question and try again; you're almost there!
E       AlO            Incorrect. When forming ionic compounds, the net charge of the overall formula unit must add up to zero. Now, using the periodic table to guide you, what are the charges of each of the two elements in this question when they form stable ions? Work carefully and you'll get it right!
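The operational contrast between the two cognitivist types is likewise simple: TC attaches one topic-level hint to every incorrect answer, while RC keys a distinct hint to each distractor. Below is a minimal sketch, again my own illustration with wording abbreviated from Tables 1.5-1.6:

```python
# Topic-contingent (TC) vs. response-contingent (RC) feedback, abbreviated
# from Tables 1.5-1.6; the data structures are illustrative only.

TOPIC_HINT = ("Remember, when writing formulas, the charge of the ionic "
              "compound must add up to zero and the cation should be "
              "written before the anion. Try again.")

# RC addresses the specific error implied by each incorrect choice.
RC_HINTS = {
    "A": "Which element always goes first in an ionic formula? Try again.",
    "C": "Check the charges: aluminum loses three electrons; oxygen gains two.",
    "D": "Which of these elements is the metal? What order should they appear in?",
    "E": "The net charge of the formula unit must add up to zero. Try again.",
}

def tc_feedback(choice: str, correct: str = "B") -> str:
    return "Correct." if choice == correct else f"Incorrect. {TOPIC_HINT}"

def rc_feedback(choice: str, correct: str = "B") -> str:
    return "Correct." if choice == correct else f"Incorrect. {RC_HINTS[choice]}"

print(tc_feedback("D"))  # the same hint for every wrong answer
print(rc_feedback("D"))  # a hint tailored to this particular error
```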
Relative Effects of Feedback Complexity on Academic Achievement
Numerous studies have investigated the relative effects of the simpler feedback types (i.e., KOR, AUC, KCR, and KCR+) on academic achievement (e.g., Clariana, 2001; Clark & Dwyer, 1998; Gordijn & Nijhof, 2002). However, while the designs of these studies are often similar, their findings do not combine into a body of evidence either supporting or refuting the hypothesis that increasing feedback complexity also increases academic achievement (see reviews in Azevedo & Bernard, 1995; Clariana, 1993; Mory, 1996, 2004).
Many studies have also investigated the effects of the more elaborate feedback types (i.e., TC and RC); however, the conclusions drawn from these studies are inconsistent and thus fail to make a convincing case that the more elaborate forms of feedback are more effective for increasing academic achievement than the less elaborate forms (see reviews by Bangert-Drowns et al., 1991; Clariana, 1993; Mason & Bruning, 1999; Mory, 1996, 2004). With such a large and conflicting body of literature examining the effect of feedback on academic achievement, one is led to seek alternative explanations for why the existing feedback research does not conclusively support any one type of feedback (i.e., KOR, AUC, KCR, KCR+, TC, or RC) as having the greatest impact on learning.
Conclusion
The problem is that the existing research measuring how feedback in CAI affects achievement is conflicting and fails to generate an understanding of the role that CAI feedback plays in student achievement; thus, a different approach to understanding these failures is needed. The Bangert-Drowns et al. (1991) feedback processing model illuminates two features of how feedback can positively affect the learner's cognitive state that I believe can lead to a better understanding of why the previous research was inconclusive.
Thus, I designed my research not only to attempt to explain past research
anomalies, but also to make recommendations for future improvements to the type of
feedback programmed into CAI tools. By examining how the mindful processing of feedback varies across different levels of feedback complexity, and how these differences affect the learner's level of self-efficacy, I hope to describe why previous research has failed to find the best type of feedback for promoting academic achievement in CAI environments.
I used both quantitative and qualitative methods to understand how all the concepts
were connected. The quantitative methods were designed to answer the general
questions of whether or not different levels of feedback complexity in CAI significantly affected the learner's academic achievement and self-efficacy. Qualitative methods were necessary to more completely describe how
different learners approach CAI and the feedback provided within the learning tools,
and whether or not these different approaches played a role in the measured
quantitative changes.
Structure of the Dissertation
This dissertation is composed of five chapters. In Chapter 1, I have described the purpose and the conceptual and theoretical framework; listed the research questions and hypotheses; and presented a brief overview of the methodology. I have also reviewed the literature surrounding the concepts of feedback and self-efficacy. I describe the methodology of the study for both the quantitative and qualitative approaches in Chapter 2. For each approach, I address the sampling, measures, data collection and analysis procedures, and limitations. I present the quantitative findings in Chapter 3 according to the research questions and hypotheses proposed and the qualitative findings in Chapter 4. Finally, in Chapter 5, I incorporate the quantitative and qualitative findings to present conclusions based on these data. I highlight similarities and
differences between this study and existing research, and suggest implications for
future research. As I foreshadowed in the introduction to this dissertation, I also
provide my own recommendations for CAI feedback design based on the findings of
this study. In conclusion, I outline the limitations of this study and provide a
description of how these limitations may have affected the study outcomes.
CHAPTER 2
METHODOLOGY
The driving research questions behind this study are (a) How does feedback in
chemistry CAI affect student achievement on an objective-driven assessment? and (b)
How does feedback in chemistry CAI affect students' levels of science self-efficacy?
I completed two pilot studies in preparation for this research. The first study had two
independent variables, gender and exposure to different types of feedback during a
chemistry CAI module. The dependent variables were academic self-efficacy (ASE) and performance on an objective-driven assessment (ODA) of the chemistry concepts covered in the module. No significant changes in ASE across time were found, and no significant between-subjects or within-subjects effects for the ODA were observed. The second
pilot investigated the same independent variables. Self-efficacy and achievement
were also investigated; however, the more general ASE was narrowed to the content-
specific science self-efficacy (SSE). A significant within-subjects effect for time was observed at the 95% confidence level. Analysis of the means for SSE over time revealed an increase in SSE from pretest to posttest. No other meaningful between-subjects or within-subjects effects for the ODA were observed at the 95% confidence level. However, a mixed methods approach yielded valuable qualitative
insights into why only one overall quantitative effect was observed.
Because of my findings from these pilots, I employed both quantitative and
qualitative research methods. Academic achievement, as measured on a multiple-
choice, objective-driven assessment, can be easily investigated using quantitative
methods. However, self-efficacy is a social construct and thus, is less easily
quantifiable. There exist Likert-type measures of self-efficacy that have yielded high
levels of reliability and validity; but, to fully understand the phenomenon of self-
efficacy and how it is related to feedback provided in CAI, qualitative methods must
also be employed. Therefore, to capitalize on the strengths of each method, I used
both quantitative and qualitative research methods. Additionally, the concurrent use
of both methods of data collection provided unique opportunities for gathering
triangulation evidence and generating a better picture of what types of feedback most
impact the self-efficacy and, subsequently, the academic achievement of the learners.
The mixed-methods approach required me to break the broader research questions down further into questions that could be answered specifically by the two different approaches. The quantitative questions are: (a) Do different types of feedback in
computer-assisted instruction modules affect the score of science students on an
objective-driven chemistry assessment? and (b) Do different types of feedback in
computer-assisted instruction modules affect students' levels of science self-efficacy?
Qualitatively, these same questions can be approached from a more exploratory
perspective. Tentatively, the primary qualitative research questions are: (a) How do
students use different levels of feedback provided in computer-assisted instruction
modules? and (b) How do different types of feedback affect how confident a student
is in her or his ability to learn science?
In this chapter, I address the quantitative and qualitative analyses separately.
The data collection for both approaches occurred concurrently with the primary
emphasis on the quantitative data analysis. I used the qualitative data to provide
evidence of triangulation as well as to generate a more complete picture of how
learners use feedback in CAI and what effects, if any, the feedback had on an individual's self-efficacy. I begin by discussing the quantitative analysis design,
subject and sampling procedures, setting and materials, independent and dependent
variables, the instrumentation used, data collection and analysis procedures, and the
limitations of the methods chosen. In the qualitative section, I discuss the overall
approach and rationale for my research, the qualitative site and population selection,
my role as the researcher, the data collection methods, management and analysis
procedures, and the limitations of the qualitative approach. Finally, a summary of the
mixed methods approach will highlight the strengths and weaknesses of each design
and how they overlap to provide a more complete analysis of the research questions.
Study Participants
The participants for this study were students enrolled in first-year general
chemistry at a suburban high school in Centennial, Colorado. One hundred and ninety
students were enrolled in nine sections of chemistry and 108 returned both the student
and guardian informed consent forms signed. Participation in the study was optional,
and no extra credit or other compensation was awarded to those who chose to
participate. Of the 108 participants, 53 were male and 55 were female. All
participants were sophomores, juniors, or seniors in high school; and their ages
ranged from 16 to 18, with an overall mean age of 17. The ethnicity breakdown of the
entire sample was 67.6% Caucasian, 12.0% African-American, 11.1% Asian-
American, 8.3% Hispanic, and 1.0% of Middle-Eastern descent. The participants
were randomly assigned to one of three treatment groups. The final data set, omitting
any participants who did not complete one or more of the measures or treatments,
consisted of 95 participants. The demographic information for each group is
displayed in Table 2.1. Because gender, ethnicity, and age were not factors of this
design, these values are provided as qualitative information only to help understand
the limitations of the study for generalizing to a broader population. Permission to
gather data was granted through the Human Subjects Review Committee (HSRC) at
the University of Colorado at Denver and Health Sciences Center (Appendix D).
Additionally, permission from school administrators was obtained on the condition
that signed informed consents from both the student and the legal guardian were
obtained before the start of the study (Appendices A-C).
Table 2.1 Demographic breakdown of the experimental groups
Group             Gender (M/F)  Caucasian  African-American  Asian-American  Hispanic  Middle-Eastern
Group C (N = 36)     21/15         67%          17%                8%            4%          4%
Group D (N = 27)     13/14         77%          12%                8%            3%          0%
Group E (N = 32)     15/17         60%          12%               14%           14%          0%
Quantitative Analysis
Academic achievement is quantifiable when defined using scores on objective
assessments of knowledge and understanding. Carefully constructed achievement
tests can give a reliable and valid diagnostic evaluation of student progress, especially
when the tests are clearly aligned to specific instructional objectives (Hopkins, 1998).
Thus, the first research question (i.e., do different types of feedback in chemistry CAI modules affect the scores of science students on an objective-driven chemistry assessment?) was addressed by quantifying the students' knowledge and understanding of the chemistry topics deemed essential to the unit on acid and base chemistry.
Self-efficacy is a social construct that is not as easily quantifiable. However,
this construct has been widely investigated in numerous empirical studies (e.g.,
Mone, Baker, & Jeffries, 1995; O'Brien, Kopala, & Martinez-Pons, 1999; Pajares &
Schunk, 2001a; P. L. Smith & Fouad, 1999; S. M. Smith, 2001; Wood & Locke,
1987; Zeldin & Pajares, 2000). Researchers have shown that self-efficacy can be
measured with responses on a Likert-type scale to carefully worded, content-specific
items (Bandura, 2001a; Pajares, 1996). Therefore, the second research question (i.e.,
do different types of feedback in chemistry CAI modules affect students' levels of
science self-efficacy?) was addressed using an established measure of science self-
efficacy that asked specific questions about the students confidence to complete
certain science tasks as well as how they perceived their science abilities.
Design
This study investigated the effects of different levels of feedback in CAI on participants' scores on an objective-driven chemistry assessment and on levels of
science self-efficacy. The design for the study included two separate three-group,
true-experimental designs. I randomly assigned participants to one of three different
feedback groups, which varied in the type of feedback presented in the otherwise
identical CAI modules. The pretests and posttests of the dependent variables occurred
at the beginning and end of a three-week chemistry unit about acids and bases. The
four different CAI modules composing the study's treatment were spread over the
course of the unit, and each module administered a series of objective-driven
chemistry practice multiple-choice questions. The independent variable, level of
feedback, had three levels. These groups, labeled groups C, D, and E, varied in the
type of feedback presented upon answering a question.
Group C participants received text-based KOR and KCR feedback. Group D
participants received text-based KOR feedback for incorrect responses and KCR+
feedback for correct responses. The KCR+ feedback was delivered via audio with accompanying text captions. Group E participants received topic-contingent and response-contingent (TC/RC) feedback for incorrect responses and the same KCR+
feedback for correct answers as the group D participants. All feedback for group E
participants was delivered using both audio and text captions. Table 2.2 summarizes
the differences in feedback by group and response. For a more complete picture of
how the modules varied by feedback provided, I have provided screen shots from
each module of the same question in Figures 2.1-2.8.
Table 2.2 Feedback provided according to response and group
Group  Response   KOR  KCR  KCR+  TC/RC
C      Incorrect   ✓
C      Correct          ✓
D      Incorrect   ✓
D      Correct               ✓
E      Incorrect                   ✓
E      Correct               ✓
The two dependent variables, investigated separately, were studied using an
objective-driven chemistry assessment and self-reported level of science self-efficacy,
as evaluated on a Likert-type measure. The objective-driven assessment (ODA) was
composed of 60 multiple-choice questions that were aligned with the same textbook
objectives as the CAI module questions. It is important to note that, to reduce effects
of pretest sensitization, the questions on the ODA were not identical to the ones
contained in the CAI modules. The 48-question measure of science self-efficacy was
developed by Shari Britner and Frank Pajares and addresses facets of science self-
efficacy such as (a) science anxiety, (b) science self-concept, and (c) self-efficacy for
self-regulation (Britner & Pajares, 2001). The Likert-type scale ranged across six values, labeled according to the participant's individual confidence for completing a task, or the participant's self-beliefs as to how true or false a particular statement was when describing her or his own feelings and attitudes about learning science. I
describe each measure more thoroughly in the Dependent Variables section of this
chapter.
Settings and Materials
I conducted this study over 18 regularly scheduled class periods spanning 23
calendar days. All pretests and posttests were administered either during the
participants' regularly scheduled class or during a study period in the case of students
who were absent for either the pretest or the posttest dates. The participants attended
four scheduled sessions in the computer lab during which they completed the four
CAI modules that accompanied the content covered previously in class. The computer
lab contained 32 Macintosh computers, each equipped with an optical CD-ROM
drive. Participants received a CD for their assigned group; and, if participants were in
the D or E treatment groups, then each participant received a set of headphones. Prior
to the second, third, and fourth visits to the lab, each classroom teacher was provided
with a list of the individuals absent on the previous lab day(s) and instructions for
getting those individuals caught up with the rest of the participants. Any other missed
modules were completed in an optional computer lab session scheduled at the end of
the study, before the posttests. Four different chemistry teachers had students
participating in the study. However, effects due to teacher involvement were taken
into account by the successful random assignment of participants to the different
treatment groups.
Independent Variables
Participants were randomly assigned to one of three groups, which constituted
the independent variable for the study. I provided the classroom teachers with four
chemistry CAI modules for their students to complete throughout the course of the
unit. I designed the CAI modules to consist of 18 to 21 multiple-choice questions that
were aligned to the specific textbook objectives for the chapter on acid and base
chemistry. The three levels of the independent variable differed according to the type of feedback presented in response to incorrect and correct answers.
The modules for treatment group C were designed to deliver text-based KOR
and KCR feedback only. A sample question from the first module is shown in Figure
2.1. When a participant in group C answered a multiple-choice question incorrectly, the text-based feedback on the screen simply stated, "Incorrect, try again." If the answer was correct, then the feedback stated, "Correct. Advance to the next question."
Once the participant chose the correct answer, a button to advance to the next
question appeared in the lower right-hand corner of the screen. Screen shots of an
incorrect response and correct response example for group C are displayed in Figure
2.2 and Figure 2.3, respectively.
Figure 2.1 Sample Question from the First CAI Module [screen shot]
Figure 2.2 Example of KOR Feedback [screen shot]
Figure 2.3 Example of KCR Feedback [screen shot]
Treatment group D received text-based KOR feedback for incorrect responses
identical to the group C incorrect feedback (Figure 2.2). However, when group D
participants clicked on the correct answer, feedback that discussed why the chosen
answer was correct (KCR+) was both displayed on the screen and heard from an
audio sound track. The use of captioned audio feedback was employed to prevent the
participant from skipping the feedback by simply clicking to the next question. The
button to advance to the next question did not appear on the screen until the end of
the feedback for the correct answer. Thus, to advance to the next question, the participant had to determine the correct answer and listen to the entire KCR+ feedback.
I have provided screen shots of the sequence of KCR+ feedback provided as text and
audio for a correct response in one question from a group D module (Figure 2.4).
Figure 2.4 Example of KCR+ Feedback [screen shot]
Group E received captioned audio feedback for all answers, correct or
incorrect. The incorrect answer feedback was individualized to the type of error that
the student may have made, either conceptual or mathematical, which may have led him or her to choose that particular answer (topic-contingent or response-contingent).
The correct answer feedback for group E participants was identical to that received by
the group D participants (Figure 2.4). I have provided screen shots of the sequence of
TC/RC feedback provided as text and audio for each of four possible incorrect
responses in one question from a group E module (Figures 2.5-2.8).
Figure 2.5 Example of TC/RC Feedback, 1 of 4 [screen shot]
Figure 2.6 Example of TC/RC Feedback, 2 of 4 [screen shot]
Figure 2.7 Example of TC/RC Feedback, 3 of 4 [screen shot]
Figure 2.8 Example of TC/RC Feedback, 4 of 4 [screen shot]
Dependent Variables
This study investigated two separate dependent variables: (a) level of science
self-efficacy and (b) score on an objective-driven assessment. Science self-efficacy
was assessed using the 48-item measure developed by Britner and Pajares (2001).
The scale asked students to provide judgments along a six-point, Likert-type
continuum. The questions addressed various facets of science self-efficacy beliefs
such as science self-concept, self-efficacy for regulated learning, science anxiety, and
the students' value beliefs for science education. This measure has been used by other self-efficacy researchers (Britner & Pajares, 2001b; Pajares, Britner, & Valiante, 2000), and the Cronbach's alpha coefficient ranged from .79 to .81.
The second dependent variable, score on an objective-driven chemistry
assessment, consisted of 60 multiple-choice questions. These questions were chosen
from the textbook test software and were aligned to the textbook objectives and sub-
objectives. Sample questions, aligned with the textbook objectives, are presented in
Table 2.3.
Table 2.3 Sample questions from the objective-driven assessment
Objective 1.4: The pH of a solution is 9. What is its H3O+ concentration?
a. 10^-6 M    c. 10^-3 M
b. 10^-7 M    d. 9 M
Objective 2.5: What is the acid-ionization constant, Ka, for the ionization of acetic acid, shown in the reaction CH3COOH(aq) + H2O(l) → H3O+(aq) + CH3COO-(aq)?
a. [H3O+][CH3COO-]    c. [H3O+][CH3COO-] / [CH3COOH]
b. [H3O+][CH3COO-] / ([CH3COOH][H2O])    d. [CH3COOH] / ([H3O+][CH3COO-])
Objective 4.1: The substances produced when KOH(aq) neutralizes HCl(aq) are
a. HClO(aq) and KH(aq).    c. H2O(l) and KCl(aq).
b. KH2O+(aq) and Cl-(aq).    d. H3O+(aq) and KCl(aq).
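For reference, the relationships these sample items assess can be stated compactly (standard chemistry definitions, added here for clarity; they are not reproduced from the measure itself):

```latex
\[
  \mathrm{pH} = -\log_{10}[\mathrm{H_3O^+}]
  \quad\Longrightarrow\quad
  [\mathrm{H_3O^+}] = 10^{-\mathrm{pH}}\ \mathrm{M},
  \qquad
  K_a = \frac{[\mathrm{H_3O^+}][\mathrm{CH_3COO^-}]}{[\mathrm{CH_3COOH}]}
\]
```

Liquid water is omitted from the Ka expression because, as the solvent, its concentration is effectively constant.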
I met with the four classroom teachers, and we developed the daily class content, homework, and laboratory experiments around these same objectives. This team of teachers was accustomed to working closely together, and it was standard
that all chemistry classes followed the same schedule and covered the same content in
similar manners. Content validity was addressed by the use of expert reviewers. The
measure was reviewed by a panel of three experts with an average of 9 years of
chemistry teaching experience for accuracy and readability. They also evaluated the
alignment of the questions to the textbook objectives, examined the balance of
cognitive processes required, and verified the overall relevancy of the questions.
Based on their expert judgment, revisions to several questions were made to improve
the wording and balance of content. The final version of the measure was reviewed
again, and the result was a 60-item measure that was broken down by objective (see
Table 2.4). Table 2.5 separates the 60 ODA questions into their respective objectives
and taxonomy level. The breakdown makes it obvious that the test represents the
content and process objectives in proportion to their importance, a property of a test
that is important for content validity.
Table 2.4
Breakdown of items according to textbook objectives
15-1: What are acids and bases? (N = 19)
  15-1.1: Describe the distinctive properties of acids and bases (3)
  15-1.2: Distinguish between the terms strong and weak as they apply to acids and bases (5)
  15-1.3: Explain the unusually high electrical conductivities of acidic solutions (2)
  15-1.4: Name and describe the functional groups that characterize organic acids and bases (1)
  15-1.5: Use Kw to calculate a solution's hydronium ion or hydroxide ion concentration (8)
15-2: Can the strengths of acids and bases be quantified? (N = 11)
  15-2.1: State the Bronsted-Lowry definitions of an acid and a base (3)
  15-2.2: Differentiate between monoprotic, diprotic, and triprotic acids (3)
  15-2.3: Identify conjugate acid-base pairs (3)
  15-2.4: Calculate Ka from the hydronium ion concentration of a weak acid solution (2)
15-3: How are acidity and pH related? (N = 17)
  15-3.1: State the definition of pH and explain the relationship between pH and H3O+ ion concentration (8)
  15-3.2: Perform calculations using pH, [H3O+], [OH-], and quantitative descriptions of aqueous solutions (7)
  15-3.3: Describe two methods of measuring pH (2)
15-4: What is a titration? (N = 13)
  15-4.1: Write an ionic equation for a neutralization reaction, and identify its reactants and products (2)
  15-4.2: Describe the conditions at the equivalence point in a titration (5)
  15-4.3: Discuss two methods used to detect the equivalence point in a titration (1)
  15-4.4: Calculate the unknown concentration of an acid or base using titration data (5)
Table 2.5
A table of specifications for the ODA
Content strata (objectives/topics)                         Knowledge                    Higher                     Total
15-1: What are acids and bases?                            9 (1-2, 5-7, 11-12, 14-15)   8 (3-4, 8-10, 13, 18-19)   17 (28%)
15-2: Can the strengths of acids and bases be quantified?  5 (20, 23-24, 26-27)         6 (21-22, 25, 28-30)       11 (18%)
15-3: How are acidity and pH related?                      6 (31-32, 34-35, 46-47)      13 (16-17, 33, 36-45)      19 (32%)
15-4: What is a titration?                                 6 (49-52, 54-55)             7 (48, 53, 56-60)          13 (22%)
Totals                                                     26                           34                         60 (100%)
Note. Numerals in parentheses refer to specific items on the test.
The reliability of this measure was addressed through an item analysis based
on the protocol from Hopkins (1998) of the posttest ODA data from all participants.
This analysis determined that certain questions yielded low item discrimination (D-
index) values. Of the 60 items, five questions were discarded because they yielded a
negative D-index value. Three other questions were examined because of their low D-
index values; however, these items remained in the study. The decision to keep these
three questions in the measure was based on the opinions of the expert panel. They
unanimously agreed that the discarded items were ambiguously worded. However,
the other three questions were described as very difficult; and they were not surprised
that nearly all students, regardless of overall performance on the test, missed these
questions. While they were not surprised at the outcome, they felt that the questions
were fair and worded appropriately for the content that was covered in class.
Therefore, of the 55 questions, three had a D-index value indicating poor item
discrimination; 14 were labeled fair; 19 were labeled good; and the remaining 19
items, with D-index values over .40, had excellent discrimination. Because there is a
direct relationship between item discrimination values and a test's internal consistency reliability, items with higher D-index values increase the instrument's reliability. Further, a corrected split-half reliability coefficient of 0.74 was calculated, indicating that the instrument was acceptably reliable (see Figure 2.9).
Figure 2.9 Split-Half Reliability Plot for the ODA [scatter plot of the split-half correlation of the ODA]
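Both statistics behind this item analysis are straightforward to compute. The sketch below is my own illustration, assuming the common 27% upper/lower-group convention for the D-index (the exact convention in Hopkins, 1998, may differ) and using simulated rather than actual study data:

```python
import numpy as np

def d_index(scores: np.ndarray, item: int, frac: float = 0.27) -> float:
    """Item discrimination: proportion correct among the top scorers minus
    the bottom scorers; negative values flag items to discard."""
    totals = scores.sum(axis=1)
    order = np.argsort(totals)
    n = max(1, int(round(frac * len(totals))))
    low, high = scores[order[:n], item], scores[order[-n:], item]
    return high.mean() - low.mean()

def corrected_split_half(scores: np.ndarray) -> float:
    """Odd/even split-half correlation with the Spearman-Brown correction."""
    odd, even = scores[:, 1::2].sum(axis=1), scores[:, 0::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

rng = np.random.default_rng(0)
scores = (rng.random((95, 55)) < 0.6).astype(int)  # simulated 0/1 item matrix
print(d_index(scores, item=0), corrected_split_half(scores))
```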
Data Collection and Analysis Procedures
Two days before the start of the study, I visited each class and introduced the
study. According to the HSRC approval, the potential participants were made aware
of the purpose of the study, the potential risks and benefits involved in participating,
and the voluntary nature of the study. I also discussed the methods to maintain
confidentiality, such as using a random nine-digit identification number and sealing
the consent envelopes so that their teachers were unaware of which students were
participating and which were not. I provided time for students to ask questions and
then I distributed the informed consents, one for the student and one for her or his
legal guardian, in envelopes for the students to take home, read, and return. As
another layer of protection for confidentiality, I requested that all students turn in
their sealed envelopes, regardless of whether or not the forms were signed. The
envelopes were collected by the classroom teachers over the next several days until
all envelopes from each class were accounted for. I collected the envelopes from the
classroom teachers and compiled a master list of those students who had both forms
signed. Data for the study included only the participants who gave full consent; the remaining students' data were not used.
I measured the dependent variables, science self-efficacy (SSE) and academic achievement, with both a pretest and a posttest. The classroom teachers administered the
pretests on the first day of the unit before any in- or out-of-class learning of the
content. They also administered the posttests on the last day of the unit, after all of the unit's objectives were addressed. To maintain confidentiality, all students, regardless
of their status as participants, completed the pretests and posttests. This practice
ensured that the students who did not give consent were anonymous to their
classroom teacher, so there was no pressure from their teacher or peers to consent.
Students answered the questions on the SSE measure by circling the number that best matched their judgment for each question. To maintain confidentiality,
participants did not put their names on the SSE pretest or posttest; and all information
was collected via their randomly assigned nine-digit identification number. To ensure
that all questions were answered, students were reminded verbally and in writing to
review all 48 items to verify that they had circled only one answer for every question
and that no questions were left blank.
The pretest and posttest for the ODA were composed of only multiple-choice questions. Students were permitted to use scratch paper, a calculator, a periodic table,
and a pencil. All answers were recorded on a Scantron, and the scratch paper was
discarded. Students placed their name and identification number on the pretest and
posttest Scantrons. The results were machine scored against the master key.
Unanswered questions were counted as incorrect.
Data Analysis Procedures
Two separate analysis of covariance (ANCOVA) tests were used to analyze the data (for a graphic depiction of the design, see Table 2.6). The random assignment of participants to one of three treatment groups was verified by performing a one-way analysis of variance on the pretest scores for the ODA and SSE measures, where .95F(2,107) = 1.003, p = .370 and .95F(2,107) = .790, p = .457, respectively. Because the random assignment of participants held true, the pretest was included in the model only to increase power. I present all quantitative findings and discuss their implications in Chapter 3 of this dissertation.
Table 2.6 Graphic depiction of the experimental design
Score on Dependent Variable
Pretest Posttest
Group C
Group D
Group E
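In practice, this analysis amounts to a one-way ANOVA on the pretest followed by an ANCOVA on the posttest with the pretest as the covariate. A minimal sketch using Python's statsmodels follows (illustrative only; the file and column names are hypothetical, and the original analysis was not necessarily run in Python):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical file with one row per participant: group, pretest, posttest.
df = pd.read_csv("oda_scores.csv")

# One-way ANOVA on the pretest to verify random assignment to groups.
pretest_check = sm.stats.anova_lm(
    smf.ols("pretest ~ C(group)", data=df).fit(), typ=2)

# ANCOVA: posttest by group, with the pretest as the covariate.
ancova = sm.stats.anova_lm(
    smf.ols("posttest ~ C(group) + pretest", data=df).fit(), typ=2)

print(pretest_check, ancova, sep="\n\n")
```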
Limitations
While this quantitative study has a very strong design, there are many threats
to both internal and external validity. Furthermore, limitations in the form of
weaknesses in the CAI module may have played a role in the results of the
ANCOVAs reported. Threats to internal validity include (a) instrumentation, (b)
testing, and (c) mortality. Instrumentation is a threat to the internal validity of the
study because the participants knew that the pretest ODA did not influence their
grade in the course. Therefore, some participants may not have tried their hardest to
answer all questions on the pretest ODA to the best of their ability. Testing is a threat
to internal validity because of the pretest/posttest design. As part of the qualitative
data collection, the participants were asked a series of questions about their science
self-efficacy and their experiences with the CAI modules. These questions may have changed the participants' attitudes toward the treatment, possibly affecting the
posttest results. Finally, of the original 113 participants who returned both consent forms, only 109 were kept for the study analysis. The other students were eliminated
from the study due to multiple absences from their scheduled class period. This
mortality rate is not severe, so this threat is of little consequence.
Threats to external validity include selection bias and pretest sensitization.
Because this study was relatively small (N = 109) and because the sample came
entirely from one suburban high school, the generalizability of the results to a larger
population is weakened. Furthermore, because the participants in this study were
exposed to a pretest of the concepts and the target population would most likely not
have a pretest of the concepts, the target population might respond differently to the
treatment.
Qualitative Analysis
I collected qualitative data from participants throughout the entire unit of
study. The purpose of the qualitative data was to generate a better understanding of
how feedback in CAI is related to both achievement and self-efficacy. The main
research questions were subdivided into two more specific, yet still open-ended, qualitative questions: (a) How do students use different levels of feedback provided in
computer-assisted instruction modules? and (b) How do different types of feedback
affect how confident a student is in her or his ability to learn science? These questions
were explored through a series of journal responses that the participants completed
after each of the four CAI modules. The questions for each of the journals were
focused on understanding each participant's perspective on the different facets of
science self-efficacy, such as science anxiety, self-efficacy for self-regulated learning,
and their judgments on the value of science. Each journal also asked at least one
specific question about how the participant interacted with the CAI module and the
feedback provided.
The majority of the journal questions were taken directly from the same SSE
measure used in the quantitative study (Britner & Pajares, 2001). I chose to use these
questions because they had already been established as relevant for understanding the
different facets of science self-efficacy. Further, I wanted to gain more information
about the underlying reasons for why students answered the Likert-type items in the
way they did on the pretest and posttest. Examples of the journal questions taken
from the SSE measure are found in Table 2.7.
Each journal response also included at least one specific question on how the
learner processed the feedback provided during the module they just completed. I
developed these questions with the intent to understand not only if but also how the
feedback was being mindfully processed by the learners. Sample questions can be
found in Table 2.8. The full set of journal questions for all four modules is in Appendices G-J.
Table 2.7 Sample journal questions from the SSE measure
1. Please describe your confidence in your ability to pass science class at the end of the semester. What grade do you think you will earn? What are your strengths? What are your weaknesses?
2. Please describe how well you are able to study when there are other interesting things to do. What conditions are best for your learning? What conditions are worst for your learning?
3. Please describe what your "ideal" environment is for learning. For example, do you learn best through classroom discussions, reading alone, study groups with your peers, one-on-one interaction with your teacher, using a computer for research and/or practice, etc. Please feel free to mention another environment that I did not just list.
4. How well can you motivate yourself to do schoolwork? What role(s) do your parent(s)/guardian(s), friends, and teachers play in helping you get your work completed?
Table 2.8 Sample journal questions about the CAI modules
Describe your initial reaction to the computer module you just completed. Were there any features of the tutorial that helped you learn? Were there things that you liked or disliked? In your response to this question, please think about how you responded to the other questions in this journal: are there any connections that you see between how you described yourself as a learner and how you felt about this particular computer learning experience?
Now, thinking about the 2nd module that you just completed, please reflect on the following statement. Is it true, false, or somewhere in between? Why do you feel this way?
Using a computer to review helps me feel more confident that I will do better on future examinations.
At the conclusion of the study, I purposefully selected several students to participate in a follow-up interview, both to triangulate the journal and quantitative findings and to provide an opportunity to explore further certain facets of the CAI learning experience.
Overall Approach and Rationale
Because the goal of the qualitative portion of this study was to better understand the phenomenon of how feedback is used in CAI and whether or not different types of feedback influence learners' levels of academic achievement and science self-efficacy, I used a phenomenological approach to generating the questions, analyzing the data, and reporting the findings (Creswell, 1998). The origins of this discipline are in philosophy, sociology, and psychology because the intent is to understand "the essence of experiences about a phenomenon" (p. 65). I analyzed the data to find significant statements and meanings to generate themes and general descriptions of
the experiences of the learners in CAI modules. From this analysis, I developed a
description of the essence of the experience to help explain the quantitative findings
and generate a broader understanding of how feedback is best presented in CAI.
Site and Population Selection
Because the qualitative data collection occurred concurrently with the
quantitative data collection, the site and population selection for the qualitative
portion of this study is almost identical to that in the quantitative portion. The only
exception is that, at the conclusion of the study, I purposefully selected eight
participants to complete a follow-up interview. Of the eight, six consented to
participate in a follow-up interview with me about their experiences and perceptions
during the 3-week unit. I selected these students using both stratified purposeful and
extreme or deviant case purposeful sampling strategies. I identified students from
each group who, based on their journal entries, could facilitate comparisons between
the subgroups (i.e., stratified purposeful). Additionally, I identified students as
possible interview participants based on their journal entries that set them apart from
the rest of the group because of their extreme like or dislike of the CAI modules (i.e.,
extreme or deviant case). In summary, I collected journal data from all participants
and interview data from six purposefully selected participants.
Researcher's Role
As the researcher, I chose to perform this study at the same school where I
currently teach. However, I did not use my own students, so I needed to establish a
rapport with the participants and explain why I was interested in having their help.
This rapport was essential to the qualitative portion of the study because of the
personal nature of the journal questions. I felt strongly that the participants would
need a reason to answer questions such as, "Sometimes I get so nervous in science that, even though I think I know something, I can't remember it when I need it." The
participants were asked to state whether this statement was true or false and, then, to
describe why he or she felt that way. If the participants did not know about my
background and why I was interested in their answers to these types of questions, I
feared that they would not take the time to answer the journal questions honestly and
thoroughly.
I addressed each class to introduce the study and distributed the informed
consent letters. At this time, I deliberately identified myself as a student who, like them, had homework. I also emphasized that having their help with
this research project would help me understand how students learn best. This
introduction was made with the intent of gaining access to more students for the
study. I hoped that by being upfront with them about what I was doing and why, and by emphasizing what I hoped to learn from the research, more students would be
interested in participating and providing meaningful data in the form of thoughtful
journal entries.
Additionally, I was very careful to emphasize how I would maintain
confidentiality by using random identification numbers. Because many of the journal
questions asked the participants to discuss their feelings about themselves in terms of
their confidence to complete certain tasks or how they felt about their individual
abilities to learn, I wanted to make sure that the participants felt safe to tell me the
truth without fear of having others read their responses. In the written directions for
each journal response, I reiterated my promise to keep their responses confidential, and they identified themselves only by their assigned identification number, not by
name. Also, as I read and coded the journal responses, I did not match names with
identification numbers until I had isolated the eight individuals that I wanted to ask
for a follow-up interview.
To gain access to the individuals selected for follow-up interviews, I requested
their participation in writing. To acknowledge their willingness to participate, the
written invitation asked them to sign their name to the slip and return it to their
classroom teacher. The instructions clearly stated that their participation was optional,
and there would be no penalty if they chose not to consent.
Data Collection Methods
Students completed and submitted journal responses via the Web immediately
following each of the four CAI modules completed over a 3-week period. In the
directions, I asked students to respond to each question with 2 to 3 sentences and not to
worry about formatting their responses for correct spelling or grammar. The first
journal contained five questions and immediately followed the completion of the first
module. Four of the questions were designed to explore the participants' self-efficacy
for self-regulation. These questions were similar to the items on the SSE measure
used in the quantitative portion of the study. One additional question asked the
participants to describe their initial impressions of the CAI module as a learning
experience.
The second journal contained six questions, five of which were from the SSE
measure designed to explore the participants' science self-concept (i.e., how they
judge their self-worth associated with their self-perception as a science student). An
additional question asked the participants to discuss how using a computer for review
affects their confidence to perform better on future examinations. The third journal,
composed of seven questions, emphasized science anxiety using five questions worded similarly to those on the SSE measure. The final two questions also addressed science
anxiety but from the context of how using a computer affected their anxiety levels.
The final journal, following the fourth CAI module, was composed of five questions
asking the participants to describe how they used the feedback in the CAI modules.
Complete copies of all journal questions can be found in Appendices G-J.
Of the eight students whom I asked to participate in the follow-up interviews, three were in Group C, three were in Group D, and two were in Group E. Of the final six who gave consent, three were in Group C, two were in Group D, and one was in Group E. I conducted the interviews during the participants' normally scheduled class
time in the week following the posttests. Prior to the interview, I gave the
interviewees a list of the general questions that I would ask. The interviews took
place in the privacy of my office at the school or in an empty classroom. I recorded
the entire dialogue on my laptop and saved it onto a compact disc. Occasionally, I
asked students to clarify their previous statement, or I asked an additional question to
better understand the meaning of their previous statements.
Data Management Procedures
Students submitted each journal response electronically to my private email
account via a Web-based form. For identification purposes, I asked participants to
give their 9-digit number at the start of each form. I combined all responses into one
large Excel workbook, separated by journal and group so that all the Group C responses for the first journal could be read from one worksheet, all the Group D responses for the first journal could be read on a second worksheet, and so on.
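Although I assembled the workbook by hand, the same journal-by-group split can be expressed programmatically. The following Python sketch is illustrative only; the file and column names are hypothetical and were not part of the study.

import pandas as pd

# Illustrative sketch only; file and column names are hypothetical.
# Assumed input: one row per submission, with columns 'id' (the participant's
# 9-digit number), 'journal' (1-4), 'group' (C, D, or E), and 'response'.
responses = pd.read_csv("journal_responses.csv")

# Write one worksheet per (journal, group) pair, e.g., "J1_GroupC", so that
# each journal's responses for each group can be read from its own sheet.
with pd.ExcelWriter("journals_combined.xlsx") as writer:
    for (journal, group), subset in responses.groupby(["journal", "group"]):
        subset.to_excel(writer, sheet_name=f"J{journal}_Group{group}", index=False)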
I transcribed the interviews from the audio recordings and included my
observational notes from the sessions. Observational notes included information on
the setting, the body language of the interviewee, and other notes such as the school
bell ringing in the middle of the interview. I tried to generate a verbatim transcript,
but occasionally the recording was insufficient to clearly understand the words
spoken. I noted these instances in the observational notes column.
Data Analysis Procedures
I subjected the journal entries and the interview transcriptions to the four-step Colaizzi (1978) method referenced by Creswell (1998): (a) I read all subjects' journal entries and the interview transcriptions to get a general feeling for the themes that might emerge; (b) I noted significant statements from multiple entries, treating a statement as significant if it directly related to the phenomena of feedback and science self-efficacy; (c) I extracted and summarized meanings from the significant statements; and (d) I identified the themes and organized them according to categories. I determined significance by
comparing the statements to accepted models of feedback processing (Bangert-
Drowns et al., 1991; Mory, 2004) or common themes from the development of self-
efficacy (Bandura, 1997; Schunk & Pajares, 2001).
Summary
In this chapter, I outlined the quantitative and qualitative methods used for this
study. The quantitative design used a true experimental approach with three different
treatment groups. The treatment was composed of different types of feedback
provided in CAI modules the participants used during a three-week unit in their high
school chemistry class. The dependent variables measured science self-efficacy and
academic achievement. The science self-efficacy measure was a 48-item Likert-type
measure developed by Britner and Pajares (2001). Academic achievement was
measured using a 60-item objective-driven assessment of the chemistry concepts
covered in the three-week unit. I measured both dependent variables as a pretest and
posttest. The purpose of the pretest data was to verify that the random assignment of participants produced equivalent treatment groups. Later, I used the pretest as a covariate in the
quantitative analysis to increase the power of the test. I collected qualitative data in
the form of journal entries and interviews. These data were collected, organized, and
analyzed using the concurrent triangulation methods described by Creswell (1998,
2003) in an attempt to add meaning to the interpretation of the quantitative results.
In the next two chapters, I present my findings and interpretations of the
quantitative (chapter 3) and qualitative (chapter 4) data. For these two chapters, I treat
the data separately. Then, in chapter 5, I tie the two methods together to show how
the results converge to create a more complete understanding of how feedback
provided in computer-assisted instructional modules, science self-efficacy, and
academic achievement are related. Chapter 5 also outlines my recommendations for
future research and design recommendations for creating CAI that maximizes the use
of feedback to promote academic achievement.
CHAPTER 3
A SUMMARY OF THE QUANTITATIVE ANALYSES
AND PRESENTATION OF THE FINDINGS
The overarching questions I address in this study are (a) How does feedback
in chemistry CAI affect students' levels of science self-efficacy? and (b) How does
feedback in science CAI affect student achievement on an objective-driven
assessment? I narrowed these questions for the sake of clarity, specificity, feasibility,
and importance to include: (a) Do different types of feedback in science CAI, namely
KOR, KCR, KCR+, topic contingent, and response contingent, affect learners' levels
of science self-efficacy? (b) Do different types of feedback in science CAI, namely
KOR, KCR, KCR+, topic contingent, and response contingent, affect learners' scores
on an objective-driven science assessment? (c) How do learners use different levels of
feedback provided in science CAI modules? and (d) How do different types of
feedback affect how confident a learner is in her or his ability to understand science?
I addressed the first two questions using quantitative methods. This chapter is
dedicated to the interpretation and analysis of those results.
A Review of the Quantitative Study Design, Method, and Hypotheses
I performed this study over the course of a three-week chemistry unit, and the study participants were all students enrolled in general chemistry at a suburban high school in Aurora, Colorado. During the unit, the chemistry classes visited the school's
computer lab four times. The quantitative portion of the mixed-methods design for
this study included one independent and two dependent variables. The independent
variable had three levels. During the visits to the computer lab, participants completed
four CAI modules that delivered multiple-choice style questions aligned to the
chemistry unit objectives. Participants in the first level of the independent variable,
group C, received knowledge of response (KOR) and knowledge of correct response
(KCR) feedback. Group D participants received KOR and knowledge of correct
response plus elaboration (KCR+). Group E participants received KOR, KCR+, and
topic contingent and response contingent (TC/RC) feedback.
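For reference, the treatment conditions can be summarized in a compact mapping. This is only a sketch using the feedback labels from the text; the data structure itself is not from the study materials.

# Feedback types delivered to each treatment group, as described above.
# KOR = knowledge of response; KCR = knowledge of correct response;
# KCR+ = KCR plus elaboration; TC/RC = topic/response contingent.
FEEDBACK_BY_GROUP = {
    "C": ("KOR", "KCR"),
    "D": ("KOR", "KCR+"),
    "E": ("KOR", "KCR+", "TC/RC"),
}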
The dependent variables I investigated were academic achievement and
science self-efficacy (SSE). I quantified academic achievement by administering an
objective-driven assessment (ODA) of the chemistry objectives that accompanied the
unit of instruction (see Appendix E). I measured the second dependent variable, SSE,
with a 48-item Likert-type self-report questionnaire designed by Britner and Pajares
(2001) to measure science self-efficacy (see Appendix F). The data for both measures
were entered into an Excel workbook and later imported into SPSS for the statistical
analyses. Validity and reliability information for each measure indicated that they
both not only measured what they purported to measure, but they also did so with a
high degree of consistency. I provided a thorough description of each test in chapter 2
of this dissertation.
Analysis of Covariance
I chose to use an analysis of covariance (ANCOVA) model as the statistical
tool to analyze the quantitative data from the SSE and ODA measures. I chose the
ANCOVA over the repeated measures analysis of variance (RM-ANOVA) for several
reasons, all of which are well documented by statisticians over the last several
decades (Bonate, 2000; Huck & McLean, 1975; Jennings, 1988). The underlying
linear model for the RM-ANOVA design is not completely sound because the pretest
scores were collected prior to the treatment. This is a problem because it is impossible
for the treatment effects or an interaction effect to affect the pretest scores. The linear
model, however, assumes that all measurements are made after the treatment.
Therefore, it is inappropriate to use the RM-ANOVA design to analyze pretest and
posttest data.
The ANCOVA model is a combination of regression and analysis of variance.
The pretest data act as a type of statistical control because the model uses the pretest
scores to adjust the posttest scores as if all of the groups had started from the same pretest baseline. The covariate serves several functions. It is used to
(a) reduce error variance, (b) consider any preexisting mean group difference on the
covariate, (c) consider the relationship between the covariate and the dependent
variable, and (d) yield a more precise and less biased estimate of the group effects.
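In standard notation (a sketch, not reproduced from the dissertation itself), the one-covariate ANCOVA model and its adjusted means can be written as

$Y_{ij} = \mu + \tau_j + \beta\,(X_{ij} - \bar{X}_{..}) + \varepsilon_{ij}, \qquad \bar{Y}_{j,\mathrm{adj}} = \bar{Y}_{j} - \beta\,(\bar{X}_{j} - \bar{X}_{..}),$

where $Y_{ij}$ is the posttest score of participant $i$ in group $j$, $\tau_j$ is the effect of treatment group $j$, $X_{ij}$ is the pretest (covariate) score, $\beta$ is the pooled within-group regression slope of posttest on pretest, and $\varepsilon_{ij}$ is random error; the second expression gives the adjusted posttest mean for group $j$.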
The assumptions of the ANCOVA are (a) random and independent errors,
(b) homogeneity of variance, (c) normality, (d) linearity, (e) fixed independent
variable, (f) covariate measured without error, and (g) homogeneity of regression
slopes (Lomax, 1992). Each of these assumptions can be tested to ensure that they are
not violated, and I test each exhaustively later in this chapter, immediately following
the presentation of the ANCOVA results for each measure.
I tested two similar hypotheses with the ANCOVAs of the ODA and SSE
data. Both hypotheses compared the adjusted group means on the measures of the
dependent variables. The null hypothesis for the ODA measure was that, in the population, there is no difference in adjusted posttest means on the ODA between treatment groups. Similarly, the null hypothesis for the SSE measure was that, in the population, there is no difference in adjusted posttest means on the SSE between treatment groups.
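In symbols (a sketch, with subscripts naming the three treatment groups), both null hypotheses take the same form:

$H_0\colon \mu_{C,\mathrm{adj}} = \mu_{D,\mathrm{adj}} = \mu_{E,\mathrm{adj}},$

against the alternative that at least one adjusted population mean differs from the others.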
First Dependent Variable, Posttest Scores on an Objective-Driven Assessment
I measured academic achievement using an ODA, composed of multiple-
choice style questions aligned to the various chemistry objectives for the unit on acids
and bases. An ANCOVA, with the pretest as the covariate, did not reveal any
significant differences between the treatment groups on the adjusted posttest means
(F(2, 106) = 1.311, p = .274). Summaries of the unadjusted posttest means, adjusted posttest means, and the ANCOVA results are presented in Table 3.1, Table 3.2, and Table 3.3, respectively. Thus, the results failed to reject the null hypothesis that there were no significant differences between the adjusted posttest means on the ODA among the three treatment groups.
Table 3.1 Summary of unadjusted posttest means for the ODA
Group Mean Std. Deviation N
Group C 31.13 8.370 39
Group D 33.83 8.169 35
Group E 32.64 8.472 36
Total 32.48 8.339 110
Table 3.2 Summary of adjusted posttest means for the ODA
95% Confidence Interval
Group N Mean Std. Error Lower Bound Upper Bound
Group C 39 30.852 1.262 28.349 33.355
Group D 35 33.566 1.332 30.925 36.207
Group E 36 33.193 1.320 30.576 35.810
Covariates appearing in the model are evaluated at the following values: PRETEST = 17.77.
Table 3.3 ANCOVA summary table for the ODA
Source Type III Sum of Squares df Mean Square F Sig.
Adjusted Between 162.400 2 81.200 1.311 .274
Adjusted Within 7579.464 106 61.947
Covariate 877.298 1 877.298 14.162 .000
Corrected Total 7579.464 109
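As a quick arithmetic check on Table 3.3, the reported F ratio equals the adjusted between-groups mean square divided by the adjusted within-groups mean square:

$F = MS_{\mathrm{between}} / MS_{\mathrm{within}} = 81.200 / 61.947 \approx 1.311.$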
Testing the Assumptions of the ANCOVA Model for the ODA Measure
Using SPSS and Excel, I tested the assumptions of the ANCOVA model to
ensure that they were not violated, since any violation would invalidate the interpretation of the results.
No assumptions were violated for the ODA ANCOVA. The first assumption, random
and independent errors, was tested by generating three residual plots, one per
treatment group (see Figure 3.1, Figure 3.2, and Figure 3.3). All three plots appeared
to be random; thus, I concluded that the first assumption of the ANCOVA was met.
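For readers working outside SPSS, the following Python sketch shows one way such per-group residual plots could be generated. It is illustrative only, not the procedure used in the study; the file and column names are hypothetical.

import pandas as pd
import statsmodels.formula.api as smf
import matplotlib.pyplot as plt

# Hypothetical file and columns: 'pretest', 'posttest', 'group' (C, D, or E).
df = pd.read_csv("oda_scores.csv")

# Fit the one-covariate ANCOVA model: posttest ~ group + pretest.
model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
df["residual"] = model.resid

# One residual plot per treatment group, mirroring Figures 3.1-3.3;
# a patternless scatter is consistent with random, independent errors.
for name, subset in df.groupby("group"):
    plt.figure()
    plt.scatter(subset["posttest"], subset["residual"])
    plt.axhline(0, linestyle="--")
    plt.xlabel("Observed Value")
    plt.ylabel("Residual")
    plt.title(f"Residual Plot for Group {name}, ODA Measure")
plt.show()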
Figure 3.1 Residual Plot for Group C, ODA Measure
Figure 3.2 Residual Plot for Group D, ODA Measure
Figure 3.3 Residual Plot for Group E, ODA Measure
[Plots omitted: each figure graphs residuals against observed values for one treatment group.]