Citation
Problem solving as an outcome of problem-based learning

Material Information

Title:
Problem solving as an outcome of problem-based learning a case study at the United States Air Force Academy
Creator:
Rusnak, Barbara Jo
Publication Date:
Language:
English
Physical Description:
xii, 113 leaves ; 28 cm

Subjects

Subjects / Keywords:
Problem-based learning ( lcsh )
Problem solving ( lcsh )
Problem-based learning ( fast )
Problem solving ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 99-113).
General Note:
School of Education and Human Development
Statement of Responsibility:
by Barbara Jo Rusnak.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
228002791 ( OCLC )
ocn228002791
Classification:
LD1193.E3 2007d R87 ( lcc )

Full Text
PROBLEM SOLVING AS AN OUTCOME OF PROBLEM-BASED LEARNING
A CASE STUDY AT THE UNITED STATES AIR FORCE ACADEMY
by
Barbara Jo Rusnak
B.S. in Geography, Michigan State University, 1987
M.S. in Geography, Eastern Michigan University, 1997
A thesis submitted to the
University of Colorado at Denver and Health Sciences Center
in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
School of Education and Human Development
2007


This thesis for the Doctor of Philosophy
degree by
Barbara Jo Rusnak
has been approved
by
Joanna C. Dunlap
Ellen Stevens
Date


Rusnak, Barbara J. (Doctor of Philosophy, Educational Leadership and Innovation)
Problem Solving as an Outcome of Problem-Based Learning: A Case Study at the United
States Air Force Academy
Thesis directed by Dr. R. Scott Grabinger and Dr. Joanna C. Dunlap
ABSTRACT
Problem solving is an essential ability emphasized throughout the United States Air Force
and other military services. Due to the nature of armed forces missions, military officers
are often placed in situations in which they must solve ill-defined problems, sometimes in
situations involving national security. Problem-Based Learning is an instructional
methodology that situates students at the center of the learning process. While in charge
of their own learning, students collaboratively investigate and solve authentic problems,
constructing their own knowledge. Although problem-solving ability is one of the important
outcomes promoted in Problem-Based Learning, do we know that students actually acquire
and improve upon this lifelong ability as a result of this process? Given the importance of
problem-solving abilities in our world and, in particular, our military services, this
dissertation examines the role and effectiveness of Problem-Based Learning in the
development of problem-solving abilities through a case study conducted in a Geopolitics
course at the United States Air Force Academy.
This abstract accurately represents the content of the candidate's thesis. I recommend its
publication.
Signed
Signed
Joanna C. Dunlap


DEDICATION PAGE
I dedicate this publication to:
My husband and my hero, Lieutenant Colonel Bob Rusnak, USAF,
for his undying love, understanding, and support.
My precious sons, Bobby and Brett,
for helping me relearn the importance of education and the thrill of learning.
My dear mother, Marilyn Francis,
for her incredible help and support in maintaining my sanity.
My father, Phil Francis, Jr.,
for his encouragement and humor; I dearly wish he could be here with us today.
My grandfather, Phil Francis, Sr.,
for his support and not-so-subtle encouragement.


ACKNOWLEDGMENT
I wish to thank all those who helped me with this study:
My advisor and co-chair, Dr. Scott Grabinger,
for his guidance, encouragement, and tremendous sense of reality.
My instructor and co-chair, Dr. Joni Dunlap,
for her clarity, motivation, and enthusiasm.
My instructor and committee member, Dr. Ellen Stevens,
for helping me learn and grow.
My friend, colleague, and committee member, Dr. Jim West,
for his support and brainpower.
My friend and colleague, Lieutenant Colonel Mike Lucchesi, USAF (Ret),
for his mentoring and unrelenting inspiration.
My colleagues at the USAF Academy who helped me with this study,
Dr. Jamie Harris, Lieutenant Colonel Rob Gilchrist, USAF (Ret), and Dr. Craig Foster,
and
Lieutenant Colonel Steve Slate, USAF (Ret),
and the USAF Academy Department of Economics and Geosciences,
for giving me this wonderful opportunity to continue my education.


TABLE OF CONTENTS
Figures..................................................................xii
Tables..................................................................xiii
CHAPTER
1. INTRODUCTION...........................................................1
Purpose............................................................1
Significance of the Study..........................................3
Conceptual Framework...............................................4
Relationship of Constructivist Pedagogies to
Conceptual Framework..............................................5
Relationship of PBL to Constructivist Pedagogies..................7
Relationship of Problem Solving to PBL............................8
Research Questions................................................10
Operational Definitions...........................................11
PBL.............................................................11
Problem-Solving Abilities.......................................12
Problem-Solving Confidence......................................12
Didactic Instruction............................................13
Methodology.......................................................13
Design..........................................................13
Data Analysis...................................................14
Dissertation Structure............................................14
2. REVIEW OF THE LITERATURE..............................................15
Introduction......................................................15
Constructivism....................................................15


Student-Centered Learning.....................................17
Intentional Learning..........................................19
Collaborative Learning........................................22
PBL.............................................................23
The Evolution of PBL..........................................24
PBL Defined...................................................25
The PBL Process...............................................27
Phase 1....................................................27
Phase 2....................................................28
Phase 3....................................................29
Phase 4....................................................30
The Tutor Role................................................31
Outcomes of PBL...............................................32
Problem Solving.................................................32
Problem Solving Defined.......................................33
Problem-Solving Confidence Defined............................35
The Problem-Solving Process...................................35
Summary.........................................................38
3. METHODOLOGY.........................................................41
Introduction....................................................41
Design..........................................................42
Subjects and Sampling Procedures................................44
Setting and Materials.........................................46
Independent Variable............................................49


PBL Instruction.....................................................50
Didactic Instruction................................................53
Dependent Variables...................................................54
Problem-Solving Confidence..........................................54
Problem-Solving Ability.............................................54
Instrumentation.......................................................54
Problem-Solving Confidence..........................................55
Problem-Solving Ability.............................................56
Data Collection Procedures............................................58
Data Analysis Procedures..............................................59
Summary...............................................................60
4. RESULTS OF STUDY.........................................................62
Introduction..........................................................62
Differences in Problem-Solving Confidence Between Groups..............63
Statistical Techniques..............................................63
Results.............................................................64
Differences in Problem-Solving Ability Between Groups.................66
Statistical Techniques..............................................66
Results.............................................................68
Change in Problem-Solving Confidence Within the PBL Group.............69
Statistical Techniques..............................................69
Results.............................................................69
Change in Problem-Solving Ability Within the PBL Group................71
Statistical Techniques..............................................71
Results.............................................................71


Change in Problem-Solving Confidence Over Time Between Groups......72
Statistical Techniques.........................................73
Results........................................................73
Conclusion........................................................74
5. CONCLUSIONS...........................................................76
Introduction......................................................76
Summary of Findings...............................................76
Problem-Solving Confidence......................................76
Problem-Solving Abilities.......................................77
Explanation of Results............................................77
Problem-Solving Confidence......................................77
Problem-Solving Abilities.......................................78
Limitations.......................................................79
Implications for Further Research.................................81
Recommendations...................................................84
Conclusion........................................................85
Addendum..........................................................86
APPENDIX
A. Problem-Solving Confidence Inventory.................................87
B. Problem-Solving Assessment...........................................88
C. Problem-Solving Assessment Grading Rubric............................89
D. Problem Set..........................................................91
E. Peer Group Assessment Form...........................................94
F. Problem-Solving Confidence Differences...............................95


G. Student Evaluation of Teaching...................................96
H. Problem-Solving Ability Differences..............................98
BIBLIOGRAPHY .............................................................99


LIST OF FIGURES
Figure
1.1 CONCEPTUAL FRAMEWORK FOR DEVELOPING PROBLEM-SOLVING
ABILITIES AND CONFIDENCE..................................4
1.2 CONCEPTUAL FRAMEWORK LINKING CONSTRUCTIVISM WITH
INTENTIONAL, STUDENT-CENTERED, AND COLLABORATIVE LEARNING.6
1.3 CONCEPTUAL FRAMEWORK LINKING INTENTIONAL, STUDENT-
CENTERED, AND COLLABORATIVE LEARNING WITH PROBLEM-BASED
LEARNING..................................................8
1.4 CONCEPTUAL FRAMEWORK LINKING PROBLEM-BASED LEARNING WITH
PROBLEM-SOLVING ABILITY AND CONFIDENCE....................9
2.1 CONCEPTUAL FRAMEWORK FOR DEVELOPING PROBLEM-SOLVING
ABILITIES AND CONFIDENCE.................................16
3.1 EXPERIMENTAL DESIGN FOR MEASURING PROBLEM SOLVING........43


LIST OF TABLES
Table
3.1 Experimental and control group demographics................................45
3.2 Percentage of subjects in divisional majors................................46
3.3 Ranges and average scores of students accepted to the United States
Air Force Academy.........................................................47
3.4 Assessment questions derived from problem-solving stages..................57
4.1 Problem-solving confidence differences....................................64
4.2 Problem-solving assessment reliability analysis...........................67
4.3 Problem-solving ability differences.......................................68
4.4 Problem-solving confidence changes in experimental group..................70
4.5 Problem-solving ability changes in experimental group.....................72
4.6 Problem-solving confidence changes over time..............................74
5.1 Student evaluation of teaching for experimental and control groups.........83
F.1 Problem-solving confidence differences.....................................95
G.1 Student evaluation of teaching: experimental group.........................96
G.2 Student evaluation of teaching: control group..............................97
H.1 Problem-solving ability differences........................................98


CHAPTER 1
INTRODUCTION
Purpose
Problem solving is an essential ability emphasized throughout the United States
Air Force (USAF) and other military services. Due to the nature of armed forces missions,
military officers are often placed in situations in which they must solve ill-defined problems,
sometimes in situations involving national security. In his remarks to a group of aviators,
the Secretary of the Air Force asserted that the Air Force is committed to "... thought,
creativity, education, and problem-solving skills" (Roche, 2004). General Peter
Schoomaker, former Commander in Chief of the United States Special Operations
Command, insists that modern military leaders must be "problem-solving warriors able to
develop and use creative solutions in ambiguous circumstances" (Cohen & Tichy, 1999,
p. 1). Squadron Officer School, the Air Force's leadership school for junior officers,
recently added problem-solving abilities to its list of evaluation areas (SOS, 2007).
Successful problem-solving abilities have often been the key to successful military
operations. When Special Operations Forces landed in northern Iraq in 1991 to quell a riot
at a Kurdish refugee camp, the unit's commander defused the strained situation by
refocusing the mob's anger. The commander persuaded the local Kurds to list the
problems they were encountering and provide possible solutions. Cohen and Tichy (1999)
claim that the commander's actions and the ensuing corrective actions that his forces took
"saved thousands of lives" (p. 3).
Combat operations are often made more difficult because of the risk of collateral
damage to innocent civilians. Officers must be capable of making tactical decisions based


on sound problem-solving strategies. For example, during United States military
operations designed to help restore democracy in Haiti in 1994, battlefield commanders
were assigned the problem of eliminating specific military threats, such as guardhouses,
while minimizing civilian casualties. Those commanders had to analyze the tactical
situation and develop a plan to quickly resolve the problem (Cohen & Tichy, 1999). The
problem-solving abilities of these officers helped their units successfully accomplish their
missions, while limiting collateral damage.
The previous examples clearly illustrate the necessity of successful problem-
solving abilities in today's dynamic environments. Some educators, however, do not know
how to teach these skills effectively. Problem-based learning (PBL) is one effective
method of teaching future officers and other professionals in situations that closely
resemble those found in their professions (Boud & Feletti, 1991). PBL is an instructional
methodology in which students collaborate to solve authentic problems (Grabinger,
Dunlap, & Duffield, 1997; Stage, Muller, Kinzie, & Simmons, 1998; Woods, 1994).
Students learn some important lifelong learning abilities, such as collaboration,
communication, decision-making, self-directed learning, reflection, and problem solving by
solving authentic, real-life situations (Barrows, 1985; Delisle, 1997; Engel, 1991; Woods,
1994). Problem-solving abilities provide students with the necessary framework in which to
organize and store their knowledge to enable later use of that knowledge efficiently and
effectively to solve problems as real-world practitioners.
Although problem-solving ability is one of the important outcomes promoted in PBL,
do we know that students actually acquire and improve upon this lifelong ability as the
result of the PBL process? Unfortunately, there is little significant, empirical research
evaluating the change in problem-solving abilities of students who participated in PBL
courses (Berkson, 1993; Distlehorst, personal communication, January 9, 2003). Some


authors assert that most studies of PBL are inconclusive because too many variables are
unaccounted for when comparing PBL with didactic courses (Berkson, 1993; Norman &
Schmidt, 2000). Most research focuses on the improvement of disciplinary knowledge and
abilities along with student and faculty satisfaction (Albanese, 2000; Davies, 2000).
Studies examining problem-solving abilities in PBL environments are typically of small
scope, report small effect sizes, and use non-randomized samples (Albanese, 2000;
Colliver, 2000; Davies, 2000; Vernon & Blake, 1993). These studies, essentially limited to
the medical field, focus on understanding and clinical abilities rather than non-discipline
specific problem-solving abilities (Albanese, 2000; Davies, 2000; Vernon & Blake, 1993).
While few education researchers have conducted research on problem-solving
confidence, researchers in the field of psychology have completed studies indicating that
there is a link between problem-solving confidence and problem-solving ability (Heppner &
Lee, 2002). One such study (Cassidy, 2002) showed that subjects with higher problem-
solving confidence had superior problem-solving abilities. Davey, Jubb, and
Cameron (1996) found that negative emotions in subjects with low problem-
solving confidence detracted from their problem-solving abilities. This study focuses on
one important, purported PBL outcome as it examines the effects of the PBL learning
process on students' problem-solving abilities and confidence in problem-solving abilities.
Significance of the Study
Problem-solving abilities are essential for today's highly competitive, complex
world (Barrows, 1985; Roth et al., 1996) and students must learn these abilities to be
successful professionals and members of our society. For example, one of the educational
outcomes designated by the United States Air Force Academy (USAFA) is to produce
officers who can solve ill-defined problems (USAFA, 2007). Given the importance of
problem-solving abilities in our world and, particularly, in our military services, this study


will contribute to the collective research surrounding the role of PBL in the development of
problem-solving abilities.
Conceptual Framework
Figure 1.1 illustrates the conceptual construct of developing problem-solving abilities and
confidence by framing that development within PBL and related constructivist
epistemologies.

Figure 1.1 CONCEPTUAL FRAMEWORK FOR DEVELOPING PROBLEM-SOLVING
ABILITIES AND CONFIDENCE.

PBL is anchored in constructivism: students construct their own
knowledge while collaboratively investigating and solving authentic problems (Grabinger et
al., 1997; Stage et al., 1998; Woods, 1994). In constructivist classrooms, students
generate and demonstrate new knowledge, rather than memorize it from a lecture or book
(Brooks & Brooks, 1999).
Constructivist pedagogies combine cognitive and social interactions with problem-
solving exercises (Hmelo & Evensen, 2000). Constructivist instructors assist and guide
learners during the learning process by focusing on educational pedagogies that align with
PBL. These constructivist pedagogies, according to Brooks and Brooks (1999), Jacobs
and Ferrell (2001), and Stage et al. (1998), include student-centered learning, intentional
learning, and collaborative learning. The first component of this study's framework
examines the relationship between constructivism and these three pedagogies
(Figure 1.2).
Relationship of Constructivist Pedagogies to Conceptual Framework
One constructivist pedagogy, student-centered learning, places students in charge
of decision-making and applying new knowledge to their previously established constructs
(Kohn, 1999; Mayer, 1992; Perkins, 1999). In student-centered classrooms, instructors are
coaches who help provide frameworks from which students construct their knowledge from
their own points of view (Grabinger & Dunlap, 1995; Lave & Wenger, 1991; Mayer, 1992).
Students learn and retain successfully when they are active participants in the learning
process (Alley, 1996). During the process, students also continually assess their strengths
and weaknesses (Glasgow, 1997; Saunders, 1992).


Figure 1.2 CONCEPTUAL FRAMEWORK LINKING CONSTRUCTIVISM WITH
INTENTIONAL, STUDENT-CENTERED, AND COLLABORATIVE
LEARNING.
The second constructivist pedagogy, intentional learning, focuses on helping
students discover how to autonomously exploit their knowledge and abilities across diverse
fields (Bereiter & Scardamalia, 1993). In intentional learning classrooms, learning, rather
than knowledge, is the goal and students take responsibility for their learning and the
learning process (Scardamalia & Bereiter, 1991). According to Dunlap and Grabinger


(2003), intentional learning environments are those in which students become
metacognitively aware and learn to (a) reflect on their knowledge deficits, (b) set their own
learning goals, (c) design higher-order questions they want and need to ask, and (d)
research information that meets their individual interests and needs. Instructors engage
students in interdisciplinary projects and problem-solving activities to help them see the
important connection between knowledge and the real world (Grabinger & Dunlap, 1995).
The last constructivist pedagogy, collaborative learning, focuses on group work
and interaction to help students construct meaning of new knowledge (Bransford, 1990).
Dialogue between students not only deepens their understanding, but also increases
motivation to learn and enhances interpersonal skills (Brooks & Brooks, 1999; Kohn,
1999). Students gain an appreciation for diverse perspectives and the importance of
teamwork (Brooks & Brooks, 1999; Lambert et al., 2002; O'Banion, 1997).
Relationship of PBL to Constructivist Pedagogies
Another component of this study's framework examines the relationship among the
three pedagogies (student-centered learning, intentional learning, and collaborative
learning) and PBL (Figure 1.3). First, PBL encourages students to take responsibility for
their own learning and provides them with opportunities to solve relevant, authentic
problems that are encountered in the real world, a student-centered learning approach.
Second, PBL poses relevant problems to advance lifelong, metacognitive skills such as
knowledge reflection, goal setting, question development, and research techniques
(Brooks & Brooks, 1999; Dunlap & Grabinger, 2003; Kohn, 1999; Mayer, 1992), an
intentional learning approach. Finally, PBL requires students to collaboratively solve
problems. During that process, they gain diverse perspectives and learn the essential
lifelong skill of teamwork (Brooks & Brooks, 1999), which parallels the real world in which
they will be working, a collaborative learning approach.


Figure 1.3 CONCEPTUAL FRAMEWORK LINKING INTENTIONAL, STUDENT-
CENTERED, AND COLLABORATIVE LEARNING WITH PROBLEM-
BASED LEARNING.
Relationship of Problem Solving to PBL
The final component of this study's framework examines the relationship between
PBL and problem solving (Figure 1.4). Successful problem solving entails both ability and
confidence (Bonner, 2000). PBL helps students enhance their problem-solving abilities by
guiding them through a well-established problem-solving process that can be used in


multiple situations and domains (Barrows & Kelson, 1993; Delisle, 1997; Engel, 1991;
Walton & Matthews, 1989). Scandura's (1977) influential work illustrates how the problem-
solving process leads to enhanced problem-solving abilities. Delisle (1997) asserts that we
gain knowledge and abilities through actively doing things (solving problems and
answering questions) in the real world, not by means of abstract or decontextualized
learning. According to Barrows and Kelson (1993), PBL helps students develop an
Figure 1.4 CONCEPTUAL FRAMEWORK LINKING PROBLEM-BASED LEARNING
WITH PROBLEM-SOLVING ABILITY AND CONFIDENCE.
9


approach to problem solving that involves initiative and diligence and a drive to acquire
the knowledge and skills needed (p. 14) to effectively develop solutions.
PBL can potentially improve problem-solving confidence through practice and
repetition. Bonner (2000) found that confidence significantly influences student success.
Further, Zimmerman (2000) asserts that confidence is a measurable indicator that can
predict learning and ability. Thus, problem-solving confidence is a predictor of problem-
solving ability. Because problem-solving confidence is readily measurable and correlated
to the PBL outcome of problem-solving ability, it is a useful supplementary component of
this study. Confidence is parallel to self-efficacy, in that both concepts are defined as a
student's belief that he or she can successfully accomplish a task or skill (Eisenberger,
Conti-D'Antonio, & Bertrando, 2005). For purposes of this study, however, the term
confidence is most applicable. Given the potential link between PBL and problem-solving
abilities and confidence, this study attempts to illustrate that PBL activities can lead to
improved problem-solving abilities and confidence.
Research Questions
This study answers three research questions that attempt to demonstrate the link
between PBL and problem-solving confidence and abilities. The first question is, "To what
extent do students who have had a PBL-based course differ from students who have had a
didactic course (lecture and discussion) in terms of: (a) confidence in their problem-solving
abilities, and (b) their actual problem-solving abilities?" The hypothesis tested is, "In the
population, students who have taken a PBL-based course have: (a) more confidence in
their problem-solving abilities, and (b) more successful problem-solving abilities than do
students who have not taken a PBL-based course."
The second research question is, "To what extent do students who are engaged in
a PBL-based course change during the semester in terms of: (a) confidence in their
problem-solving abilities, and (b) their actual problem-solving abilities?" The hypothesis
tested is, "In the population, students engaged in a PBL-based course: (a) increase
confidence in their problem-solving abilities throughout the semester, and (b) improve their
problem-solving abilities throughout the semester."
The final research question is, "Do students engaged in a PBL-based course
acquire confidence in their problem-solving abilities at a different pace than students
engaged in a didactic course (lecture and discussion)?" The hypothesis tested is, "In the
population, students who have taken a PBL-based course acquire confidence in their
problem-solving abilities more quickly than students who have not taken a PBL-based
course."
Operational Definitions
PBL
PBL is an instructional methodology in which students learn important lifelong
learning abilities such as collaboration, communication, decision-making, self-directed
learning, reflection and metacognition, and problem solving (Barrows, 1985; Barrows,
1988; Engel, 1991; Woods, 1994). This approach involves an authentic learning context in
which students examine ill-defined problems that mirror problems encountered in the real
world that they must solve with a real-world approach (Grabinger et al., 1997; Stage et al.,
1998; Woods, 1994).
The Southern Illinois University School of Medicine model was applied in this
study. This model consists of seven distinctive steps in which the instructor, in the role of
tutor, helps facilitate the student learning process. These steps, as defined by Barrows
(1985) and Barrows and Kelson (1993), guide students to move through the following
process:
1. Determine and examine the problem.
2. Establish what they know about the problem.


3. Recognize what they need to learn.
4. Perform self-directed learning.
5. Apply new knowledge to solve the problem.
6. Synthesize and evaluate what they learned.
7. Evaluate the process and perform peer- and self-assessments.
Problem-Solving Abilities
There are many definitions of problem solving. Wu, Custer, and Dyrenfurth (1996)
offer that problem solving is a set of acts involved in responding to problems in a particular
manner. This definition, although commonly accepted, is too simple and requires further
focus. Mayer (1977) asserts that problem solving is a process in which parts of a problem
are rearranged in order to relate each of them to other parts in a useful, meaningful way.
Once analyzed, those parts can be fit together to devise a solution. Problem solving is
analogous to fitting pieces of a puzzle together. Simon's (1980) work strengthens Mayer's
position when he states that problem solving is not just the simple process of recalling
knowledge. He claims that knowledge must be recalled and processed by using
successful techniques. These techniques, suitably outlined by Barrows (1985), Krynock
and Robb (1999), and Feldhusen and Treffinger (1977), are: (a) identifying the problem, (b)
identifying prior knowledge and determining unknown information which must be
discovered, (c) synthesizing new knowledge to determine possible solutions/hypotheses,
(d) analyzing those solutions/hypotheses, and (e) evaluating the final solution.
Problem-Solving Confidence
Problem-solving confidence, as measured by the Heppner and Petersen (1982)
Problem-Solving Inventory, is the amount of certainty that respondents have in their
abilities to solve problems successfully. Because confidence levels positively correlate to


problem-solving abilities (Heppner & Lee, 2000), this measurement provides another
indicator of PBL's influence on problem-solving abilities.
Didactic Instruction
Didactic instruction is a methodology in which the instructor delivers factual
information primarily through lecture and demonstration. This is a teacher-centered
technique where students learn primarily through listening rather than performing an
activity. The outcome of this instruction is for students to gain basic subject area
knowledge.
Methodology
Design
This study uses a quasi-experimental design with a pre-test, intervening tests, and
a post-test administered to two groups, one experimental and one control. There is one
independent variablemethod (PBL vs. didactic) of instruction. There are two dependent
variables. The first dependent variable is mean scores on an inventory measuring student
confidence in problem-solving abilities, as described in the following section on
measurement strategies. The second dependent variable is mean scores on an
assessment measuring actual problem-solving abilities, also described later.
The experimental group of USAFA students experienced a PBL curriculum throughout the semester, while the control group experienced didactic instruction throughout the semester; both groups were enrolled in Geopolitics (Social Science 112). Students were informed of
the study during the first class period of spring semester, 2004, and were provided with an
Informed Consent Form to return the following class period, in accordance with USAFA
policy. Pretests on confidence and ability were administered during the second class
period and were followed up throughout the semester with a series of post-tests
(confidence only) approximately every two and one-half weeks, culminating in a final post-
test (confidence and ability) during the final class period (class period 42).
Data Analysis
Several analysis strategies were employed in this study. An analysis of covariance determined whether students in the experimental group had greater confidence in their problem-solving abilities. A t-test for a difference of means helped answer the second research question, determining whether students in the experimental group increased confidence in their problem solving, and subject interview data supplemented these quantitative results. An analysis of variance helped answer the third research question, determining whether there was an interaction between method and time when comparing students in the experimental group with students in the control group. Finally, Levene's Test of Equality of Error Variances was conducted to test for homogeneity of variance, as the numbers of subjects in the control and experimental groups were not equal.
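To make these procedures concrete, two of the analyses described above (Levene's test for homogeneity of variance and the t-test for a difference of means) could be sketched in Python with SciPy. The scores, group sizes, and means below are synthetic illustrations, not the study's data.

```python
# Illustrative sketch only: synthetic data standing in for the study's
# confidence scores. Group means, sizes, and spreads are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical post-test confidence scores for the two groups.
experimental = rng.normal(loc=80, scale=10, size=90)  # PBL group
control = rng.normal(loc=88, scale=12, size=75)       # didactic group (unequal n)

# Levene's test for homogeneity of variance, warranted here because
# the two groups have unequal numbers of subjects.
lev_stat, lev_p = stats.levene(experimental, control)

# Independent-samples t-test for a difference of means; use the pooled
# version only if Levene's test does not reject equal variances,
# otherwise fall back to Welch's t-test.
equal_var = lev_p > 0.05
t_stat, t_p = stats.ttest_ind(experimental, control, equal_var=equal_var)

print(f"Levene: W={lev_stat:.3f}, p={lev_p:.3f}")
print(f"t-test: t={t_stat:.3f}, p={t_p:.3f}")
```

The analysis of covariance and the method-by-time analysis of variance would follow the same pattern with a dedicated modeling library, conditioning the post-test scores on the pre-test covariate.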
Dissertation Structure
This dissertation has five chapters. Chapter 1 provided a brief description of the
study's purpose, significance, conceptual framework, research questions, operational
definitions, and methodology. Chapter 2 presents a comprehensive review of the literature
regarding PBL and problem solving. Chapter 3 presents the methodology of the study,
outlining sampling procedures, measurement strategies, data collection methods, and data
analysis approaches. Chapter 4 examines the results and interprets the findings for all three research questions. Finally, Chapter 5 offers a comparison of this study with previous research, outlines the implications of the study for future practice and research, and examines the limitations of the study.
CHAPTER 2
REVIEW OF THE LITERATURE
Introduction
The primary purpose of this chapter is to establish a link between problem solving and
problem-based learning (PBL). The chapter is centered on this study's conceptual
framework (Figure 2.1). It commences with a discussion of constructivism and
its three major components: student-centered learning, intentional learning, and
collaborative learning. The chapter then turns to a discussion of PBL, with a focus on the
instructional process. Following that is a synthesis of the literature on the theory and research surrounding problem solving as an outcome of PBL. This section examines the
processes and strategies involved in generating and developing this essential lifelong
ability and explains the significance of confidence to problem solving. The chapter
concludes with a summary and statement about the connections between the key findings
of the literature review and the present study.
Constructivism
Constructivism advocates student construction of knowledge and meaning. This
epistemology focuses on the generation and demonstration of knowledge rather than the
repetition of knowledge (Brooks & Brooks, 1999). Instructors do not just relay the
knowledge; they assist students in its construction by providing guidance.

Figure 2.1 CONCEPTUAL FRAMEWORK FOR DEVELOPING PROBLEM-SOLVING ABILITIES AND CONFIDENCE. [Figure not reproduced; the diagram depicts constructivism as the foundation of the framework.]

This guidance
can be in the form of evidence and resources, metacognitive questions, and/or modeling
methods (Brush & Saye, 2000; Stage et al., 1998). Constructivism has three major
components that are significant to this study: student-centered learning, intentional
learning, and collaborative learning.


Student-Centered Learning
Student-centered learning supports a holistic view of the learning process from the student's position (Alley, 1996; Barr & Tagg, 1995; Bonwell & Eison, 1991). There is little
agreement among educators on the definition of student-centered learning (Reynolds,
personal communication, November 6, 2001). However, the literature contains many
common facets of its strategies. The goal of the student-centered approach is for students to actively acquire knowledge and abilities based on their own abilities, experiences, and needs (Glasgow, 1997; Stage et al., 1998). To acquire this knowledge and these abilities,
students participate in activities that help them take responsibility for their own learning.
With the aid of their instructor, students continually assess their strengths and
weaknesses, maintaining attentiveness to their development on both a personal and an
academic level (Glasgow, 1997; Saunders, 1992).
The lecture, which epitomizes teacher-centered classrooms, is a long-standing
tradition in higher education (Ediger, 2001; Stage et al., 1998). This typically linear, one-
way path from instructor notes to student notes is a relatively passive classroom
experience for students. The lecture method is based upon six assumptions (Ediger,
2001):
1. The lecture provides the best learning experience for students.
2. The instructor knows the important information that students do not have.
3. The lecture is the most efficient method of teaching.
4. Information imparted by the instructor can be used and applied by students.
5. Student questions waste time.
6. Assessments are conducted through testing.
Teacher-centered classrooms are relatively authoritarian environments in which
instructors determine what students will learn, how students will learn, and which
assessment strategies will be applied (Glasgow, 1997). These qualities are not all
negative, however, because instructors are usually more knowledgeable in their domains and, if they deliver it effectively, can offer sage information on objectives and subject material (Carpenter, 2000). The key is for the learned instructor (coach) to help guide students
(apprentices) in constructing the important knowledge from their own perspectives
(Grabinger & Dunlap, 1995; Lave & Wenger, 1991).
Students take an active role in student-centered learning. According to Dewey (as
cited by Roach, 1996), students actively shape their learning process with assistance from
the instructor or facilitator. By sharing responsibility for their own learning, they can help
determine the style, pace, resources, and structure of learning (Glasgow, 1997; Roach,
1996). This student input into the learning process can involve different events, such as
student assignments or classroom activities. Students can better prepare themselves for
their careers, as the student-centered approach is an optimum method to help students
learn how specialists (scientists and economists, for example) think and work; they learn to use the same tools as specialists do (Alley, 1996).
Students sometimes help determine how learning will be assessed in student-
centered environments (Glasgow, 1997). Assessments in these environments are most
appropriate when they examine contextualized knowledge; present complex, ill-structured
problems; and provide intellectual challenges (Stage et al., 1998; Wiggins, 1989).
Common student-centered assessments include essays, presentations, problem solutions,
outlines, group progress reports, reflections, and instructor-student meetings (Brush &
Saye, 2000; Grabinger & Dunlap, 1995). While these assessments can sometimes be
effective in teacher-centered classrooms, they are typical of appropriate techniques for
assessment in student-centered classrooms.
Intentional Learning
The second chief component of constructivism, intentional learning, has two
primary characteristics. This learning method advocates the importance of both relevance
and student responsibility in the learning process. In intentional learning environments,
students learn skills to actively construct knowledge from information and experiences that
are meaningful to them (Brooks & Brooks, 1999). In essence, students create their own
meaning in the context of their individual situations through active, purposeful activity. The
primary goal of intentional learning is "learning to learn," rather than fact memorization
(Bereiter & Scardamalia, 1989).
The first characteristic of intentional learning, relevance, allows students to realize
the value of the skills and techniques they learn. When information is presented in a way
which is meaningful, relevant, and important to students, those students are more likely to
understand, retain, and later apply that knowledge (Davis, 1993; McKeachie, 1994). Gilles
(1988) asserts that students are also more likely to take ownership of their own learning
when learning objectives are tailored to their interests. Brain studies have shown that brain
activity increases when subjects read meaningful material (Jensen, 1998; McKeachie,
1994). Neural patterns are created that link different areas of the brain (linking prior
knowledge with new knowledge) when information is meaningful; this allows the subject subsequently to use that knowledge more effectively in diverse situations (Jensen, 1998;
McKeachie, 1994). Studies have shown that students who see meaning and relevance in
learning objectives are more comfortable, interested, motivated to learn, and engaged,
thereby furthering and enhancing the learning process (McKeachie, 1994; Ngai, 2006).
Additionally, a study by Martinez (2001) found that intentional learners are more satisfied,
have higher self-efficacy, and perform more successfully than non-intentional learners.
Instructors must be cautious, however, in developing intentional learning objectives
because what is meaningful or relevant to instructors is not necessarily meaningful or
relevant to their students (Brooks & Brooks, 1999; Jensen, 1998).
To promote relevance, the first characteristic of intentional learning, Brooks and Brooks (1999) suggest that successful intentional learning environments should include
1. Learning objectives that incorporate significant concepts rather than detailed facts.
2. Students being encouraged to occasionally veer from established course objectives to pursue related topics that particularly interest them.
3. Students being informed that truth is not constant and that there are diverse
perspectives of truth in the world.
4. Instructors ensuring students are aware that learning and assessment are not
always structured and organized; they are flexible and evolving.
The second characteristic of intentional learning, student responsibility, enhances
students lifelong learning abilities. Intentional learning purports that the process of
learning and organizing knowledge is of greater use to students than the process of simply
memorizing facts to pass a class (Bereiter & Scardamalia, 1989). Palincsar and Klenk (1992)
assert that intentional learning places the student in command of his or her metacognitive
and strategic knowledge or awareness. They contend that intentional learners value the
learning process and learn cross-disciplinary strategies. Students in intentional learning
environments learn from "purposeful, effortful, self-regulated, and active engagement" (Palincsar & Klenk, 1992, p. 212).
Traditional learning environments, as characterized by Palincsar and Klenk (1992), are contrary to intentional learning methods; they do not promote lifelong learning and problem-solving abilities in that they typically produce students who
1. organize their learning around topics rather than goals, which does not give
them the ability to later contextualize knowledge and use it in different
domains;
2. seek to gain only as much information as is necessary to pass a course and do not seek information for its own sake;
3. focus on completing assignments instead of reflecting on the strategies and
processes they used to complete those assignments; and
4. think of the learning process as merely adding information to their personal knowledge base (Grabinger & Dunlap, 1995).
Unfortunately, these didactic environments do not help students learn to transfer
knowledge to new domains and situations (Grabinger & Dunlap, 1995).
Intentional learning environments, alternatively, promote metacognitive awareness
and encourage students to: (a) reflect on their knowledge deficits, (b) set their own learning
goals, (c) design higher-order questions they want and need to ask, and (d) research
information that meets their individual interests and needs (Dunlap & Grabinger, 2003).
Intentional learning classrooms also commonly place students in authentic contexts, which
more readily help them transfer new knowledge to real-world situations (Evans-Stout, 1998). For example, the University of Utah's School of Education developed a successful
program in which students learn by experiencing and problem solving actual classroom
situations (Matthews, 1998). Intentional learning techniques are valuable to many types of
classroom methodologies, including case studies, collaborative learning, PBL, field work,
experiential learning, investigative learning, process writing, games, simulations, and role
playing (Brooks & Brooks, 1999; Jensen, 1998; Lieberman, 1996; McKeachie, 1994).
Humans, by nature, are meaning makers, and intentional learning takes advantage of that natural trait to help students commit to learning and applying new knowledge (Brooks & Brooks, 1999; Kohn, 1999).
Collaborative Learning
Collaborative learning is the third key component of constructivism and involves
groups of students working together interactively to accomplish learning objectives
(Barkley, Cross, & Major, 2005). Collaboration is an important life skill because people use
it at work with colleagues, in the community with fellow citizens and government
representatives, and at home with family (Barkley et al., 2005; Herreman, 1988; Johnson,
1998). Constructivist learning theorists maintain that cooperation, knowledge sharing, and
gaining diverse perspectives, all features of collaborative learning, are important factors of
the learning process (Brooks & Brooks, 1999; Lambert et al., 2002; O'Banion, 1997).
Collaborative learning focuses on students actively engaged in building their
knowledge, rather than having it imparted to them by instructors (Barkley et al., 2005;
Davis, 1993). In collaborative classrooms, students are placed into groups. These groups
range from ad hoc informal groups who work on one project or event for a short duration to
more permanent, formal groups who work together throughout a course (Barkley et al.,
2005; Davis, 1993). Students are not placed into groups and then allowed to blindly
attempt to complete an objective. There are important requirements of the process that
instructors must facilitate (Davis, 1998; Johnson, Johnson, & Smith, 1998; Smith, 1996):
1. Promote interdependence, ensuring each student is committed to the group's
accomplishments.
2. Promote interaction, encouragement, listening, and cooperation.
3. Promote both student and group accountability, making it clear that the group
is responsible for meeting objectives while students are each assessed on
their contributions.
4. Develop teamwork skills, monitoring for unproductive or disruptive students.
5. Implement an evaluation process whereby each student critiques and
assesses the effectiveness of each group member and the group as a whole.
6. Employ an intentional design, creating and structuring meaningful activities
that will focus students.
This list provides the minimum requirements of collaborative learning environments.
Researchers and practitioners differ on the degree to which instructor facilitation is
necessary, from allowing student groups to make nearly all of the decisions, to instructors
consistently intervening (Barkley et al., 2005).
The benefits of collaborative learning are clear. By having discussions with peers,
students are able to explore, clarify, and share thoughts (Golub, 1988). Students often
gain different and diverse perspectives and sometimes understand the vocabulary of peers
more readily than they do of instructors (Golub, 1988; Herreman, 1988; Sills, 1988;
Whitworth, 1988). Studies indicate that learning is enriched and that students learn and master knowledge better, and retain it longer, in collaborative environments (Barkley et al., 2005; McKeachie, 1994; O'Banion, 1997).
These three major components of constructivism, student-centered learning,
intentional learning, and collaborative learning, are all integral to the specific instructional
method of PBL, a focus of this study.
PBL
Since Dewey's criticism of passive instruction early in the 20th Century,
researchers and practitioners have been devising and experimenting with active,
innovative learning strategies (Alley, 1996). Throughout the century, there has been a call
for innovative student-centered activities as a strategy to help students generate
knowledge (Barr & Tagg, 1995; Boggs, 1999; Ediger, 2001; Saunders, 1992; Stage et al.,
1998). Higher education is in the midst of an historic paradigm shift from teacher-centered learning to student-centered learning (Barr & Tagg, 1995; Boggs, 1999; Harden, 2000). This shift substitutes teacher-centered learning's goal of providing instruction through transfer of knowledge with student-centered learning's goal of producing learning through
student discovery and construction of knowledge (Barr & Tagg, 1995). Institutions of
higher education have been encouraged to focus their strategies and resources on this
paradigm shift in an effort to make learning more meaningful and lasting for students. PBL, considered by some to be "the most significant innovation in education for the professions in years" (Boud & Feletti, 1991, p. 13), is one such student-centered strategy involved in this shift.
The Evolution of PBL
Dewey (1924) fashioned a four-step process for problem-solving activities more
than eighty years ago. He suggested a process in which students should (a) define the
problem, (b) list possible reasons for the problem, (c) discover potential solutions, and (d)
evaluate those proposals to determine a final solution. Others have expanded this process
in recent years into the increasingly accepted methodology of PBL. Medical schools,
which have led the way in many educational innovations, were the vanguards of PBL.
They developed the methodology as a means of better preparing future physicians after
they realized that, although their students had learned many important facts in the field of
medicine, they were unable to effectively practice and apply their knowledge to actual
patients (Barrows, 1988; Bridges & Hallinger, 1997; Delisle, 1997).
Teacher-centered approaches, which institutions had primarily been using, lead to
relatively authoritarian environments in which instructors determine what students will
learn, how students will learn, and which assessment strategies will be applied (Glasgow,
1997). The move towards PBL is a move towards generative learning, where students
learn by being actively involved and building on prior knowledge. In this approach, the
instructor is the tutor, while the student is the active knowledge seeker and problem solver.
Students work to solve problems while the instructor models the thinking and reasoning
process through questions (Barrows, 1988). The instructor (coach) guides students
(apprentices) in constructing the important knowledge from their own perspectives
(Grabinger & Dunlap, 1995; Lave & Wenger, 1991). With this strategy in mind, medical
schools began the quest for a new, more authentic approach.
The seeds of PBL were planted in the 1950s at Case Western Reserve, but
Howard Barrows officially introduced the methodology at McMaster University in Ontario,
Canada in the late 1960s (Boud & Feletti, 1991; Delisle, 1997; Magnussen, Ishida, & Itano,
2000). Although PBL was originally employed as a medical school approach, it has
become increasingly accepted in other fields, such as architecture, business, computer
science, economics, education, engineering, law, math, science, social work, and
throughout all levels of education (Barrows & Kelson, 1993; Boud & Feletti, 1991; Bridges
& Hallinger, 1997; Howard, 1999; Maxwell, Bellisimo, & Mergendoller, 2001; Myers & Botti,
1998; Stepien & Gallagher, 1993).
PBL Defined
PBL is an instructional methodology that situates students at the center of the
learning process. While in charge of their own learning, students collaboratively
investigate and solve authentic problems, constructing their own knowledge (Delisle, 1997;
Grabinger et al., 1997; Stepien & Gallagher, 1993). Intentional learning is promoted in this
learning process through questioning, self-reflection, and metacognition (Barrows, 1988;
Bereiter & Scardamalia, 1989; Boud & Feletti, 1997). Students manage themselves and a
variety of resources while identifying what they already know, do not know, and need to
know (Barrows, 1985).
Because PBL comes in many forms and sizes, some refer to it as an educational
strategy rather than a method (Walton & Matthews, 1989). The models range from those
that employ fully integrated PBL curricula throughout the institution to those that integrate
PBL exercises into a course along with other instructional methods (Anderson, 1991;
Armstrong, 1991; Barrows 1994; Ryan & Little, 1991; Stepien & Gallagher, 1993).
Regardless of labels and types, however, most models mirror the scientific method and
share some common ground (Delisle, 1997). One portion of that common ground is the
problem, the learning issue itself. This study examines a model in which PBL is fully
integrated into a single course.
According to Kelson (personal communication, January 6, 2003), the problem
should serve the process; the process should not be altered to fit the problem. This
means that suitable, ill-defined problems do not provide much prior information, forcing
students to inquire about and work for the necessary information. If too much information
is provided in the problem, students do not learn how to assess their learning needs and
how to obtain information (Lovie-Kitchin, 1991). In this situated learning environment, the
problem should mirror one that could realistically occur in the real world and should be
given prior to any instruction (Grabinger et al., 1997; Hmelo, 1998; Mergendoller, Maxwell,
& Bellisimo, 2000). Problems that lack relevance lead students to store the new
information in such a manner that they will not access it in their daily lives (Barrows &
Kelson, 1993). Authentic PBL problems energize the learning process as they begin to
set an authentic context that: (a) creates relevance for students in their everyday lives to
encourage a sense of ownership for their learning, (b) helps students gain more in-depth
knowledge to transfer to future situations, and (c) fosters collaboration through the sharing
and discussion of information (Grabinger & Dunlap, 1995). Further, generalization of
knowledge is essential, as Dewey (1910/1997) argued: "without recognition of a principle, without generalization, the power gained cannot be transferred to new and dissimilar matter" (p. 212).
The PBL Process
Prior to the start of the learning process, students are usually formed into
collaborative learning teams. Optimally, these teams should be composed of a handful of students, preferably five to seven, to ensure that everyone has ample opportunities to
contribute to the process and learn from other students (Barrows, 1985; Lovie-Kitchin,
1991; Walton & Matthews, 1989; Woods, 1994). The PBL process can then progress in earnest, with a review after each stage to confirm that students understand what they have learned and what they still need to learn (Walton & Matthews, 1989). There are four phases of the PBL process:
1. Initial problem assessment.
2. Self-directed study.
3. Problem follow-up and solution.
4. Performance and assessment (Barrows, 1985; Barrows & Kelson, 1993).
These phases were used in this study.
Phase 1. At the beginning of Phase 1, the team must identify the problem in a
scenario and develop multiple hypotheses for a solution (Barrows, 1985; Boud & Feletti,
1991; Foster & Gilber, 1991; Woods, 1994). Students maintain the authentic context
throughout the phase and beyond by situating themselves in the process and conducting
themselves as professionals. During this phase, which occurs prior to any instruction or
studying, the team determines what pertinent information they already possess and notes
potential solutions, based on their prior knowledge (Boud & Feletti, 1991; Ross, 1991;
Stepien & Gallagher, 1993; Woods, 1994). This is a divergence from the traditional notion
that students must establish a firm knowledge base prior to solving problems in the
associated discipline (Barrows, 1985).
Novices tend to try to apply all of their prior knowledge in this process and the tutor
must step in to refocus them in an effort to keep the process clear and fresh. Equally,
team members should note and consider all ideas generated by students, regardless of
how irrelevant or invalid those ideas may seem at the time (Delisle, 1997). During this
brainstorming session, team members must not pass judgment on others' inputs, but must,
instead, elaborate upon suggestions of others that they believe have merit (Roth et al.,
1996).
Next, teams identify what information they need to learn in order to devise an
effective solution and test hypotheses (Barrows, 1985; Boud & Feletti, 1991; Foster &
Gilber, 1991; Woods, 1994). Once again, the team should note and consider all ideas
generated by fellow students and not flippantly discard those that initially may appear
inconsequential or invalid (Delisle, 1997). During this stage, the team lists all learning
issues they could neither verify nor agree upon as being sufficiently answered (Delisle,
1997; Foster & Gilber, 1991). According to Delisle (1997), this list focuses the team on
what they actually need to know so that they do not blindly seek information in an unorganized, thoughtless manner. Once they decide what information they are lacking,
the team begins Phase 2 to design a plan to gain that information.
Phase 2. The second phase focuses on self-directed learning. At the onset of this
phase, the team sets objectives and allocates resources among group members (Barrows,
1985; Boud & Feletti, 1991; Woods, 1994). Students learn better through this discovery of
knowledge than through traditional methods, such as lecture and assigned readings,
because they determine their own learning strategies (Armstrong, 1991). During this
stage, team members assign themselves learning issues that were determined in the
previous stage (Delisle, 1997; Foster & Gilber, 1991). They then decide which research methods to use, including which experts they will contact, which written resources they will review, and where they will search for those resources (Delisle, 1997; Roth et al., 1996). It is
essential for experts to know where to find appropriate resources for their field (Walton &
Matthews, 1989). Then, individual research can begin.
A period of self-directed learning follows in which students individually research
their assigned learning issues (Foster & Gilber, 1991). Self-directed learning is essential to
professionals, particularly in the current climate of rapid technological and social change.
Dunlap (1996) found that PBL enhances self-directed learning, which is an essential
component of lifelong learning. Lovie-Kitchin (1991) asserted that students often felt
compelled to do the research because of peer pressure and usually came prepared for the
follow-on session. After students conduct individual research, they share it with their team
in Phase 3 (Barrows, 1985; Boud & Feletti, 1991; Woods, 1994).
Phase 3. In the third phase, the team follows up on the problem and devises a
solution. To commence, when the team reconvenes, they first discuss the resources they
used. Students evaluate and relate the information they learned as it applies to their
established hypotheses, focusing on relationships and causes (Roth et al., 1996). A major
benefit of PBL is that students learn from each other and from the framework; students
learn new information well enough to clearly and effectively communicate it to their team
(Woods, 1994). In addition, they develop lifelong learning skills by reviewing the methods
and resources they used and assessing those methods and resources (i.e., usefulness and
currency) (Barrows & Kelson, 1993; Delisle, 1997). One side benefit for the tutor is that
students sometimes discover new information and resources that the tutor has never
previously seen or considered (Lovie-Kitchin, 1991). Another benefit is that this sharing
provides the other students with valuable insight into a plethora of resources (Barrows,
1994).
Given this new information and insight, the team then eliminates and/or adds
hypotheses (Delisle, 1997). The team revisits the problem, this time as a panel of experts.
This cycle of hypothesizing, self-directed learning, and sharing continues until the team
decides that they have enough information to make informed decisions about a solution
(Delisle, 1997; Foster & Gilber, 1991). Each student synthesizes prior knowledge with new
knowledge and then votes for the solution they decide is most appropriate (Barrows &
Kelson, 1993; Delisle, 1997). After coming to a consensus on a single solution to the
problem, the team implements that solution (Barrows, 1985; Boud & Feletti, 1991; Woods,
1994). This promotes critical thinking and decision-making, whereby students probe,
evaluate, assess, and resolve the processes and benefits of various solutions to make an
informed decision (Delisle, 1997; Woods, 1994). The team is then ready to present their
findings in Phase 4.
Phase 4. The final phase of the PBL process is the final product or performance,
followed by an assessment. The resultant product maintains the authentic learning context
of the process and takes the form of presentations, papers, art works, visual products,
performances, debates, photography products, or models (Barrows & Kelson, 1993;
Grabinger & Dunlap, 1995).
After implementing the solution, students evaluate its success (Roth et al., 1996).
The team assesses and reflects on their new knowledge, the solution, and the process
(Barrows, 1985; Boud & Feletti, 1991; Woods, 1994). They explain the basic mechanisms
of the problem, specify how confident they are of their hypothesis, list all pertinent data that
is missing which might better support the hypothesis, describe what data are unexplained,
and state alternative hypotheses (Barrows, 1985). When students present their findings
they also note what they learned, relate their findings to other similar problems they have
encountered or heard about, and explain the general principles they discovered (Barrows,
1994).
Students assess themselves, each group member, and the problem itself (Delisle,
1997). Kelson (2000) suggests that students assess themselves and each other in the
following domains: "knowledge, problem solving, self-directed learning, . . . [and] collaboration" (p. 2). Following assessments by each student, the tutor completes the
process by providing his or her assessments of each individual (Kelson, 2000). Although
the formal assessment does not take place until the end of the process, assessment is
integrated into the entire PBL process as students and tutors are constantly assessing
while simultaneously developing critical appraisal skills (Barrows & Kelson, 1993; Delisle,
1997). Students are only one focus of this process. It is also important to examine the role
of the PBL instructor: the tutor.
The Tutor Role
In sharp contrast to the instructor role in a didactic environment, which is the role
of subject matter expert and dictator of knowledge, the instructor role in a PBL environment
is that of tutor and facilitator (Barr & Tagg, 1995; Roach, 1996; Stage et al., 1998).
Throughout the PBL process, the tutor coaches students in metacognition by modeling
thinking processes and strategies through asking direct questions of the students
(Barrows, 1988; Grabinger & Dunlap, 1995; Stepien & Gallagher, 1993). However, these
questions and inputs neither articulate opinions nor provide details that may lead directly to
solutions (Barrows, 1988; Grabinger & Dunlap, 1995). Tutors, instead, encourage students
to discover and admit what knowledge they lack; the student who recognizes that he or
she lacks information is more motivated to learn it (Barrows, 1988).
Barrows (1994) outlined some additional steps that the tutor takes in order to
establish an effective PBL environment. The tutor first creates an open environment in
which students contribute their ideas and concerns. Next, the tutor continually focuses
student activities and discussions to help them remain on task. The tutor also persuades
students to state their suggested solutions at intermediate steps, as is often necessary in
the real world. Finally, the tutor provides the team with time and resources for students to
work both individually and in groups. Now that we know the PBL process and individual
roles, do we know what the students are to learn from this methodology?
Outcomes of PBL
PBL advocates assert that the methodology leads to several outcomes. The four
major educational outcomes are: (a) substantial, usable knowledge; (b) essential self-
directed and lifelong learning skills; (c) successful collaboration skills; and (d) refined
problem-solving abilities (Barrows & Kelson, 1993; Delisle, 1997; Engel, 1991; Walton &
Matthews, 1989). The fourth outcome, problem solving, is a necessary life ability, and
any methodology that can increase this ability is worthy of consideration. Dewey
(1910/1997) advocated that problem solving is one of the important forces behind the
generation of thinking abilities, while Jensen asserted that challenging problem-solving
experiences are "the single best way to grow a better brain" (1998, p. 35). The following
section reviews this PBL outcome in detail.
Problem Solving
Everyone uses problem-solving abilities in their daily lives. As the magnitude and
complexity of problems in the world rapidly increase, successful problem-solving abilities
become more and more important. Humans begin problem solving by age two and, from
that point, regularly demand explanations in order to make sense of the world in which they
live (Jensen, 1998; Jones, 1998).
Problem Solving Defined
Problem solving is a metacognitive ability that involves thought, reflection,
consideration, and review (Barrows, 1988). It is the process of generating and testing
hypotheses surrounding potential solutions for an obstacle or constraint by drawing
relationships and synthesizing information (Krynock & Robb, 1999; Marzano, Pickering, &
Pollock, 2001). The method requires the problem solver to examine and make educated
decisions along an unknown path toward solving a problem (Bereiter & Scardamalia, 1989;
Polya, 1968; Walton & Matthews, 1989). Some theorists have hypothesized that problem
solving simply entails the process of finding the answer to any question (deBono, 1990)
and that it involves "virtually all aspects of existence" (Wu et al., 1996, p. 2), whereas
others are more restrictive, contending that reasoning must be utilized in the process
(Kelson, 2000). As this study focuses on solving ill-defined problems, this chapter will be
limited to a review of ill-defined problem solving.
In his statement that "since mastery of the bodily organs is necessary for all later
developments, such problems are both interesting and important, and solving them
supplies a very genuine training of thinking power," Dewey (1910/1997, p. 158) suggested
that humans naturally develop problem-solving abilities from childhood. Bereiter and
Scardamalia's (1989) ideas parallel Dewey's in their suggestion that problem solving is an
intentional learning technique that can be utilized in all learning situations, from writing and
reading to theory building and reasoning. The results of their studies indicate that adults
treat learning as a problematic process in achieving a goal through their use of problem-
solving scaffolds or stepping stones. Once novice problem solvers practice and eventually
master this scaffolding process, they can successfully transfer this ability to other
disciplines.
As should be expected, novices and experts use different processes to solve
problems. Because novices are unaccustomed to new situations and have no prior
experience with problem-solving process and systems, they must work problems
backwards, using generalized ideas and subgoals to eventually reach a solution (Bryson et
al., 1991; Hegarty, 1991). Novices use linear paths toward a solution, overlooking the
various implications along the way (Funke, 1991). A study by Voss, Wolfe, Lawrence, and
Engle (1991) suggests that novices fail to specify many goals or constraints and offer
few, if any, critical evaluations of their posed solutions. These novices' main focus was to
offer several plausible solutions. The certainty they have about the successfulness of their
solutions is quite low (Hegarty, 1991).
Experts, on the other hand, start with the problems and work forward (Hegarty,
1991). They recognize situations similar to those previously experienced and are able to
apply previously used strategies and paths, along with their knowledge base, to obtain a
solution (Bryson et al., 1991). Mayer (1977) calls this process "chunking." Instead of using
linear paths as do novices, experts use webs or schemes to incorporate goals and
constraints (Funke, 1991; Hegarty, 1991). Voss et al. (1991) note that experts offer few
alternatives and do not give their alternate solutions much credence. Their studies indicate
that experts are generally highly assured by their top solution. Using the scaffolding
process and critical evaluation, experts gradually eliminate potential alternatives.
Mayer (1977) offers the definition of problem solving that best fits the purpose of
this study of the impact of PBL on problem-solving ability. He asserts that problem solvers
break problems down into more manageable parts. They then rearrange those parts in
order to relate each part to the others. Problem solvers develop solutions, Mayer asserts,
by combining this restructuring with both productive (creating new organizations) and
reproductive (utilizing previously-learned organizations/solutions) thinking. In addition to
problem solving ability, it is important to examine problem-solving confidence.
Problem-Solving Confidence Defined
Problem-solving confidence is the amount of certainty that subjects have in their
abilities to solve problems successfully (Heppner & Petersen, 1982). Although the
measurement of confidence is not as good a predictor of problem solving as is a
measurement of ability itself (Monastersky, 2005), confidence is a measurable indicator
that can predict learning and ability (Zimmerman, 2000).
Several psychology studies have shown that high problem-solving confidence is a
reliable indicator of problem-solving success (Cassidy, 2002; Esposito & Clum, 2002;
Heppner & Lee, 2002). In the education realm, Bonner (2000) found that self-confidence
was one of the three most significant influences on student academic success, along with
intelligence and determination. In his study, elevated self-confidence led students to
successfully accomplish assignments, master course material, and acquire new skills.
Another study (Yong, 1994) found that gifted and talented students who were academically
successful had high levels of self-confidence. Of course, overconfidence can be
detrimental and should be monitored; for example, it can lead to complacency. Further,
Multon, Brown, and Lent's (1991) meta-analysis reveals a correlation between low
self-confidence and poor grades. Given the connection
between confidence and performance, problem-solving confidence measurements can
provide reliable indications of problem-solving ability.
The Problem-Solving Process
Researchers have divided the problem-solving process into five stages. The first
stage is to establish an orientation to the problem (Heppner & Petersen, 1982). In this
stage, problem solvers must determine the facts surrounding the problem and chart a
course that they plan to follow (Feldhusen & Treffinger, 1977). Characteristics of both the
problem and the problem solver can affect the problem-solving process (Wu et al., 1996).
Because our biases help influence the way in which we analyze problems and develop
solutions, Jones (1998) suggested that problem solvers first examine their biases that may
potentially influence their problem-solving process. This aids problem solvers in making
better informed, less-biased decisions about the path they will take.
The second stage, identifying and defining the problem, is also necessary to
ensure the problem solver pursues an effective path toward a solution (Cho & Jonassen,
2002; Feldhusen & Treffinger, 1977; Heppner & Petersen, 1982). During this stage,
problem solvers identify their objectives and determine the constraints and obstructions
that might be causing the problem (Cho & Jonassen, 2002). Brainstorming is important to
the process in order to classify important aspects of the problem and then define
associations and relationships (Doolittle, 1995; Feldhusen & Treffinger, 1977; Roth et al.,
1996). Doolittle (1995) asserted that there are two methods, flexibility and fluency, of
constructing potential solutions in the problem-solving process. Flexibility is the
construction of a diversity of solutions, while fluency is the construction of numerous
solutions, in spite of their diversity. His research suggests that practicing these cognitive
flexibility classification and association techniques can help lead to improvements in critical
thinking and creativity, which are both vital to effective problem solving. Problems can
often be complex and overwhelming. Moursund (2000) suggests one effective method of
reducing this complexity by breaking problems down into subparts, which can be much
more manageable and easier to solve.
Problem identification can also be promoted by organizing content around theories
and concepts rather than plain facts (Howard, 1999; Marzano et al., 2001). Research
supports the notion that novices conceptualize problems less effectively than do experts.
One key study (Chi, Feltovich, & Glaser, 1981) found that expert physicists categorize
problems according to major principles and concepts, while novices categorize them
somewhat superficially, according to exterior physical features. An effective strategy for
conceptualizing is to match, combine, group, order, elaborate, dissect, challenge, and
change perspectives (Roth et al., 1996). Detailed, effective conceptualization helps lay the
groundwork for the development of hypotheses.
Devising multiple alternative hypotheses is necessary in the third stage (Feldhusen
& Treffinger, 1977; Heppner & Petersen, 1982; Roth et al., 1996). The accuracy of
hypotheses is essential to successful problem solving. A study by Chi, Glaser, and Farr
(1989) found that professional physicians examine interrelationships between concepts
and develop, refine, and test multiple hypotheses simultaneously, while novices typically
examine each hypothesis individually. One of the most important requirements in problem
solving is to build on personal knowledge and the work of others (Moursund, 2000). To
develop hypotheses, problem solvers should access prior knowledge and use new
knowledge (from experts and other resources) to generate ideas and identify potential
solutions (Cho & Jonassen, 2002; Feldhusen & Treffinger, 1977). After developing
potential solutions, problem solvers must make decisions.
The fourth stage is to make judgments and decisions about which hypotheses will
most likely be successful (Cho & Jonassen, 2002; Feldhusen & Treffinger, 1977; Heppner
& Petersen, 1982). Prior to making a decision about which solution to choose, problem
solvers should revisit characteristics of their problem-solving goal in order to ensure they
remain on track (Feldhusen & Treffinger, 1977). Studies indicate that successful problem
solving experts are effective in their logic and reasoning strategies (Feldhusen & Treffinger,
1977; Norman, Trott, Brooks, & Smith, 1994; Patel & Groen, 1986). Cho and Jonassen
(2002) state that problem solvers must produce reasoned arguments to support their
decisions on solutions. They suggest that scaffolding improves the quality of arguments
produced. This scaffolding can also aid in the determination of further courses of action,
namely how to apply the chosen solution (Cho & Jonassen, 2002; Feldhusen & Treffinger,
1977).
This leads to the final stage: evaluation and testing of the solution (Feldhusen &
Treffinger, 1977; Heppner & Petersen, 1982). When testing the effectiveness of a solution,
problem solvers should analyze and assess the outcomes and explain why the solution did
or did not work (Cho & Jonassen, 2002; Feldhusen & Treffinger, 1977). If the initial
solution was unsuccessful or insufficient, the problem solver should then devise another
solution, using lessons learned, and repeat the process until a successful solution is found
(Cho & Jonassen, 2002).
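For readers who think procedurally, the five stages reviewed above can be sketched as a simple generate-and-test loop. This is only an illustrative abstraction of the cited literature, not a model proposed by any of those authors; all function names are hypothetical placeholders.

```python
# Illustrative sketch of the five-stage problem-solving process reviewed above.
# The stage functions passed in are hypothetical placeholders, not instruments
# or models from the cited literature.
def solve(problem, orient, define, hypothesize, choose, test, max_attempts=5):
    facts = orient(problem)                       # Stage 1: orientation to the problem
    goal, constraints = define(facts)             # Stage 2: identify and define
    for _ in range(max_attempts):
        options = hypothesize(goal, constraints)  # Stage 3: devise alternatives
        solution = choose(options)                # Stage 4: judge and decide
        if test(solution):                        # Stage 5: evaluate and test
            return solution
        # An unsuccessful solution sends the solver back through the cycle,
        # carrying lessons learned (Cho & Jonassen, 2002).
    return None
```

The loop makes the iterative character of the final stage explicit: evaluation either ends the process or restarts it.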
Summary
Problem solving is an important ability. Early in the 20th Century, researchers and
practitioners began to examine the importance of the ability. Dewey (1910/1997) urged
instructors to encourage problem solving by presenting students with "typical problems to
be solved by personal reflection and experimentation, and by acquiring definite bodies of
knowledge leading later to more specialized scientific knowledge" (p. 168). The need for
successful problem-solving abilities transcends educational institutions, however, and
extends to corporate settings.
Businesses and organizations are demanding that educational institutions provide
students with effective problem-solving abilities, as workers at even the lowest levels are
required to make decisions on increasingly complex problems (Barrows & Kelson, 1993;
Holt & Willard-Holt, 2000; Roth et al., 1996; Wu et al., 1996). Private and government
employers and employees must be adept at problem solving to achieve success in a world
in which knowledge bases, tools, and skills are constantly changing and organizational
downsizing and economic instability are a reality (Grabinger et al., 1997).
PBL provides educators with an avenue to help students gain the problem-solving
abilities that employers are seeking. PBL activities are excellent constructivist
opportunities for students to collaboratively take responsibility for their own learning and
direct their learning to suit their interests and needs, which, in turn, make them more
marketable in the workplace (Alley, 1996; Bonwell & Eison, 1991; Ediger, 2001; Glasgow,
1997; Landis et al., 1998). PBL combines the elements of student-centered learning,
intentional learning, and collaborative learning into a methodology that effectively promotes
learning and the application of knowledge.
Students feel a need to experiment; they have an intellectual curiosity that they
constantly seek to satisfy through doing, not simply listening or reading (Dewey,
1910/1997). Students become more engaged when presented with relevant challenges,
and group problem-solving activities can help students to apply multiple concepts to
complex questions (Glasgow, 1997; Grabinger & Dunlap, 1995; Landis et al., 1998).
Researchers suggest that problem-solving abilities can be learned and refined through
PBL, but the literature contains little evidence to support those claims (Howard, 1999;
Roth et al., 1996; Stepien & Gallagher, 1993; Woods,
1994). Several prominent reviews and a meta-analysis hint at the lack of improvement in
problem-solving abilities after PBL experiences (Albanese & Mitchell, 1993; Colliver, 2000;
Vernon & Blake, 1993). However, these analyses only address clinical reasoning abilities
in medical fields, not problem-solving abilities applied in other domains. This study will
investigate problem solving applied to the social sciences of geography and political
science.
This review of the literature helps lay the foundation for the next chapter, which
discusses the study's methodology. The following chapter explains the research
questions, the studys design and sampling procedures, and independent and dependent
variables. The chapter then concludes with an examination of the instruments and data
collection and analysis procedures.
CHAPTER 3
METHODOLOGY
Introduction
The United States Air Force Academy (USAFA) is an appropriate and excellent
testing ground for a study of problem solving, due to the importance that the institution
places upon it. One of the five educational outcomes USAFA strives for is officers who
can "frame and resolve ill-defined problems" (USAFA, 2005). USAFA encourages
instructors to address problem solving in classes and the school's end-of-course critique
has students assess the value of problems raised in class (Center for Educational
Excellence, 1995). Because of their age and limited experiences, most USAFA students
are novice problem solvers. The Air Force needs experts who can successfully solve ill-
defined problems. By their nature, military operations are often subject to confusion,
inconsistencies, and constant change; no checklist can be developed to solve every
possible problem that may arise in the battlespace. As such, military members must be
taught effective problem-solving strategies.
Does problem-based learning (PBL) actually improve student problem-solving
ability? This study attempts to answer that question by examining three specific research
questions:
1. To what extent do students who have had a PBL-based course differ from
students who have had a didactic course (lecture and discussion) in terms of (a)
confidence in their problem-solving abilities and (b) their actual problem-solving
abilities?
2. To what extent do students who are engaged in a PBL-based course change
during the semester in terms of (a) confidence in their problem-solving abilities
and (b) their actual problem-solving abilities?
3. Do students engaged in a PBL-based course acquire confidence in their
problem-solving abilities at a different pace than students engaged in a didactic
course (lecture and discussion)?
The research hypotheses are:
1. In the population, students who have taken a PBL-based course have (a) more
confidence in their problem-solving abilities and (b) more successful problem-
solving abilities than do students who have not taken a PBL-based course.
2. In the population, students engaged in a PBL-based course (a) increase
confidence in their problem-solving abilities throughout the semester and (b)
improve their problem-solving abilities throughout the semester.
3. In the population, students who have taken a PBL-based course acquire
confidence in their problem-solving abilities more so than students who have
not taken a PBL-based course.
This chapter provides a detailed picture of the study's methodology. It begins with
a description of the design of the study. Following that, the subject and sampling
procedures are outlined. Next, the academic setting is illustrated to provide insight into the
environment. Following that are definitions of the independent and dependent variables
and the instruments. The chapter then concludes with an outline of the data collection and
analysis procedures.
Design
This study used a quasi-experimental design in which PBL instructional methods
were compared with traditional, didactic instructional methods in terms of students'
problem-solving confidence and ability. The experimental group experienced purely PBL
instruction while the control group experienced only didactic (primarily lecture) instruction.
Control groups such as those established for this study are commonly used to demonstrate
the similarities between the experimental group and others when testing the effect of a
treatment (Krathwohl, 1998; Vogt, 1993).
The design of the study is graphically depicted in Figure 3.1. Circles (O)
represent measurements. The six subscript numbers (1 through 6) represent the problem-
solving confidence measurement while the two subscript letters (A and B) represent the
problem-solving ability measurement. XPBL designates PBL instruction while XDI
designates didactic instruction. Both the control and the experimental groups were
administered the same tests during the same periods throughout the study.

O1/A  XPBL  O2  XPBL  O3  XPBL  O4  XPBL  O5  XPBL  O6/B
O1/A  XDI   O2  XDI   O3  XDI   O4  XDI   O5  XDI   O6/B
-------------------------------------------------------->
                          Time

Figure 3.1 EXPERIMENTAL DESIGN FOR MEASURING PROBLEM SOLVING.
The only differences between sections were the treatments (educational methods).
Both treatments were ongoing throughout the semester, with the experimental group
undergoing a series of PBL exercises and the control group undergoing didactic
instruction. Each group met 42 class periods, with each period lasting 50 minutes.
Classes met on alternating weekdays (e.g., Monday, Wednesday, and Friday of one week
and Tuesday and Thursday of the following week). Both the experimental group and the
control group met the same days, with the two experimental sections meeting at 1:10 pm
and 2:10 pm respectively and the control section meeting at 3:10 pm.
Subjects and Sampling Procedures
This study was conducted throughout the spring semester of 2004 and involved 69
students (undergraduate military academy students) from a variety of academic majors.
This is a sufficient number of subjects for a study of this scope, particularly given the
constraints of the institution. Class sizes at USAFA range from 6 to 76 and the average
instructor teaches one to two sections of two different courses. It was essential to have the
same instructor teach all sections in this study in order to control for instructor variability
between sections. Geography classes are limited to 26 students per section and no
instructor teaches more than three sections. Given these limitations, the largest sample
size possible was 78.
The majority of subjects were cadets third class (sophomores), with some cadets
second class (juniors) and first class (seniors). All were enrolled in a four-year academic
and military training program to receive a Bachelor of Science degree and to become
commissioned officers in the United States Air Force. Subjects were drawn from three
sections (24 in each of the two sections comprising the experimental group and 21 in the
section comprising the control group) of a geopolitics course called Social Science 112:
Geopolitics. Students were informed of the study on the first day of the semester and
asked to voluntarily participate. They were also provided with a consent form to sign, if
they chose to participate. Although no incentives were offered, 67 of the 69 students (97
percent) volunteered to be subjects and all volunteers participated in the study.
The course is mandatory for all USAFA students and subjects were randomly
assigned to course sections. The subjects simply elected to take the course during spring
semester, but did not have any influence upon the section in which they were enrolled.
USAFA utilizes a computer program, called CAMIS, which deconflicts schedules and
randomly assigns students to available sections of their requested courses. A two-tailed t-
test of the grade point averages from both the experimental and the control groups was
conducted to demonstrate random assignment and further establish the robustness of this
experiment. The difference between the means was not statistically significant (t=.030,
df=67, p=.86), consistent with the hypothesis that the subjects were randomly assigned.
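The comparison above is a standard two-sample t-test on group means. As a hedged illustration, assuming the pooled (equal-variance) form of the test, the statistic can be computed as follows; the GPA values here are hypothetical, not the study's data.

```python
import math
from statistics import mean, variance

def pooled_t(a, b):
    """Equal-variance (pooled) two-sample t statistic and degrees of freedom,
    of the kind used to compare group grade point averages."""
    n1, n2 = len(a), len(b)
    # Pooled variance combines both samples' spread, weighted by sample size.
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical GPA samples (illustrative only):
control = [2.8, 3.1, 2.9, 2.7, 3.0, 2.95]
experimental = [2.9, 3.0, 2.85, 2.75, 3.05, 2.95]
t, df = pooled_t(control, experimental)
# A |t| near zero at these degrees of freedom yields a large two-tailed
# p-value, consistent with no significant difference between group means.
```

The two-tailed p-value is then read from the t distribution with the returned degrees of freedom.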
The demographics of the subjects were taken from both a self-report survey and
the USAFA Registrar's Office. Approximately 75 percent of the subjects were male and
25 percent were female. Subjects ranged in age from 19 to 24, with a mean age of 20.44.
Their grade point averages ranged from 1.85 to 3.94, with a mean of 2.90. See Table 3.1
for group statistics.
Table 3.1 Experimental and control group demographics.

                       Control Group    Experimental Group
Percent Male           76%              75%
Percent Female         24%              25%
Average Age            20.89            20.24
Grade Point Average    2.92             2.90
Subjects originated from all areas of the United States. Seventy-five percent of the
subjects were European Americans, 10 percent were Hispanic Americans, 6 percent were
Asian Americans, and less than 2 percent were African Americans. One subject was an
international exchange student from Asia. Subjects were enrolled in a variety of majors, as
noted in Table 3.2. However, few of them had taken any majors courses, as cadets
primarily take core classes during their first two years. Finally, although subjects originated from a
variety of socio-economic statuses, they all attended the institution on full scholarships
(free room, board, tuition, and fees) and received a monthly stipend.
Table 3.2 Percentage of subjects in divisional majors.

Divisional Major     Control Group    Experimental Group
Basic Sciences       27.1             5.2
Engineering          35.4             26.3
Humanities           2.1              10.5
Social Sciences      25.0             52.6
Interdisciplinary    10.4             5.2
Setting and Materials
This study was conducted at USAFA, which is located in Colorado Springs,
Colorado. The study took place during the spring semester of 2004. There are some
characteristics of USAFA that distinguish the institution from other undergraduate
institutions. It is a four-year college with an enrollment of approximately 4,000, which
requires all students to take military training courses, physical education courses,
airmanship courses, and either intramural or intercollegiate sports, in addition to traditional
academic courses. Students wear uniforms and are subject to military restrictions,
policies, and laws. With few exceptions, all students must graduate within four years with a
Bachelor of Science degree and will receive regular commissions as second lieutenants in
the United States Air Force at the completion of those four years. There are no
commuters; all students are full-time residents of the campus, and their ability to travel off
campus is closely monitored and limited by local policies. Students at USAFA are very
high achievers. For example, the high school grade point average for the Class of 2010 is
3.86. See Table 3.3 for average scores of those accepted to the institution (USAFA
Admission, n.d.). Ninety percent of students attending USAFA graduated in the top fifth of
their high school class and 10 percent graduated in the second fifth (USAFA Admission,
n.d.). Student acceptance is heavily weighted toward academic achievement and aptitude;
however, extracurricular activities, such as sports, jobs, scouts, leadership positions, and
community service, are also a factor (USAFA Admission, n.d.).

Table 3.3 Ranges and average scores of students accepted to the United States
Air Force Academy.

Assessment           Range      Average
SAT
  Verbal Aptitude    520-600    569
  Math Aptitude      610-700    663
ACT
  English            25-29      27
  Reading            26-33      30
  Mathematics        26-30      29
  Science Reasoning  26-31      29
Social Science 112 focuses on geography and politics and is geared toward
military students. According to the syllabus, the course:
introduces geopolitical concepts and issues that are critical to understanding U.S.
national security. By extension, these concepts have a direct affect on the Air
Force mission. ... Geography and politics are deeply intertwined. Physical
geography helps shape the interactions between peoples and influences the
elements of power possessed by each state. Political processes create the
borders between states and [the] way power is used. All of today's major international
issues relate to geopolitics: the war on terrorism, proliferation of weapons of mass
destruction, human rights abuses, resource scarcity, failed states, environmental
pressures, and economic globalization. (Social Science Division, 2003, p. 1)
The three objectives of the course are that, through writing papers and taking
subjective and objective examinations, students should demonstrate that they:
1. Understand the basic concepts and principles of geopolitics.
2. Develop an awareness and understanding of relevant trends
and issues in geopolitics.
3. Understand the geopolitical roots of many contemporary conflicts.
[The course] provides the intellectual framework for [the] study of geopolitics and
presents a set of tools to facilitate geopolitical analysis. The [course] describes the
world as it exists now and relates the intellectual framework and the tools ... to
global issues. [The course applies] newly found knowledge by analyzing real
world problems and suggesting real world solutions. (Social Science Division,
2003, p. 1)
Both the experimental and control groups met in the same classroom: a standard
geography classroom, with two-person desks and chairs for 26 students. The
experimental group rearranged chairs around the rectangular desks in order to work more
effectively in groups, while the control group sat in rows facing the instructor at the front of
the room. The classroom was equipped with a computer with Internet connectivity, a video
player, an overhead projector, and whiteboards.
Independent Variable
The independent variable of this study was method of instruction. This variable
consisted of two levels, PBL instruction (experimental group) and didactic instruction
(control group). There were no required texts for the experimental group. The control
group had four required texts:
1. Lamborn, A. C., & Lepgold, J. (2003). World politics into the twenty-first
century: Unique contexts, enduring patterns. Upper Saddle River, NJ: Prentice
Hall.
2. Braden, K. E., & Shelley, F. M. (2000). Engaging geopolitics. Upper Saddle
River, NJ: Prentice Hall.
3. Bergman, E. F., & Renwick, W. H. (2003). Introduction to geography: People,
places, and environment (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
4. Hudson, J. C. (Ed.). (2000). Goode's World Atlas (20th ed.). New York: Rand
McNally.
After this discussion of the course requirements, a description of each method of
instruction is necessary to understand the study.
PBL Instruction
In a PBL environment, students learn lifelong skills such as intentional learning,
reflection, collaboration, communication, decision-making, metacognition, and problem
solving (Barrows, 1988; Engel, 1991; Woods, 1994). In this student-centered environment,
students must use the approaches experts use to solve ill-defined problems that parallel
problems encountered in the real world (Grabinger et al., 1997; Stage et al., 1998; Woods,
1994).
This study applied the Southern Illinois University School of Medicine PBL model
to the two experimental sections. The Southern Illinois University School of Medicine
model consists of seven distinctive steps in which the instructor, in the role of tutor, helps
facilitate student learning. These steps, as defined by Barrows (1985) and Barrows and
Kelson (1993), guide students to perform the following:
1. Determine and examine the problem.
2. Establish what they know about the problem.
3. Recognize what they need to learn.
4. Perform self-directed learning.
5. Apply new knowledge to solve the problem.
6. Synthesize and evaluate what they learned.
7. Evaluate the process and perform peer- and self-assessments.
The course instructor had no prior experience with PBL, but was trained prior to
the semester and guided throughout the duration of the study by the researcher. The
researcher, trained in PBL at a Southern Illinois University School of Medicine workshop by
Ann Kelson and Dr. Linda Distlehorst, trained the instructor in a similar manner. The
researcher attended all class meetings to observe and help ensure the instructor employed
proper PBL techniques, preserving instructional method integrity.
The PBL-based sections were each divided into four groups of five to six students
each. At the beginning of each class period, the instructor asked generalized questions of
the class to allow groups to share difficulties and successes with each other. He asked if
they were having troubles with the learning process, what valuable resources they
discovered, what concerns they had, and if they had any success stories to share. After
this class discussion, the instructor broke students into their assigned groups. Each group
gathered around a rectangular table with group members facing one another. One
member of each group took notes as their group collaborated on the problem. These
notes were recorded on 24-inch by 48-inch sheets of paper taped to the classroom walls so
that all could see them. Each group maintained its notes and referred to them during subsequent
class meetings. Each group developed hypotheses, discussed germane known
information, created a list of information they needed, and assigned individual research
projects each class period. When students returned the following period, after self-study,
they discussed relevant information and sources they had discovered and synthesized
group members' inputs prior to beginning the problem-solving process again.
The instructor facilitated each group independently every period by systematically
rotating between groups every five to fifteen minutes, ensuring they were following the
problem-solving process. He asked both direct (toward individual students) and indirect
(toward the entire group) questions about their hypotheses, the relevant information they
knew, information they needed, and research sources they had used and would possibly
use. The instructor asked students clarification questions when they were vague,
challenged students when they did not justify assertions, and encouraged participation
from all students by redirecting discussions and asking direct questions to those who were
less outspoken. Five minutes prior to the end of each period, he reminded the class to
assign research questions to all group members.
Groups each solved four problem sets throughout the semester. Problem sets
were presented in the form of an email tasking sent by their supervisor. Each set of
problems had the same basic structure and the same required end products; however,
each group was assigned a different geographic location and geopolitical issue in order to
alter the scenario while maintaining a sense of equity among the groups. For example, the
first problem set required students to recommend a relocation site for an operational United
States air base (Appendix D). Group 1 was assigned to Europe, Group 2 was assigned to
Central and South Asia, Group 3 was assigned to Southeast Asia, and Group 4 was
assigned to Central and South America.
At the conclusion of each problem set, each group presented its findings in an Air
Force-style paper or document and a formal, twenty-minute oral presentation with
Microsoft PowerPoint slides. To provide as realistic a setting as possible for the
presentations, students briefed subject matter experts who were posing as their
supervisors. Their fellow classmates attended the presentations as well, to enhance the
learning experience of all, not merely the presenters. The subject matter experts
performed real-world supervisory roles assigned to them in each problem. For example,
the subject matter expert in the first problem set was a United States Air Force (USAF)
lieutenant colonel who had experience with base relocations, which was the basis of that
problem set. At the conclusion of each briefing, the subject matter experts asked students
questions that would commonly be posed in operational USAF briefings. These questions
focused on such things as students' assumptions, findings, and justifications. The subject
matter experts also provided immediate feedback and corrected any incorrect or
misleading information provided by groups to ensure students received accurate
information.
Student grades were based on peer assessments and two group projects (a paper
and a briefing) for each of the four problems they solved throughout the semester. The
instructor graded each group paper and presentation; group members all received the
same grades. Each student then provided an assessment (Appendix E) of their own
performance and one for each of his or her group members. These assessments included
both written comments and numerical scores. These scores were then added to each
individual's group scores for an overall grade for each problem. Ideally, each student would
provide oral feedback to his or her peers as well; however, due to time constraints,
students provided only written assessments.
Didactic Instruction
The same instructor who taught the experimental group taught the control group.
In the control group, he employed didactic techniques. Didactic instruction is a
methodology in which the instructor delivers factual information primarily through lecture
and demonstration. This teacher-centered technique teaches students basic subject area
knowledge. The instructor had experience with this method of instruction, having used it
for over ten years at higher education institutions.
The instructor taught learning objectives (as outlined in the syllabus) primarily
through lecture, answering questions as they were posed. He supported his lectures with
outlines and images on Microsoft PowerPoint slides and he occasionally supplemented the
lectures with commercial video tape recordings. The instructor occasionally asked indirect
questions of the class to promote thinking, but rarely asked direct questions of specific
students. As with the PBL sections, the researcher observed throughout the study to help
ensure the instructor employed the described didactic methods. Student grades were
based on two research papers, three subjective and objective examinations, two computer
projects, and class participation.
Dependent Variables
The dependent variables of this study were problem-solving confidence and
problem-solving ability. Following are definitions of each of these variables.
Problem-Solving Confidence
Problem-solving confidence, as measured by the Heppner and Petersen (1982)
Problem-Solving Inventory, is the amount of certainty that respondents have in their
abilities to solve problems successfully.
Problem-Solving Ability
Problem solving is a set of acts involved in responding to problems in a particular
manner (Wu et al., 1996). Problem-solving ability, for purposes of this study, is the ability
to successfully complete the following stages:
1. Establishing an orientation to the problem.
2. Identifying information needed to solve the problem.
3. Devising multiple alternative hypotheses.
4. Making judgments and decisions about which hypothesis will most likely be
successful.
5. Evaluating and testing the solution.
Instrumentation
This study measured problem-solving confidence and problem-solving ability with
two separate instruments. Each instrument was a paper and pencil survey administered to
subjects during class.
Problem-Solving Confidence
A portion of the Heppner and Petersen (1982) Problem-Solving Inventory was
administered to measure student confidence in problem-solving abilities (Appendix A).
Heppner and Petersen (1982) asserted that their inventory measures three aspects of
personal problem-solving perception, as delineated by a principal components factor
analysis: (a) styles of approach and avoidance of problem solving, (b) personal control of
the problem-solving process, and (c) confidence in problem-solving ability.
Factor analysis resulted in factor loadings between .42 and .75 (all well above .3)
for the problem-solving confidence factor. Their internal consistency reliability for that
factor was reasonably high at .85. Their test-retest reliability of .85 was reasonably high as
well, although the tests were administered only two weeks apart.
Heppner and Petersen's (1982) study estimated both concurrent and construct
validity. They calculated correlations between their inventory and the Level of Problem-
Solving Skills Estimate Form. The calculations indicated a moderate, negative correlation
(ranging from -.29 to -.46). The researchers estimated construct validity by comparing test
scores of subjects with problem-solving training with subjects without problem-solving
training. According to Heppner and Petersen, the subjects who had received training
scored significantly better than those who had not. Subsequent literature indicates the
inventory is still applicable and in use (Dixon, 2000; Kruger, 1997; Wu et al., 1996).
Because problem-solving confidence is the only aspect of the inventory that is
applicable to this study, that subscale of the inventory was the only portion administered to
the subjects. The confidence subscale consists of 11 Likert-type questions with a six-point
scale for responses, ranging from Strongly Disagree (1) to Strongly Agree (6) (Heppner &
Petersen, 1982).
This survey was administered to all subjects during designated periods. To best
support the repeated measures design, it was administered at a constant interval of every
two and one-half weeks to examine if and how problem-solving confidence changed over
time in both the control and the experimental groups. Repeated measures designs are
more useful when an inventory is taken as many times as possible, without adversely
interfering with subject performance. The researcher conducted a pilot study prior to this
study and found that subjects became bored and frustrated toward the end of the semester
when the inventory was administered every two weeks. As a result, subjects did not
appear to provide accurate and honest information later in the semester. There were 42
periods in the semester. The Problem-Solving Confidence Inventory was administered at
the start of periods 2, 10, 18, 26, 34, and 42. Subjects were allowed five minutes to
complete the inventory and all completed it within the allotted time each time the inventory
was administered.
These six tests (pre-test, four intervening tests, and post-test) quantified student
confidence in problem-solving abilities. Nine of the questions were positively worded,
while two were negatively worded. High scores on the inventory suggested a good
deal of problem-solving confidence, while low scores suggested little problem-solving
confidence.
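As a sketch, the scoring just described can be expressed in code. This is an illustrative computation only; the actual inventory items, and which two of them are negatively worded, appear in Appendix A, so the item numbers used below are placeholders.

```python
# Illustrative scoring of the 11-item confidence subscale (1-6 Likert scale).
# Negatively worded items are reverse-scored so that a high total always
# indicates high problem-solving confidence (possible range 11-66).
def score_confidence(responses, negative_items):
    """responses: dict mapping item number (1-11) to a 1-6 response.
    negative_items: the negatively worded item numbers (see Appendix A)."""
    total = 0
    for item, value in responses.items():
        if not 1 <= value <= 6:
            raise ValueError(f"item {item}: response {value} outside 1-6 scale")
        total += (7 - value) if item in negative_items else value
    return total

# Placeholder item numbers stand in for the two negatively worded items.
print(score_confidence({i: 6 for i in range(1, 12)}, negative_items=(2, 5)))
```

Reverse-scoring (7 minus the response) is what makes the nine positive and two negative items combinable into a single total.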
Problem-Solving Ability
The second instrument (Appendix B) was an assessment that examined problem-
solving ability. This assessment is supported by the five stages involved in successful
problem solving, as defined by researchers. Table 3.4 illustrates how the problem-solving
stages relate to the assessment questions.
The instrument posed five questions about a problem based on a real-world
scenario that USAF officers may encounter. Subjects were provided 20 minutes to
complete the assessment and most completed it. The assessment was administered to all
students at the beginning of periods 2 (pre-test) and 42 (post-test). The pre-test served as
a covariate.
Table 3.4 Assessment questions derived from problem-solving stages.
Problem-Solving Stage | Assessment Question
Establish an orientation to and identify and define the problem (Barrows, 1985; Cho & Jonassen, 2002; Feldhusen & Treffinger, 1977; Heppner & Petersen, 1982; Krynock & Robb, 1999) | What is the problem?
Identify unknown information that must be discovered (Barrows, 1985; Feldhusen & Treffinger, 1977; Krynock & Robb, 1999) | What information do you need to help you determine the solution?
Devise multiple alternative hypotheses (Barrows, 1985; Feldhusen & Treffinger, 1977; Heppner & Petersen, 1982; Krynock & Robb, 1999; Roth et al., 1996) | What are possible solutions to this problem?
Make judgments and decisions about which hypotheses will be successful (Barrows, 1985; Cho & Jonassen, 2002; Feldhusen & Treffinger, 1977; Heppner & Petersen, 1982; Krynock & Robb, 1999) | Which is likely to be the most successful solution?
Evaluate and test the chosen solution (Barrows, 1985; Feldhusen & Treffinger, 1977; Heppner & Petersen, 1982; Krynock & Robb, 1999) | How would you evaluate and test your recommended solution to the problem?
This test of actual problem-solving ability was only administered twice (unlike
the confidence instrument) because its lengthy administration time would have detracted
from the educational experience if administered more often.
Three raters assessed the problem-solving inventories that subjects completed.
All three raters were senior USAF officers and had experience teaching and assessing
student performance. The raters collectively developed a grading rubric (Appendix C) to
provide common criteria to assess the inventories. The rubric established a point system
for each of the five questions. Each of the five questions had a potential score between
zero and four points, for a total possible assessment score of twenty. The raters reviewed
and assessed four example inventories as a group to help ascertain inter-rater reliability.
They then assessed the subjects inventories independently of one another by scoring all
five questions for each subject based on the rubric.
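The rubric arithmetic can be sketched as follows. The helper below is hypothetical (the study entered scores into SPSS) and the scores are invented; it totals each rater's five 0-4 question scores and averages the three rubric totals into a single subject rating.

```python
# Hypothetical sketch of the rubric arithmetic: five questions scored 0-4
# by each rater (20 possible per rater), averaged across the three raters.
def subject_rating(rater_scores):
    """rater_scores: one list of five question scores (0-4) per rater."""
    totals = []
    for scores in rater_scores:
        if len(scores) != 5 or not all(0 <= s <= 4 for s in scores):
            raise ValueError("each rater gives five scores in the 0-4 range")
        totals.append(sum(scores))
    return sum(totals) / len(totals)

# Invented scores for one subject from raters A, B, and C.
print(subject_rating([[3, 2, 4, 2, 1], [3, 3, 4, 2, 2], [2, 1, 3, 1, 1]]))
```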
Data Collection Procedures
The researcher briefed all students about the study during the first period of the
semester and provided each with a consent form to sign. In accordance with USAFA
Institutional Review Board policy, they were instructed to take the consent form home with
them and return with it signed, if they so chose, the next period. All students were then
administered the Problem-Solving Confidence Instrument and the Problem-Solving
Assessment at the beginning of the second period. Non-consenting students were
provided the instruments to use for their own purposes so they would not be distinguished
from consenting students. Non-consenting students were allowed to complete them,
without noting any personal identifier on the forms, or leave them blank. Subjects were
subsequently administered the problem-solving confidence inventory at the beginning of
periods 10, 18, 26, and 34. Both instruments were again given at the beginning of
period 42.
The instruments were administered at the beginning of class periods in order to
help ensure a fresher, more attentive audience. If administered close to the end of
periods, subjects may have been adversely affected by classroom activities, such as the
administration or return of graded assignments. In addition, subjects may have been more
likely to complete the forms hurriedly and without as much accuracy in order to leave class
early. To help alleviate the potential problem of subjects rushing through the inventories,
they were not allowed to leave the classroom or conduct other business if they finished
before the allotted times (five minutes for the problem-solving confidence inventory and 20
minutes for the problem-solving ability assessment).
USAFA students are required to attend all class periods unless they have written
permission from their superiors or physicians to be absent. This policy helped keep the
participation rate high, as students were rarely absent from class. Subjects who were
absent from a period in which an inventory or assessment was administered were emailed.
They were instructed to complete the instrument and either return a hard copy or email it to
the researcher. All emailed their completed inventories to the researcher.
Data Analysis Procedures
The data were entered into the Statistical Package for the Social Sciences (SPSS)
for analysis. A frequency distribution was conducted on the demographic data.
This information included means, maximum values, and minimum values to provide a
picture of the sample group to help describe the generalizability of the study group.
This study involved two independent variables: (a) method, with two levels (PBL
instruction and didactic instruction), and (b) time, with six levels measured approximately
every two and one-half weeks. The study examined two dependent variables: (a) problem-
solving confidence, and (b) problem-solving ability, both of which were measured at the
interval level. The study required three major analyses to help answer the three research
questions.
The first task was to examine the extent to which students enrolled in the PBL-
based sections differed from students enrolled in the didactic section in terms of
confidence in their problem-solving abilities and their actual problem-solving abilities. The
appropriate statistical test for this was an independent samples t-test for each assessment
(confidence and ability), which compared the experimental group's mean with the control
group's mean (Hair et al., 1998; Vogt, 1993).
The second task was to examine the extent to which students enrolled in the PBL-
based sections changed their problem-solving confidence and abilities. The appropriate
statistical test for this was a paired samples t-test, which compared pre-test means with
post-test means (Hair et al., 1998; Vogt, 1993).
The final task was to examine the interaction between method and time when
comparing the two groups in terms of confidence in their problem-solving abilities. This
portion of the study involved one dependent variable, problem-solving confidence, tested
six times during the semester. The proper statistical technique for this task was a repeated
measures analysis of variance, to help control for individual-level variation that might have
affected within-group variance (Hair et al., 1998; Lomax, 1998; Vogt, 1993).
Summary
The purpose of this study was to reveal differences in problem-solving confidence
and ability for subjects experiencing PBL instruction (experimental group) and subjects
experiencing didactic instruction (control group). Subjects were undergraduate students in
a military geography course at USAFA. The study used a quasi-experimental design to
measure problem-solving confidence over time and problem-solving ability at the
conclusion of the educational experiences.
CHAPTER 4
RESULTS OF STUDY
Introduction
The purpose of this study was to ascertain if problem-based learning (PBL)
improves problem-solving abilities. The study examined two groups of students over the
course of a 42-class-period semester. One group experienced PBL instruction
(experimental group), while the other group experienced didactic instruction (control
group). Subjects from both groups completed a problem-solving assessment at both the
beginning and the end of the semester and a problem-solving confidence inventory six
times throughout the semester. This chapter examines the results and presents the
findings for this study's three research questions:
1. To what extent do students who have had a PBL-based course differ from
students who have had a didactic course (lecture and discussion) in terms of: (a)
confidence in their problem-solving abilities, and (b) their actual problem-solving
abilities?
2. To what extent do students who are engaged in a PBL-based course change
during the semester in terms of: (a) confidence in their problem-solving abilities,
and (b) their actual problem-solving abilities?
3. Do students engaged in a PBL-based course acquire confidence in their problem-
solving at a different pace than students engaged in a didactic course (lecture and
discussion)?
Differences in Problem-Solving Confidence between Groups
This section presents the results that answer the research question: To what extent
do students who have had a PBL-based course differ from students who have had a
didactic course (lecture and discussion) in terms of confidence in their problem-solving
abilities? Both the experimental and control groups took the 11-question Problem-Solving
Confidence Inventory (Appendix A) on the second and last class periods of the semester.
There were no instructional activities conducted during the last class period; the only other
activities students performed that class period were completing this study's Problem-
Solving Assessment and two other institution-required end-of-course surveys. The
Problem-Solving Confidence Inventory used a Likert-type scale from one to six points for
each question, for a total possible inventory score of 66.
Statistical Techniques
Individual subject scores were entered into the Statistical Package for the Social
Sciences (SPSS) and analyzed first with a single-factor analysis of covariance (ANCOVA).
An ANCOVA was the most appropriate test for determining differences between the
experimental and control groups because it helps control for extraneous variables or
differences between groups when subjects are not randomly assigned (Lomax, 1998;
Triola, 2001; Vogt, 1993). The Problem-Solving Confidence Inventory pre-test served as
the covariate, while the Problem-Solving Confidence Inventory post-test served as the
single dependent variable. Method of instruction functioned as the independent variable.
ANCOVA tests were conducted for individual inventory questions, as well as total inventory
scores. Second, individual subject scores from the experimental group were analyzed
with a t-test to determine if there was any change in the group's problem-solving
confidence.
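The ANCOVA logic can be illustrated outside SPSS. The sketch below is a minimal re-creation, not the study's code: it compares the regression of post-test on pre-test with and without a treatment term, which is the model-comparison form of a single-factor, single-covariate ANCOVA. All scores are invented.

```python
import numpy as np

def ancova_f(pre, post, group):
    """Single-covariate, two-group ANCOVA sketch: returns the F statistic
    for the group effect on post-test scores, adjusted for pre-test scores."""
    y = np.asarray(post, dtype=float)
    n = len(y)
    X_full = np.column_stack([np.ones(n), pre, group])  # intercept, covariate, treatment
    X_red = np.column_stack([np.ones(n), pre])          # same model without treatment

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    rss_full, rss_red = rss(X_full), rss(X_red)
    # One extra parameter in the full model; n - 3 error degrees of freedom.
    return ((rss_red - rss_full) / 1) / (rss_full / (n - 3))

# Invented pre/post scores; group 1 improves more than group 0.
print(round(ancova_f([1, 2, 3, 4, 1, 2, 3, 4],
                     [1.1, 2.0, 3.1, 3.9, 6.0, 7.1, 8.0, 9.1],
                     [0, 0, 0, 0, 1, 1, 1, 1]), 1))
```

This F statistic for the treatment term, adjusted for the covariate, is the quantity an ANCOVA routine reports for the method factor.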
Results
Subjects who experienced PBL instruction (experimental group) were found to
have marginally different degrees of problem-solving confidence at the end of the semester
than were subjects who experienced didactic instruction (control group). Table 4.1
illustrates that the experimental group's post-test Problem-Solving Confidence Inventory
total scores (M = 58.72, SD = 5.52) were higher than the control group's scores (M =
55.88, SD = 5.95). Additionally, the effect of the treatment on these groups was
statistically significant at the ten percent level, F(1, 62) = 3.29, p = .075.
Comparisons of the means, holding pre-tests constant, for individual questions indicate
mixed results. The control group scored higher only on question 9 ("After making a
decision, the outcome I expected usually matches the actual outcome"); that difference
was not statistically significant, however. The experimental group scored significantly
higher on questions 5 and 11 at the ten percent level, and on questions 3 and 8 at the five
percent level. Overall, the results indicate that the instructional method had only a
marginal effect on subjects' confidence in their ability to solve problems (see Table 4.1).
Table 4.1 Problem-solving confidence differences.
Treatment Question N Mean Std Dev F Stat
Experimental 1 47 5.28 .62 .05
Control 16 5.25 .45
Experimental 2 47 5.40 .65 .68
Control 16 5.25 .68
Experimental 3 47 5.26 .99 3.09***
Control 16 4.56 1.31
Experimental 4 47 5.13 .61 .118
Control 16 5.06 .57
Experimental 5 47 5.40 .65 5.21**
Control 16 4.94 .77
Experimental 6 47 5.64 .53 2.06
Control 16 5.44 .63
Experimental 7 47 5.43 .68 .40
Control 16 5.31 .60
Experimental 8 47 5.40 .68 3.00***
Control 16 5.19 .66
Experimental 9 47 4.87 .90 .02
Control 16 5.06 .85
Experimental 10 47 5.23 1.03 1.06
Control 16 5.00 .89
Experimental 11 47 5.68 .47 4.21**
Control 16 5.38 .72
Experimental Total Score 47 58.72 5.52 3.29***
Control 16 56.44 5.67
* Significant at .01 level of significance
** Significant at .05 level of significance
*** Significant at .10 level of significance
Differences in Problem-Solving Ability between Groups
This section presents the results that answer the research question: to what extent
do students who have had a PBL-based course differ from students who have had a
didactic course (lecture and discussion) in terms of their problem-solving abilities? Both
the experimental and control groups took the five-question Problem-Solving Assessment
(Appendix B) during the second and the last class periods of the semester.
Statistical Techniques
Three senior United States Air Force officers with teaching and assessment
experience, including the author, collectively created a grading rubric (Appendix C). Each of
the assessment's five questions had a potential score between zero and four points, for a
total possible assessment score of twenty. Prior to individually rating subject inventories,
the raters, as a group, reviewed and assessed four example inventories to help establish a
consistency of ratings between raters. Individual subject scores from each rater were then
entered into SPSS.
A test of inter-rater reliability was then conducted to examine the consistency of
ratings among the three raters (Patten, 1997) on both pre-test and post-test scores. Inter-
rater reliability was positive, indicating that all three raters' ratings were consistent with one
another. Two-tailed Pearson correlations between raters were positive and moderately
strong (coefficients of .653, .730, and .731 on the pre-test
and .700, .780, and .790 on the post-test). See Table 4.2 for mean scores for each rater
on each test.
Table 4.2 Problem-solving assessment reliability analysis.
Test Rater N Mean Std Dev
Pre-test A 63 11.87 3.22
B 63 12.35 2.93
C 63 7.67 3.55
Post-test A 62 10.71 3.77
B 62 10.56 3.26
C 62 7.60 3.28
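The pairwise correlation check can be sketched as follows, with invented rater scores standing in for the study's rubric totals:

```python
# Illustrative check of pairwise inter-rater agreement with Pearson's r.
# The scores below are made up; the study's data were rubric totals (0-20).
import numpy as np
from itertools import combinations

scores = {
    "A": [12, 15, 9, 14, 11, 16],
    "B": [11, 14, 10, 15, 10, 17],
    "C": [8, 11, 6, 10, 7, 12],
}
for (r1, s1), (r2, s2) in combinations(scores.items(), 2):
    r = np.corrcoef(s1, s2)[0, 1]
    print(f"{r1} vs {r2}: r = {r:.3f}")
```

Note that Pearson's r indexes relative consistency (whether raters rank subjects similarly), not absolute agreement, which is why one rater's systematically lower means need not lower the coefficients.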
Subject ratings were assigned by averaging each subject's scores from all three
raters. These scores were then analyzed with a single-factor ANCOVA. An ANCOVA was
the most appropriate test for determining differences between the experimental and control
groups because the test helps control for extraneous variables or differences between
groups when subjects are not randomly assigned (Lomax, 1998; Triola, 2001; Vogt, 1993).
The Problem-Solving Assessment pre-test served as the covariate, while the Problem-
Solving Assessment post-test served as the single dependent variable. Method of
instruction once again functioned as the independent variable. ANCOVA tests were
conducted for individual assessment questions as well as total assessment scores.
Results
Subjects who experienced PBL instruction (experimental group) were not found to
have significantly different degrees of problem-solving ability than subjects who
experienced didactic instruction (control group). As seen in Table 4.3, the problem-solving
ability scores for the experimental group (M = 9.90, SD = 3.20) were slightly higher than
those for the control group (M = 9.33, SD = 2.86). Although the experimental group's
scores were higher than the control group's scores, this difference was not statistically
significant, F(1, 57) = .65, p = .42.
Table 4.3 Problem-solving ability differences.
Treatment Question N Mean Std Dev F Stat
Experimental 1 45 2.59 .89 .45
Control 15 2.36 .91
Experimental 2 45 1.66 .85 2.01
Control 15 1.36 .65
Experimental 3 45 2.60 .89 1.86
Control 15 2.31 .97
Experimental 4 45 1.46 .81 2.39
Control 15 1.84 .82
Experimental 5 45 1.60 1.02 1.98
Control 15 1.27 1.14
Experimental Total Score 45 9.90 3.20 .65
Control 14 9.33 2.86
Additionally, comparisons of the means for individual questions revealed no
significant differences. Although the experimental group scored higher on every question
except question 4, the results indicate that the instructional method had at most a
marginal effect on subjects' ability to solve problems.
Change in Problem-Solving Confidence within the PBL Group
This section presents the results that answer the research question: To what extent
do students who are engaged in a PBL-based course change during the semester in terms
of confidence in their problem-solving abilities? The experimental group took the 11-
question Problem-Solving Confidence Inventory (Appendix A) during the second (pre-test)
and the last (post-test) classes of the semester.
Statistical Techniques
Individual subject scores were entered into SPSS and analyzed with a paired
samples t-test. The t-test was most appropriate in comparing the means of the
experimental groups Problem-Solving Confidence Inventory pre-test scores with their
Problem-Solving Confidence Inventory post-test scores to determine if their problem-
solving confidence changed (Triola, 2001).
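For illustration, the paired-samples t statistic reduces to a one-sample test on the post-test minus pre-test differences. The scores below are invented, not the study's data:

```python
# Minimal sketch of the paired-samples t statistic: the post-test minus
# pre-test differences are tested against a mean of zero.
from statistics import mean, stdev
from math import sqrt

def paired_t(pre, post):
    """Returns (t statistic, degrees of freedom) for matched score pairs."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / sqrt(n)), n - 1

pre = [52, 50, 55, 49, 53, 51]   # illustrative pre-test inventory totals
post = [58, 57, 60, 55, 59, 56]  # illustrative post-test inventory totals
t, df = paired_t(pre, post)
print(f"t({df}) = {t:.2f}")
```

Because every subject contributes both a pre-test and a post-test score, the pairing removes between-subject variation from the comparison.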
Results
Subjects who experienced PBL instruction (experimental group) were found to
have increased their problem-solving confidence by the end of the semester. This can be
seen in Table 4.4, in which the group scored higher on the Problem-Solving Confidence
Inventory post-test than they did on the pre-test. The group's post-test scores (M = 58.72,
SD = 5.52) were higher than their pre-test scores (M = 52.90, SD = 6.28). The effect of the
PBL instructional method on this group was highly significant, t(46) = 6.62, p < .001,
indicating that the PBL instructional method had a positive effect on subjects'
confidence in their ability to solve problems.
The experimental group scored higher on each question of the Problem-Solving
Confidence Inventory post-test as well. Question 6 ("Given enough time and effort, I
believe I can solve most problems that confront me") was the only question for which the
difference in means was only marginally significant (t(46) = 1.66, p = .103).
Table 4.4 Problem-solving confidence changes in experimental group.
Question N Pre-test Mean Std Dev N Post-test Mean Std Dev T Stat
1 48 4.66 .98 47 5.28 .62 4.84*
2 48 4.83 .84 47 5.40 .65 4.92*
3 48 4.83 .87 47 5.26 .99 3.07*
4 48 4.74 .71 47 5.13 .61 2.92*
5 48 4.85 .86 47 5.40 .65 3.81*
6 48 5.47 .69 47 5.64 .53 1.66***
7 48 4.91 .80 47 5.43 .68 3.77*
8 48 4.79 .98 47 5.40 .68 7.37*
9 48 4.28 .90 47 4.87 .90 4.21*
10 48 4.53 1.06 47 5.23 1.03 3.80*
11 48 5.04 .93 47 5.68 .47 4.16*
Total 48 52.90 6.28 47 58.72 5.52 6.62*
* Significant at .01 level of significance
** Significant at .05 level of significance
*** Significant at .10 level of significance
Change in Problem-Solving Ability within the PBL Group
This section presents the results that answer the research question: To what extent
do students who are engaged in a PBL-based course change during the semester in terms
of their problem-solving abilities? The experimental group took the five-question Problem-
Solving Assessment (Appendix B) during the second (pre-test) and the last (post-test) classes of
the semester.
Statistical Techniques
Individual subject scores were entered into SPSS and analyzed with a paired
samples t-test. The t-test was most appropriate in comparing the means of the
experimental group's Problem-Solving Assessment pre-tests with their Problem-Solving
Assessment post-tests to determine if their problem-solving ability improved (Triola, 2001).
Results
Subjects who experienced PBL instruction (experimental group) were not found to
have changed in their problem-solving ability by the end of the semester. This can be seen
in Table 4.5, in which the group actually scored lower on the Problem-Solving Assessment
post-test (M = 9.70, SD = 3.20) than they did on the pre-test (M = 10.44, SD = 3.00).
Additionally, the group scored lower on every post-test question, except question 4 ("Which
is likely to be the most successful solution?"), than it did on the corresponding pre-test
question. The effect of the instructional method on this group was not statistically
significant, t(44) = 1.00, p = .33, indicating that the PBL instructional method had no
significant effect on subjects' ability to solve problems.
71


Table 4.5 Problem-solving ability changes in experimental group.
Question N Pre-test Mean Std Dev N Post-test Mean Std Dev T Stat
1 45 2.70 .94 45 2.59 .89 .61
2 45 1.73 1.00 45 1.66 .85 .43
3 45 2.64 .85 45 2.60 .89 .26
4 45 1.42 .77 45 1.46 .81 .28
5 45 1.95 1.16 45 1.60 1.02 1.72
Total 45 10.44 2.99 45 9.70 3.29 1.00
* Significant at .01 level of significance
** Significant at .05 level of significance
*** Significant at .10 level of significance
Change in Problem-Solving Confidence over Time between Groups
This section presents the results that answer the final research question: do
students engaged in a PBL-based course acquire confidence in their problem-solving at a
different pace than students engaged in a didactic course (lecture and discussion)? Over
an 18-week semester, subjects from both the experimental and control groups took the
Problem-Solving Confidence Inventory (Appendix A) six times, approximately once every
two and one-half weeks, to determine instructional method trends over time. The
inventories were taken at the start of class periods 2, 10, 18, 26, 34, and 42.
72


Statistical Techniques
Individual subject scores were entered into SPSS and analyzed with a one-factor
repeated measures analysis of variance (ANOVA). A repeated measures analysis was the
most appropriate test since it is a powerful design that can determine possible interactions
between the two instructional methods and time (Lomax, 1998).
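The within-subjects (time) portion of such an analysis can be sketched as follows; a full mixed-design ANOVA with the group factor adds between-subjects terms, and the confidence scores below are hypothetical, not the study's data.

```python
def rm_anova_time(scores):
    """One-factor repeated-measures ANOVA: scores[i][j] = subject i at time j.
    Returns the F statistic for the time effect with df = (k-1, (n-1)(k-1))."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    time_means = [sum(row[j] for row in scores) / n for j in range(k)]
    subj_means = [sum(row) / k for row in scores]
    ss_time = n * sum((m - grand) ** 2 for m in time_means)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    # Residual variation left after removing the subject and time effects
    ss_error = ss_total - ss_time - ss_subj
    df_time, df_error = k - 1, (n - 1) * (k - 1)
    return (ss_time / df_time) / (ss_error / df_error), df_time, df_error

# Hypothetical confidence scores: 3 subjects measured at 3 time points
data = [[1, 2, 3],
        [2, 3, 4],
        [3, 5, 4]]
f_stat, df1, df2 = rm_anova_time(data)
```

Partitioning out the per-subject variation in this way is what gives the repeated-measures design its power relative to a between-subjects ANOVA on the same scores.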
Results
Subjects who experienced repeated sessions of PBL instruction (experimental
group) were not found to have changed in their problem-solving confidence differently over
time than those who experienced repeated sessions of didactic instruction (control group).
This can be seen in Table 4.6, in which the two groups scored relatively similarly on each
of the six tests. The interaction between instructional methods and time was not
statistically significant, F(1, 59) = 1.33, p = .27. Individual tests examining the interaction
between instructional methods and time for each question of the survey (Appendix H) also
were not statistically significant, with p values ranging from .05 to .91. These results
indicate that the joint impact of instructional method and time had little to no effect on
subjects' confidence in their problem-solving abilities.
73


Table 4.6
Problem-solving confidence changes over time.
Treatment Test N Mean Std Dev
Experimental 1 47 52.94 6.35
Control 14 53.79 4.32
Experimental 2 47 54.96 5.15
Control 14 54.21 4.08
Experimental 3 47 54.87 6.11
Control 14 53.64 4.24
Experimental 4 47 55.53 5.61
Control 14 56.57 5.50
Experimental 5 47 57.47 5.59
Control 14 57.43 5.81
Experimental 6 47 58.72 5.52
Control 14 56.57 5.32
* Significant at .01 level of significance
** Significant at .05 level of significance
*** Significant at .10 level of significance
Conclusion
This study examined the effect of PBL on student problem-solving abilities. The
first research question explored the extent to which students who experienced a PBL-
based course differed from students who experienced a didactic course in terms of: (a)


confidence in their problem-solving abilities, and (b) their actual problem-solving abilities.
No significant differences in problem-solving confidence or abilities were found.
The second research question examined the extent to which students who experienced a
PBL-based course changed during the semester in terms of: (a) confidence in their
problem-solving abilities, and (b) their actual problem-solving abilities. Students in the
PBL-based course significantly increased their problem-solving confidence, but did not
significantly change their problem-solving abilities.
The last research question investigated whether there was an interaction between instructional
method and time throughout the semester when comparing students engaged in a PBL-
based course with students engaged in a didactic course in terms of confidence in their
problem-solving abilities. There were no significant interactions between the two methods
and time.
The final chapter, Conclusions, will further explore the relationship between PBL
and problem solving. Based on the results of the inventories and end-of-course critiques,
the chapter will also consider additional potential research issues.
75


CHAPTER 5
CONCLUSIONS
Introduction
The primary purpose of this chapter is to interpret the findings for this study's three
research questions regarding the effect of problem-based learning (PBL) on problem-
solving abilities. After summarizing the findings, the chapter will identify limitations to the
study and offer suggestions for methodology improvement. Because this study raised
further questions, the chapter recommends future research regarding both PBL and
problem solving, as well as the use of PBL in the classroom. The chapter then
concludes with an addendum, which explains the positive impact of the study on the United
States Air Force Academy (USAFA).
Summary of Findings
The study used a quasi-experimental research methodology to investigate whether
PBL improves problem solving. Two groups, enrolled in the same course with the same
instructor, participated in the study. The control group experienced didactic instruction,
while the experimental group experienced PBL instruction. The study used two
measurements: an established instrument (Appendix A) measured problem-solving
confidence while an assessment created for this study (Appendix B) evaluated problem-
solving abilities.
Problem-Solving Confidence
The first research hypothesis of this study was that, in the population, students
who have taken a PBL-based course have: (a) more confidence in their problem-solving
abilities, and (b) more successful problem-solving abilities than do students who have not
taken a PBL-based course. The results of this study indicate that students enrolled in a
76


PBL-based course increased their problem-solving confidence. Students in the
experimental group scored significantly higher on the Problem-Solving Confidence
Instrument post-test than they did on the pre-test. When problem-solving confidence was
measured in subjects from both the experimental and control groups at the end of the
study, however, students enrolled in the PBL-based course scored only marginally higher
than students enrolled in the didactic course.
Problem-Solving Abilities
The results of this study did not find that PBL significantly improved problem-
solving abilities. Students enrolled in the PBL-based course did not score significantly
higher on the Problem-Solving Skills Assessment post-test than they did on the pre-test. In
addition, these students did not score significantly higher on the post-test than did students
enrolled in the didactic course.
Explanation of Results
Problem-Solving Confidence
The experimental and control groups obtained marginally different scores on the
Problem-Solving Confidence Inventory post-test. This lack of a large difference is most likely
due to the nature of the subjects. Students at USAFA are high achievers and good
students with confidence in their abilities. The average high school grade point average for
incoming students at USAFA is high; the Class of 2010's average was 3.86 (USAFA
Admission, n.d.). Students are competitively selected to attend the institution based on a
number of criteria, including academic achievement and aptitude, extracurricular activities,
participation in sports, leadership positions, and community service (USAFA Admission,
n.d.). Also, in order to be considered for enrollment at USAFA, students must either be the
child of a Medal of Honor recipient or receive an appointment from a state or federal
official. It is possible that some subjects gave themselves high scores on the inventory
77


throughout the semester because of their pre-existing sense of self-confidence in their
abilities to perform any task.
However, a valuable finding in this study was that subjects who were enrolled in a
PBL-based course scored substantially higher after the PBL experience than they did prior
to that experience, despite any previously existing inflated sense of self-confidence. This
indicates that the PBL process did in fact increase their confidence in solving ill-defined
problems. The results of the comparison of pre-test and post-test scores of subjects in the
PBL-based course substantially contribute to both the PBL and the problem-solving
knowledge bases by illustrating the powerful effect PBL can have on student confidence in
problem solving.
Problem-Solving Abilities
PBL did not appear to have an effect on problem-solving abilities. A major
drawback to the examination of these abilities, however, was the reliability and validity of
the Problem-Solving Assessment. Inter-rater reliability was tested during the study in
which three United States Air Force officers collaboratively established a rating rubric
(Appendix C) and independently rated each subject's assessment. Test results indicated
that the ratings were not reliable because the raters did not rate individual assessments
consistently with each other. This was likely due to two reasons. First, the raters were not
sufficiently trained in rating this assessment. Although they each had experience teaching
and assessing student performance, that experience did not involve ill-defined problems.
As this was a new methodology for the raters, they should have been given more training
and practice in rating the assessments. Second, the rating rubric that the raters
collectively created was neither comprehensive enough nor adequately defined, thereby
providing insufficient guidance for the raters. When creating the rubric, the raters
discussed potential answers and the value of each. However, they did not anticipate the
78


full variety of answers that subjects actually provided. As a result, when the
raters individually rated subjects' assessments, the rubric was of little assistance in
providing consistent ratings. The raters were left to more subjectively rate answers that
were not included in the rubric. Had the rubric included more stringent criteria and a wider
array of potential answers, the raters would have been more consistent in rating the
assessments. Because the assessment was found to not be reliable, validity testing was
unnecessary.
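The consistency the raters lacked can be quantified in several ways; one simple approach is the pairwise Pearson correlation between raters' scores. The ratings below are hypothetical illustrations, not the study's data.

```python
import math
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical ratings by three raters of the same five assessments
ratings = {
    "rater_a": [3, 4, 2, 5, 3],
    "rater_b": [3, 5, 2, 4, 3],
    "rater_c": [5, 2, 4, 1, 2],   # this rater disagrees with the other two
}
pairwise = {(a, b): pearson(ratings[a], ratings[b])
            for a, b in combinations(ratings, 2)}
```

Low or negative pairwise correlations, as rater_c would produce here, signal the kind of inconsistency that led the study to judge the assessment unreliable; a formal analysis would typically use an intraclass correlation coefficient instead.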
Limitations
This study, although important, had some limitations. The population sample was
drawn from a very specialized and homogeneous population: cadets at a military
institution. The subjects were not typical college or university students. First, they were
military members who had mandatory military and physical fitness duties and
responsibilities in addition to academic tasks. Traditional civilian college students have a
wider array of obligations: some have no responsibilities beyond school, some are
employed, and some have families for which to care. Second, each
subject from the sample group had to sign a commitment to become a commissioned
officer upon graduation from USAFA's four-year program. Most civilian undergraduate
students have no military or employment commitments as a result of attending school and
their institutions do not typically limit them to four years of study to obtain their degree. The
subjects of this study were under constant pressure to perform well. If they did not perform
well, they would be obligated to serve four years in the Air Force as enlisted members
rather than officers. As a result of this strain and the fact that they were high-achievers,
the subjects may have had an overinflated sense of ability and their assessment scores
may have been artificially higher.
79


The population sample size of this study was also a limitation. Due to the
institution's restrictions in class sizes and instructor loads, only 67 subjects participated.
There were 46 subjects in the experimental group and only 21 subjects in the control
group. The results of the study would have been more statistically robust and may have
been more significant had there been both a larger sample size and an equal number of
students in each group.
Near-random assignment did not shield the study from external threats to validity.
The majority of the potential limitations of this study were external threats to validity: those
relating to the generalizability of the study to other settings (Goodwin & Goodwin, 1996).
Among those originally considered limitations were threats of time, pretest sensitization,
generalization of the dependent variable, and an ambiguous independent variable. Of
these, only one proved to be a limiting factor.
The chief limitation was that of experimenter effect. If the instructor altered his
instruction because he favored one of the instructional methods over the other, he may
have adversely affected subject performance on the measurements. The researcher
focused throughout the semester on ensuring the instructor effectively employed both
instructional methods, PBL and didactic. The instructor had twelve years of collegiate-level
teaching experience using only didactic methods; he had no experience teaching PBL
methods. His sole experience with PBL was from the instruction and guidance he received
from the researcher both before and throughout the study. There was concern that the
instructor would not adequately teach or tutor the experimental section. Continuous
feedback from the researcher, however, helped control for this potential shortcoming. For
example, when the instructor failed to notice a group faltering from PBL methodology in
class, the researcher discreetly noted the problem to the instructor and provided guidance
so the instructor could put the group back on track. The researcher and instructor met
80


after every class period to discuss the PBL process, lessons learned from the day's
sessions, and plans for the next class period.
Another limitation to this study was the reliability and validity of the Problem-
Solving Assessment (Appendix B). As discussed above, inter-rater reliability testing
showed that the three raters did not score the assessments consistently with one
another. The final limitation
regarding the Problem-Solving Assessment was the number of times it was given. It could
only be administered twice, unlike the confidence inventory, due to time limitations.
Implications for Future Research
There are no established instruments with proven validity and reliability that
effectively measure problem-solving abilities. The next logical step after this study is to
create an improved instrument and accompanying rubric that successfully measure these
abilities. The follow-on to such a study would be to repeat this study with the newly
established problem-solving abilities instrument and rating rubric. Drawing upon a larger
group of subjects to increase the statistical power, using multiple instructors to help dilute
the possibility of instructor bias, and examining a more heterogeneous sample to more
accurately reflect the general populace would enhance such a study. The instructors in
this study should not only receive comprehensive training in PBL tutoring and techniques,
but also conduct at least one practice session in the role of tutor in order to become more
comfortable with the methodology.
A key learning point to arise from this study was that there was no degradation in
student satisfaction or performance. Subjects thought the PBL-based course provided
better learning experiences than did the didactic-based course. Table 5.1 includes results
from a required end-of-course survey that clearly illustrate subject preference for PBL. The
experimental group (PBL-based course) rated the learning experience higher on every item
81


of the survey (Appendixes F and G) than did the control group (didactic-based course).
The noteworthy findings were that the PBL-based course:
1. Encouraged students to express themselves and participate (Item 6).
2. Presented an intellectual challenge and encouraged independent thought
(Item 8).
3. Provided relevant and useful course content (Item 11).
4. Allowed students to learn a large amount (Item 12).
Additionally, subjects in the PBL-based course found the course as a whole (Item
10) to be much better than did subjects in the didactic-based course (5.2 versus 3.7 on a
scale of 1 to 6). Students provided unsolicited comments throughout the semester that
they enjoyed the PBL course design more than traditional, didactic designs.
Another follow-on study should be conducted to replicate the significant end-of-
course critique ratings and determine why PBL students provide higher ratings than do
students enrolled in a didactic course. This study would best be conducted using
qualitative techniques to collect more elaborate and detailed subject responses and
feedback. Additionally, a quantitative assessment would help verify that subject claims of
learning more in the PBL-based course are indeed true. For example, an assessment of
core knowledge, based on course objectives, should be administered to both a PBL and a
didactic group. Claims by subjects that they learned more are significant and noteworthy.
However, these claims must be reinforced with more robust statistical data.
82


Table 5.1
Student evaluation of teaching for experimental and control groups.
Item N Experimental Mean Std Dev N Control Mean Std Dev Significance
1 38 5.3 .77 14 4.2 1.31 .0052
2 38 5.1 .84 14 4.6 1.09 .0489
3 37 5.4 .79 14 4.8 1.05 .0355
4 38 5.2 .82 14 4.8 1.12 .0928
5 38 5.4 .72 14 4.5 .94 .0022
6 38 5.6 .69 14 4.1 1.35 .0009
7 38 5.5 .65 14 4.6 1.28 .0170
8 37 5.6 .76 14 4.0 1.04 .0000
9 38 5.3 .80 14 4.3 1.14 .0036
10 37 5.2 .86 14 3.7 1.33 .0004
11 37 5.4 .87 14 4.0 1.36 .0009
12 37 5.4 .80 14 3.6 1.39 .0001
13 37 5.4 .83 14 4.5 1.09 .0056
83


Recommendations
PBL is a well-established instructional pedagogy that should be encouraged in the
classroom for three major reasons: relevance, satisfaction, and problem solving. First,
PBL challenges students with relevant problems, and the process allows them to store
new information in a manner that they can later access in their daily lives
(Barrows & Kelson, 1993). PBL stimulates the learning process as it establishes an
authentic context that: (a) creates relevance for students in their daily lives to promote a
sense of ownership for their learning, (b) helps students expand their knowledge bases,
which they can then transfer to future situations, and (c) cultivates collaboration through
sharing and discussing information (Grabinger & Dunlap, 1995).
Not only does PBL promote relevance, but it also promotes satisfaction, leading to
better retention. The results of this study (as shown in Table 5.1) are consistent with five
meta-analyses conducted by Vernon and Blake (1993), which found that students enjoy
PBL significantly more than traditional, didactic instructional methods. Researchers have
found that higher education students who are more satisfied with instruction are much
more likely to remain enrolled in school (Patti, Tarpley, Goree, & Tice, 1993). Additionally,
there appears to be a correlation between student satisfaction and knowledge and skill
retention (Astin, 1993; Edwards & Waters, 1982).
Finally, students learn valuable problem-solving techniques. Although this study
did not conclusively demonstrate that PBL improves problem-solving abilities, it did provide
indications that it might positively affect student problem solving. Vernon and Blake (1993)
found that medical students in PBL programs performed better than medical students in
non-PBL programs during clinical performance evaluations. Although their analyses were
not generalizable to the non-medical population, their results show promise that PBL may
84


improve problem-solving ability because clinical performance involves some problem-
solving.
PBL is an excellent classroom method because students learn relevant
information, enjoy the learning environment (and thus better retain the information), and
learn to solve ill-defined problems. Because of this, students can transfer the problem-
solving abilities they learn in PBL classrooms to diverse situations in the future. Given the
importance of problem-solving skills in today's environment of complexity and globalization,
instructors should employ this technique in their classrooms.
Conclusion
This study showed that PBL can improve problem-solving confidence. Although
the study revealed that PBL does not necessarily improve problem-solving abilities, this
chapter presented some possible reasons for the results, explaining the difficulty of
measuring problem-solving abilities. This chapter also noted the limitations of the study
and offered proposals for further research and employment of PBL in the classroom.
Finally, the researcher presented some ancillary data suggesting that students consider
PBL to provide a better learning environment than didactic instruction. In summary, this
study demonstrated that PBL has the potential to improve problem-solving confidence.
Although further research needs to be conducted to determine if PBL can improve
problem-solving ability, PBL does present a positive and welcomed environment in which
students can learn.
85


Addendum
As a result of the findings of this study, the Social Sciences Division at USAFA
integrated a mini-PBL project into the course examined in this study, Social Science 112.
The course, which is required for all cadets, now requires student groups of
three or four to solve a real-world geopolitical problem in the role of Air Force officers or
State Department officials. Each group presents its findings in an official Air Force briefing
format for assessment. Following the briefing, each cadet then assesses every member of
their group with a rating form similar to the form used in this course (Appendix E). No
research on the impact of this method on student learning has yet been conducted;
however, many faculty members have noted that the project presents an authentic and
valuable learning opportunity for cadets.
Additionally, the Department of Economics and Geosciences at USAFA integrated
PBL into Geography 470, the Geography of Europe. This course uses strictly PBL
methodology. Students praise the course as an authentic experience in which they learn
more than they typically learn in didactic courses. This course will provide a test bed for
further PBL efforts at the Academy and is the next projected research project for the author
of this study.
86


APPENDIX A. PROBLEM-SOLVING CONFIDENCE INVENTORY.
Identifier:_______________
Please assess yourself on each question by circling the most appropriate response for the
scale: 1 = strongly disagree, 2 = disagree, 3 = slightly disagree, 4 = slightly agree,
5 = agree, 6 = strongly agree
1. I am usually able to think up creative and effective alternatives to solve a problem.
1 2 3 4 5 6
2. I have the ability to solve most problems even though initially no solution is immediately
apparent.
1 2 3 4 5 6
3. Many problems I face are too complex for me to solve.
1 2 3 4 5 6
4. I make decisions and am happy with them later.
1 2 3 4 5 6
5. When I make plans to solve a problem, I am almost certain that I can make them work.
1 2 3 4 5 6
6. Given enough time and effort, I believe I can solve most problems that confront me.
1 2 3 4 5 6
7. When faced with a novel situation I have confidence that I can handle problems that may
arise.
1 2 3 4 5 6
8. I trust my ability to solve new and difficult problems.
1 2 3 4 5 6
9. After making a decision, the outcome I expected usually matches the actual outcome.
1 2 3 4 5 6
10. When confronted with a problem, I am unsure of whether I can handle the situation.
1 2 3 4 5 6
11. When I become aware of a problem, one of the first things I do is to try to find out
exactly what the problem is.
1 2 3 4 5 6
87