Material Information

Title:
How a computer-based hypercard program for counseling ethics students affects their ability to apply ethical codes to counseling scenarios
Creator:
Frederick, Janet Elizabeth
Publication Date:
Language:
English
Physical Description:
xi, 188 leaves : illustrations ; 29 cm

Subjects

Subjects / Keywords:
Counselors -- Education ( lcsh )
Counseling -- Computer-assisted instruction ( lcsh )
Counseling -- Computer-assisted instruction ( fast )
Counselors -- Education ( fast )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Bibliography:
Includes bibliographical references (leaves 184-188).
General Note:
Submitted in partial fulfillment of the requirements for the degree, Master of Arts, Communication.
General Note:
Department of Communication
Statement of Responsibility:
by Janet Elizabeth Frederick.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
37759351 ( OCLC )
ocm37759351
Classification:
LD1190.L48 1996m .F74 ( lcc )

Full Text
HOW A COMPUTER-BASED HYPERCARD PROGRAM FOR COUNSELING
ETHICS STUDENTS AFFECTS THEIR ABILITY TO APPLY ETHICAL
CODES TO COUNSELING SCENARIOS
by
Janet Elizabeth Frederick
B.S., Colorado State University, 1978
A thesis submitted to the
University of Colorado at Denver
in partial fulfillment
of the requirements for the degree of
Master of Arts
Communication
1996


© 1996 by Janet Elizabeth Frederick
All rights reserved.


This thesis for the Master of Arts
degree by
Janet Elizabeth Frederick
has been approved
by
Date


Frederick, Janet Elizabeth (M.A., Communication)
How a Computer-Based HyperCard Program for Counseling
Ethics Students Affects Their Ability to Apply Ethical
Codes to Counseling Scenarios
Thesis directed by Associate Professor Samuel A. Betty
ABSTRACT
A professor at the University of Colorado at Denver,
Department of Counseling Psychology and Counselor
Education, found certain students had difficulty
completing case study exercises in her class CPCE 5330-
Professional Seminar in Counseling. A computer program,
using HyperCard software, was developed specifically for
this class. Its primary goal was to help students
analyze their own thought processes in applying
conflicting ethical codes to a set of case study choices,
and then justify their choices with documentation. The
program consists of three case study scenarios of varying
difficulty. Each case study provides a synopsis of a
counseling situation and a counseling dilemma. The
student may then select one of three options which would
represent his/her choice on how he/she would respond to
the situation. The student is then asked to justify and
document his/her decision. Once a choice has been made,
consequences of the choice are displayed and additional
choices presented.
During the Fall semester of 1995, the students from two
CPCE 5330 classes were offered the opportunity to
participate in a study to measure the effectiveness of
the program. Upon obtaining their consent, the students
were divided into control and experimental groups.
Demographic information, including gender, age, number of
classes completed in the CPCE curriculum, number of
months work experience, computer anxiety and left/right
brain hemisphere dominance, was gathered from both
groups. The control group did not use the program at all
while the experimental group did. Students in the
experimental group were provided with an orientation and
documentation on how to use the program. They were
instructed to use the program on an individual basis and
to not discuss the program with other students. A pen-
to-paper case study was used as the pretest/posttest
measure to provide the basis for the statistical
analysis. The pretest was administered at the beginning
of the semester while the posttest was issued near the
end of the semester. Results did not show the program's
contributions to be statistically significant, possibly
due to the small sample sizes. Additional replication
studies would be beneficial to further test the program's
effectiveness.
This abstract accurately represents the content of the
candidate's thesis. I recommend its publication.
Signed
Samuel A. Betty


DEDICATION
This thesis is dedicated to my parents for their love,
their encouragement of my efforts,
and their support of my aspirations.


ACKNOWLEDGEMENT
I wish to thank Dr. Marsha Wiggins Frame for allowing me
to use her students as the basis for my research. I also
wish to thank Dr. Samuel Betty and Dr. Benita Dilley for
their participation and assistance as committee members.


CONTENTS
CHAPTER
1. INTRODUCTION ................................ 1
Overview of the Research Project ............. 3
The Problem...................................16
Problem Statement ..................... 17
Constitutive Definitions .................... 17
Operational Definitions ..................... 18
Theoretical Significance .................... 20
Applied Significance ........................ 23
Hypotheses.................................. 25
Null Hypotheses ......................25
Alternate Hypotheses .................. 26
2. REVIEW OF THE LITERATURE.....................27
Areas of Study................................27
Review of the Literature......................30
The Scholars' Suggestions for Future
Research......................................43
3. RESEARCH DESIGN ................. ..... 47
Pilot Study...................................47
Population and Sampling ..................... 50
Conducting the Experiment ................... 53
Pretest ..................................... 62
Posttest.....................................65
4. STATISTICAL ANALYSIS OF THE RESULTS .... 69
Age..........................................71
Number of Classes Completed To-date in
CPCE Curriculum..............................73
Number of Months Experience ................ 75
Computer Anxiety ........................... 77
Brain Hemisphere Preference ................ 79
Pretest......................................81
Posttest.....................................84
5. INTERPRETATION OF THE RESULTS AND
CONCLUSIONS..................................95
APPENDIX
A. NEEDS ASSESSMENT ...........................106
B. LEARNER CHARACTERISTICS ........... 109
C. ENVIRONMENTAL ANALYSIS.....................111
D. LEARNING OBJECTIVES ....................... 112
E. DESIGN STRATEGY.............................114
F. SUPPORTING MEDIA............................119
G. MOTIVATION..................................120
H. ASSESSMENT STRATEGY.........................121
I. "JOE" SCENARIO...............................123
J. "TERRY" SCENARIO ........................... 131
K. "ROSE AND RANDY" SCENARIO....................132
L. EVALUATION FORM..............................138
M. CONSENT FORM.................................142
N. NOTIFICATION OF RESEARCH PARTICIPATION . . 145
O. PROGRAM ORIENTATION INFORMATION..............147
P. HELP MANUAL..................................156
Q. POSTTEST CASE STUDY..........................168
R. SUMMARY STATISTICS AND BAR CHART FOR AGE 170
S. SUMMARY STATISTICS AND BAR CHART FOR
NUMBER OF COMPLETED CPCE CLASSES.............172
T. SUMMARY STATISTICS AND BAR CHART FOR
NUMBER OF MONTHS EXPERIENCE ................ 174
U. SUMMARY STATISTICS AND BAR CHART FOR
COMPUTER ANXIETY ........................... 176
V. SUMMARY STATISTICS AND BAR CHART FOR BRAIN
HEMISPHERE PREFERENCE ...................... 178
W. SUMMARY STATISTICS AND LINE CHART FOR
PRETEST SCORES(%) 180
X. SUMMARY STATISTICS AND LINE CHART FOR
POSTTEST SCORES(%) 182
REFERENCES............................................184


CHAPTER 1
INTRODUCTION
Welcome to the Information Age! At the heart of
this information explosion is the computer. Computers
provide an accurate and timely means of storing,
retrieving and analyzing data to produce large amounts of
information. For example, "Over half the world's
scientific knowledge has been obtained since the
construction of the first electronic computer in the
1940's" (Stair, 1979, p. 6). With the introduction of
the P.C. in the early 1980s, we now find ourselves routinely in the
midst of "...tools and techniques for gathering and using
information..." (Laudon, Traver, & Laudon, 1994, p. 5).
Computers have impacted, and continue to impact, society
in a myriad of ways including our perceptions, knowledge
of the world, careers, organizations, and education.
Mandell, in his 1979 book Computers in Data Processing,
looked towards the future and prophesied:
Computers have had a significant impact on education
- one of society's largest industries. Like
business, it needs computers to store and process
large amounts of data. Recently, computers have
begun to be used in teaching as well. Computer-
assisted instruction (CAI) involves direct
interaction between a computer and a student. This
method of teaching holds promise, since the computer
can deal with a large number of students on an
individual basis (p. 11).
Tools, such as CAI, help us to "visualize our
environment, understand it, and creatively control it"
(Laudon et al., 1994, p. 5). CAI, also known as CBT
(Computer-Based Training), can be defined as "the use of
the computer for direct instruction of students"
(Edwards, Norton, Taylor, Weiss & Susseldorp, 1975, p.
147). CAI is being used as an instructional tool by
governments, businesses and schools. CAI courses have
been developed for all age ranges, and for a vast list of
professions and careers. The subjects suitable for CAI
courses are limited only by our civilization's own
knowledge base and the current state of technology!
While basic topics such as remedial math skills have long
been a favorite for CAI courses, newer and more difficult
topics are being pursued for educational software. For
example, programs are being designed to challenge a
person's analytical thinking abilities and problem
solving skills. As technology becomes more
sophisticated, more and more topics could find themselves
being developed into CAI courses.


Overview of the Research Project
A potential CAI subject emerged recently in an
Instructional Technology (IT) course at the University of
Colorado at Denver. The purpose of this IT course was to
develop a bona fide piece of instructional material for
real-world use. Our development team, of which I was a
member, was approached by a
professor from the College of Education, Department of
Counseling Psychology and Counselor Education (CPCE).
This professor routinely teaches a course called CPCE
5330-Professional Seminar in Counseling. The primary
focus of this class is to teach would-be counselors about
the ethical codes and standards which govern the
counseling profession (the Code of Ethics of the American
Counseling Association (ACA) and the Code of Ethics of the
American Association for Marriage and Family Therapy
(AAMFT)).
the course is to help students function as competent and
ethical counselors to avoid lawsuits (e.g. malpractice)
and/or state board investigations.
This class frequently uses case studies to simulate
counseling dilemmas and to challenge the students'
ability to apply the ethical codes and standards to a
variety of situations. As a result of this professor's
long-term experience teaching this course, she has found,
in general, that the students do well applying the
ethical codes to simple scenarios in which only one code
is appropriate. However, when the scenarios become more
difficult, and more than one ethical code has to be
applied or multiple codes must be weighed against each
other, not all the students are as successful. Nevertheless,
mastering these case studies is an important skill for
the CPCE 5330 students. They must be able to pass both
comprehensive and State Board exams, which follow a
format very similar to a case study, to obtain a license
to practice. Because research has shown CAI to be "an
effective tool for presenting didactic material to
counseling students" (Folger, 1990, p. 367), the
professor decided that a CAI course might improve the
learning experience of those students having difficulty
in the CPCE 5330 counseling ethics course. Such a CAI
simulation would "require the learner to apply constructs
to a 'real life' situation in order to solve problems and
make decisions" (Sampson & Krumboltz, 1991, p. 395).
Lassan explains in her article "Use of Computer-assisted
Instruction in the Health Sciences" that "this particular
type of CAI is not designed to teach content for the
first time. It is based rather upon assumptions of
certain knowledge that the student is expected to have
already gained; application of knowledge is the primary
objective" (1989, p. 15). In the case of the counseling
ethics course, hands-on computer simulations would be
developed to provide realistic counseling dilemmas to
help students develop their analytical thinking skills
while practicing the application of the ethical codes.
Using formal instructional design techniques, the
development team outlined a four-fold purpose for the CAI
simulation to be developed. First, students would be
able to practice ethical decision-making with realistic
counseling dilemmas in a non-impact environment. Second,
students could learn to make the leap from merely knowing
the ethical codes to being able to apply them to "real
life" situations. Third, students would be encouraged to
be metacognitive (Auerswald, 1985; Hoffman, 1990). That
is, to think about their thinking and to explain their
thought processes to others who may be judging the
appropriateness of their actions. Finally, they would
develop the professional skill of appropriate
documentation which would serve them well if a decision
they made at some point were to be challenged in a court
of law. Appendices A through H contain copies of all the
original instructional design documentation (needs
assessment, learner characteristics, environmental
analysis, learning objectives, design strategy,
supporting media, motivation, and assessment strategy) on
which the design and development of the computer program
were based. The formal learning objectives for the
computer program were delineated as follows:
The student should be able to apply the
appropriate ethical codes to scenarios of varying
degrees of difficulty (without the assistance of a
textbook or copy of the codes) on an individual
basis to prepare the student for the comprehensive
and State Board exams.
The student should be able to choose among
conflicting ethical codes and justify his/her
decision.
The student should be able to analyze his/her
thought process and be able to justify his/her
decisions.
The student should be able to support his/her
decisions based on appropriate documentation.
The student should be able to make more active and
independent decisions.
The next major decision was to select a software
tool. Recent developments in interactive programs have
taken an additional step towards reducing some of the
disadvantages of earlier CAI courses (Cook, 1989/90).
These newer interactive software tools provide the
ability to create branching pathways as opposed to linear
and sequential text. These branching pathways allow
learners greater control over the instructional
experience which has been shown to positively influence
retention of information and student interest (Pridemore
& Klein, 1991). HyperCard (Apple Computer Inc.,
Cupertino, CA) was selected as the tool to program the
CAI because of its ability to provide branching pathways
(Wedge, 1994) and because of the predominance of
Macintosh computers in many educational environments.
Using the instructional design documents as our
guide, the development team created a HyperCard program
containing three counseling scenarios which concentrate
on the clinical problem-solving skills required to serve
a single client or case appropriately. For each
simulation, the student role-plays as the counselor.
Using research as the foundation for our design strategy,
we constructed the program as follows (Hmelo, 1989-90):
Upon entering the program, three scenarios are
listed in menu format. This allows the learner to
select the scenario he/she wishes to view to
provide him/her greater control over the learning
experience. The scenarios are intentionally
ordered from the least difficult to the most
difficult to increase the level of challenge and
to maintain interest. The scenarios can be
repeated as many times as desired.
All information that would be available in the
clinical setting is provided to the learner in the
form of a synopsis at the beginning of the
scenario. Where pen-to-paper case studies cannot
effectively cover elapsed periods of time,
computer simulations can. The scenarios cover
time periods anywhere from one month to two years.
Because all relevant information is not presented
to the counselor at the beginning of every "real
world" counseling situation, new information is
presented to the learner throughout the
simulation, at varying points in time. In
addition, information obtained in real counseling
relationships can be vague. Therefore, at times,
the information presented to the learner is vague.
The simulations are in a dialogue format to make
the activity more interesting and engaging for the
student.
Each situation provides several branching and
interdependent decision pathways. Each simulation
begins by presenting the student with the
appropriate background information. A subsequent
screen presents the student with at least three
decision choices. The student must select the
choice that most closely reflects the decision
he/she would make in a real-life setting. After
selecting a choice, the student is then presented
with an empty "note pad". This free-form area
allows the student to document additional actions,
explain his/her thought process, and justify
decisions. After completing the note pad section,
the student is presented with the consequence of
his/her decision and is then offered an additional
three choices which, again, would reflect his/her
decision on how to handle the situation. This
process continues until the end of a decision
pathway is reached. At the end of each scenario,
the student automatically receives a paper
printout of the scenario, the selected decision
pathways and all documentation the student entered
in the note pad. This printout acts as a record
of the exercise.
True counseling situations may not always have
clearly identifiable choices or an obvious "right"
answer. In fact, a counselor may be faced with
several undesirable choices. To reflect this
facet of life, the simulation does not provide
definitive "right" answers. Most often, several
semi-desirable choices are offered to encourage
the student to think critically.
When the student chooses a decision pathway, the
responses are realistic. A subject matter expert
was employed to create realistic situations and
dialogue. Each scenario requires the application
of multiple ethical codes but no two scenarios
focus on the same set(s) of codes.
Decisions are final. A student cannot retract a
decision but must continue forward in the pathway
and suffer any appropriate consequences. The
student is faced with determining the relevant
codes and prioritizing those codes in order to
make a decision.
Due to the intentionally vague nature of the
decision options and the avoidance of providing a
definitive "right" answer, feedback is not
provided directly by the program. There is
research which shows that "short-but-frequent
instructor-initiated interactions can increase
achievement in CBT" (Stephenson, 1992, p.26).
Based on this research and the nature of the
subject matter, it was believed that the professor
could provide more effective and constructive
feedback by reviewing the printouts with the
students as a class exercise and/or with each
student individually. The program developers
recommended that the professor review the
printouts as a whole class to discuss the ethical
codes at play in each scenario and, additionally,
with each student individually to help the student
analyze his/her thought processes. The individual
sessions help the student by providing more
customized feedback regarding his/her cognitive
decision-making processes.
In summary, the program is not concerned with
"right" versus "wrong" answers. Its purpose is to help
students develop critical thinking skills, and to be
able to justify and support decisions.
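
To make the decision flow described above concrete, the
following sketch models the kind of branching structure each
scenario follows: a synopsis, a series of decision points
offering several choices, a note pad entry after each choice, a
displayed consequence, and a printable record at the end. It is
written in Python purely as an illustration; the actual program
is a HyperCard stack, and the names, prompts, and input handling
shown here are assumptions rather than the program's own code.

    # Illustrative sketch only (the real program is a HyperCard stack).
    # Each decision point offers several choices; each choice shows a
    # consequence and leads either to another decision point or to the
    # end of the pathway. Decisions are final, and note-pad entries are
    # accumulated into a printable record of the exercise.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Choice:
        text: str                                   # wording of the option
        consequence: str                            # what happens next
        next_node: Optional["DecisionNode"] = None  # None = end of pathway

    @dataclass
    class DecisionNode:
        prompt: str                                 # information presented here
        choices: list = field(default_factory=list)

    @dataclass
    class Scenario:
        title: str
        synopsis: str                               # background given at the start
        start: Optional[DecisionNode] = None

    def run_scenario(scenario):
        """Walk one pathway and return a printable record of the exercise."""
        record = [scenario.title, scenario.synopsis]
        node = scenario.start
        while node is not None:
            print(node.prompt)
            for i, choice in enumerate(node.choices, 1):
                print(f"  {i}. {choice.text}")
            pick = node.choices[int(input("Choice number: ")) - 1]
            notes = input("Note pad (justify and document your decision): ")
            record += [f"CHOSE: {pick.text}", f"NOTES: {notes}",
                       f"CONSEQUENCE: {pick.consequence}"]
            print(pick.consequence)
            node = pick.next_node      # decisions are final; no backtracking
        return "\n".join(record)       # printed as a record of the exercise
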
Several additional design techniques were employed
to achieve the greatest benefit possible from the CAI
(Milheim & Lavix, 1992; Jeiven, 1994).
When accessing the program, on-line help is
available to coach the student on how to use the
program.
Text and graphics are kept to a minimum to avoid
distracting the learner.
Only one complete set of decision choices or one
whole response to a choice is presented on a
screen at a time to avoid confusing the learner
with too much information.
Main text (decision choices and responses) is
displayed in the center of the screen.
Each piece of main text is placed into a box to
improve readability on the screen. In the case of
multiple decision choices, each choice is placed
in its own box so as to be easily distinguishable
from the other options. Within each box, text is
left justified to avoid interrupting eye movement
and slowing down the student's reading speed. In
addition, each decision choice is identified with
a bright blue "button" to help each option stand
out on the screen. The student uses a mouse to
"click" a button to select the corresponding
option.
The same font, in both upper and lower case, is used
for all main text to improve readability.
No flashing or blinking text appears to avoid
distracting the learner.
Larger fonts are used for text on buttons and for
headings to emphasize them.
Each screen and window is consistent in its color
usage, format, and presentation. Buttons are
located in the same position on every screen.
A "Counseling" button generates a pop-up window to
provide general ethical code categories, lawsuits,
and/or references to chapters in the class book.
A green button (a commonly used color for "go" or
"proceed") was used to set apart the "Next Screen"
button. The button was set at the bottom of the
display to facilitate the reading flow from top to
bottom.
A "Quit" button allows the student to exit the
simulation at any time.
Appendix I contains a complete graphical
representation of the first computer scenario, "Joe",
which focuses primarily on the ethical issues of
bartering. Appendix J contains a graphical
representation of the second scenario, "Terry", which
stresses the ethical issues of dual relationships. The
last and most difficult scenario, "Rose and Randy",
challenges the student to apply a number of ethical codes
surrounding marriage therapy and HIV.
The program was installed in the Department of
Education's computer lab and prototyped with two former
CPCE 5330 students and another CPCE faculty member.
After receiving positive feedback, the program was
piloted during the 1995 summer offering of CPCE 5330. A
formative evaluation form was given to each student who
completed the program. Again, the feedback was generally
very positive about the value of the program. Appendix L
contains a copy of the formative evaluation form. With
such positive feedback, a more formal and empirical
research study to determine the effectiveness of this
computer program seemed appropriate.
Based on the professor's personal experience with
the CPCE 5330 class, she hypothesized that a student's
ability to successfully master the more complicated case
studies might not only be improved by a computer program
but could also be related to the student's abstract
versus concrete thinking patterns. Abstract and concrete
thinking patterns can also be associated with right and
left brain hemisphere preference, respectively.
Therefore, in addition to group membership (i.e. computer
usage), brain hemisphere preference was also included in
this study.
The Problem
The purpose of this study is to determine if a
computer-based HyperCard program for counseling ethics
students affects their ability to apply ethical codes to
counseling scenarios. Because some students perform well
on the case studies and some do not, the CPCE 5330
professor and the researcher decided it would be
interesting to determine if abstract or concrete thinking
styles affected a person's performance. Therefore, in
addition to obtaining general demographic information,
the students were assessed for left- or right-brain
hemisphere preference.
Problem Statement
How do modes of instruction and brain hemisphere
preference among counseling ethics students affect the
outcomes associated with learning counseling ethics?
Constitutive Definitions
1. Computer-Based Training (CBT) or Computer-Assisted
Instruction (CAI) is defined as "the use of the
computer for direct instruction of students"
(Edwards et al., 1975, p. 147).
2. The effectiveness of the program on the students'
ability to apply ethical codes to counseling
scenarios will be determined by the variable
"achievement". Achievement "is usually measured
with tests that assess the degree of student
understanding that was accomplished" (Matta & Kern,
1989, p. 82). For this study, achievement is the
degree to which the student can accurately apply
counseling ethical codes to counseling scenarios.
3. An abstract thinker is the type of person who can
easily apply a construct to different situations.
This type of person is right-hemisphere dominant.
4. A concrete thinker is the type of person who cannot
easily apply a construct to different situations.
This type of person is left-hemisphere dominant.
Operational Definitions
1. The participants for the study will be UCD graduate-
level students from two sections of CPCE 5330 (Fall
semester, 1995). Please see the Learner
Characteristics in Appendix B for further
descriptions of the general characteristics of the
students.
2. The Computer-Based Training program used for this
study will be the Counseling Ethics HyperCard
program for the Macintosh computer. It will be
contained entirely on the mini-computer as a self-
study course. That is, it can only be completed by
one participating student at a time. Students will
be randomly assigned to either a control group (do
not use the program) or an experimental group (will
use the program). Group assignment is an
independent variable.
3. Tendency towards being an abstract or concrete
thinker will be determined based on the results of a
left-brain/right-brain hemisphere dominant test
called the Wagner Preference Inventory. Brain
hemisphere preference is also an independent
variable.
4. Achievement will be measured via a pen-to-paper case
study. The same case study will be given as a
pretest at the beginning of the semester and then
again as a posttest at the end of the semester. At
no time will the professor review or discuss the
case study during the course of the semester to
avoid giving away the answers. Scoring will be
conducted by the course professor using a standard
set of criteria and point assignments, and verified
by at least one other CPCE faculty member.
Achievement is the dependent variable.


Theoretical Significance
The SMCR model of communication by David Berlo
provides a standard way of viewing the communication
process (Griffin, p. 24). This core communication
theory contains four parts: (a) source of the message;
(b) the message itself; (c) the channel, or medium,
through which the message is sent and; (d) the receiver
of the message. In his book Theories of Human
Communication, Littlejohn lists five elements on which
core communication theories focus (p. 18). These are:
The development of messages and the cognitive,
social, and cultural processes involved with how
messages are created and expressed.
Message structure which consists of the elements
of texts in the form of writings, the spoken word,
and other nonverbal forms. This area includes how
messages are put together and how they are
organized.
The generation and interpretation of meaning which
includes the characteristics of senders and
receivers and, how these characteristics process
information to generate meaning.
Interactional dynamics which involves
relationships and interdependency among
communicators. It addresses the give and take,
the production and reception, between parties in a
communication transaction.
Institutional and societal dynamics. These are
the ways in which power and resources are
distributed in society, the ways in which culture
is produced, and the interaction among segments of
society. This element includes studies of
mediated communication.
Computers are a form of mediated communication and
provide a channel through which messages are carried.
The primary focus of this study is whether the computer,
as compared to the medium of traditional classroom
instruction, adds to or improves the communication
process of instruction. Secondly, this study also
considers the characteristics of receivers. Brain
hemisphere preference is measured to determine if it
plays a part in the successful processing of counseling
ethics information. Lastly, the study examines whether
there is an interaction between the computer as a medium
and brain hemisphere preference as a receiver
characteristic to determine if certain brain hemisphere
preferences respond more effectively to the computer
program than others.
Because communication relates extensively to the
dissemination of information via messages, and since the
use of computers as a medium for the dissemination of
information continues to increase, it is important to
know just how and when Computer-Assisted Instruction is
most effective. On the opposing side, research that
obtains insignificant results will help us to know when
CAI is not effective, to avoid misuse. Currently, it
appears that traditional learning theories are being used
to develop CAI courses. Research on when CAI is and is
not effective should be considered when using traditional
learning theories to develop CAI courses. Eventually,
when the body of CAI research is obtaining somewhat
consistent results, new formalized learning and
instructional development theories will evolve
specifically for the creation of CAI courses to further
improve the application of this medium as an educational
tool.
Determining when this medium is most effective and
learning to control it will help humankind to create an
effective communication channel which will, in turn, help
to ensure the accurate and useful dissemination of
information to others.
Applied Significance
A need for therapy and counseling is on the rise as
people today face an ever-increasing number of challenges
in their lives. A variety of perspectives and multiple
issues make ethical decision-making complex for the
counselor. Often, there is no clearly delineated "right"
answer for a decision. Ethical standards must be weighed
and prioritized, sometimes forcing a counselor to choose
the "lesser of all evils". Each choice has a consequence
which could potentially harm a client or be the catalyst
for a malpractice suit. Counselors must be well-trained
to analyze situations, prioritize ethical standards,
evaluate choices, determine consequences, and defend
their decisions.
According to Chan, Berven, and Lam, "several
different types of single-case management simulations
have been developed. These simulations provide a method
to facilitate the development of clinical problem-solving
and case-management skills and to assess the extent to
which individuals possess those skills" (Chan et al.,
1990, p. 216). "With all the types of simulations
developed, however, thus far, only a small number of
cases have been simulated" (Chan et al., 1990, p.216).
As a result of their work, these researchers suggest that
"one new area of research and development is the writing
of new case-management simulations representing a variety
of cases in terms of such features as disability type,
case complexity, and information provided" (Chan et al.,
1990, p. 224). They also recommend that as new
simulations are developed, increased sophistication
should be a goal. As Matta and Kern stated, "The
challenges of this research are to identify the right
subject to be presented on the right medium in the right
environment and form to be delivered to the right
students" (1989, p. 83). The HyperCard Counseling
Ethics Simulation was developed for counseling ethics
students to help meet these needs.


Hypotheses
There are three hypotheses involved in this study. The
first is to determine if group membership, or mode of
instruction, has a main effect. The second is to
determine if brain hemisphere preference has a main
effect and the third is to determine if there is an
interaction effect between brain hemisphere preference
and group membership.
Null Hypotheses
Brain Hemisphere Preference. Mean scores among the
left-brain dominant, balanced and right-brain dominant
students will be equal.
H0: μL = μB = μR
L = Left brain dominant
B = Balanced
R = Right brain dominant
Group Membership. Mean scores between the control
group and experimental group will be equal.
H0: μC = μE
C = Control Group
E = Experimental Group
Interaction. There is no interaction between brain
hemisphere preference and group membership.
Alternate Hypotheses
Brain Hemisphere Preference. There is at least one
pair of means that differs significantly.
Group Membership. Students in the experimental
group using the ethics simulation program will score
statistically higher on a semester posttest than the
students in the control group who did not use the
program.
H1: μexperimental > μcontrol
Interaction. There is an interaction between brain
hemisphere preference and group membership.
Alpha. α = .05
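
Stated in analysis terms, these hypotheses describe a 3 x 2
factorial design (brain hemisphere preference by group
membership) with achievement as the dependent variable. The
sketch below shows one way such a design could be tested; it is
illustrative only, written in Python with pandas and
statsmodels, and the column names and placeholder scores are
assumptions, not the study's data or its actual analysis
procedure.

    # Illustrative two-way ANOVA for the hypotheses above: main effect of
    # hemisphere preference (left / balanced / right), main effect of group
    # (control / experimental), and their interaction, tested at alpha = .05.
    # The data frame below is a placeholder, not the study's data.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    df = pd.DataFrame({
        "posttest":   [72, 65, 80, 78, 69, 74, 81, 70, 77, 66, 83, 75],
        "group":      ["control", "experimental"] * 6,
        "hemisphere": ["left", "balanced", "right"] * 4,
    })

    model = ols("posttest ~ C(group) * C(hemisphere)", data=df).fit()
    table = sm.stats.anova_lm(model, typ=2)  # Type II sums of squares
    print(table)  # a null hypothesis is rejected when its p-value < .05
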
The following chapters are a documentation of the
study. Chapter 2 presents a review of the literature on
CAI research. Chapter 3 presents the research design,
and chapter 4 presents the results and analysis of the
data. Chapter 5 presents the final conclusions as well
as suggestions for further research.


CHAPTER 2
REVIEW OF THE LITERATURE
Areas of Study
CAI has become a structure and medium for education
with ever-increasing frequency. It is defined by Alan
Salisbury, in his article "An Overview of CAI", as
"...man-machine interaction in which the teaching
function is accomplished by a computer system without
intervention by a human instructor. Both training
material and instructional logic are stored in computer
memory." (1971, p. 48) CAI is being used in conjunction
with traditional teaching methods, as well as a teaching
medium of its own. CAI has been used for teaching a
variety of age ranges from children to adults. This new
facet of education has brought new research challenges to
the field of Learning Theory because simply putting a
computer in the classroom or office does not guarantee
its use or success.
CAI is perceived as offering many advantages,
especially for the adult student, such as:
A cost effective alternative to resident classroom
teaching
Ability to work at one's own pace
Non-threatening situation as the student works
individually
Ability to repeat difficult topics
Ability to skip familiar topics
Disadvantages include:
Lack of interaction with an instructor for
questions and answers
Lack of interaction with peers for discussions
Lack of learning opportunities such as Cooperative
Learning
Most of the studies to-date have focused on
assessing the effectiveness of CAI in the classroom.
These studies have compared the performance of CAI to the
performance of traditional classroom instruction.
Effectiveness was most often measured by achievement,
retention, time, cost, and/or student attitudes. Student
attitudes have been reported in research to be
"significantly linked to learning improvement" (Gray,
1990, p. 111).
In general, the types of questions being addressed
by CAI research are:
How and why do available educational delivery
systems differ significantly in effectiveness?
How do environmental factors affect the
relative effectiveness of CAI?
How and why do students' characteristics affect
the performance of CAI?
How does the topic of instruction affect the
effectiveness of CAI?
The challenges have been "to identify the right
subject to be presented on the right medium in the right
environment and form to be delivered to the right
students" (Matta & Kern, 1989, p. 77).
Other prominent areas of study have included the
development and verification of scales for measurement.
George Marcoulides conducted a study to verify the
validity of the Computer Anxiety Scale (1989). B. H.
Loyd and C. P. Gressard have also done work in the area
of Computer Attitude Scales.
Advances in computer technology have created a
variety of CAI mediums. As the number of mediums
increases, research in Computer-Assisted Instruction will
be broken down into smaller areas of study. For example,
an emerging area is Distance Education facilitated via
Computer-Mediated Communication. In this area, students
and instructors are geographically separated from each
other but communicate with each other via a computer
network using phone lines and modems. This area could
even be extended to an international level, opening up
new opportunities for teaching by exposing students to a
wide variety of cultural and experiential backgrounds.
Review of the Literature
In a review of the CAI literature, a myriad of
results have been obtained. At first review, more
studies than not show CAI to be an effective substitute
for traditional teaching, though there are studies that
have obtained mixed results or no significant
differences. One mode of CAI (e.g. drill & practice,
tutorial, problem solving, simulation, a mixture of the
preceding) has not been determined to be significantly
more effective than another. While the time it takes for
a student to learn the material is less than through
other methods, the studies are showing that retention is
less (Edwards et al., 1975).
A meta-analysis done by James Kulik and Chen-Lin
Kulik included a total of 199 comparative studies; 32 in
elementary schools; 42 in high schools; 101 in
universities and colleges; and 24 in adult education
settings. This meta-analysis covered the use of the
computer in:
Computer-assisted instruction, including drill-
and-practice and tutorial instruction
Computer-managed instruction
Computer-enriched instruction, including the use
of the computer as a calculating device,
programming tool, and simulator.
Each of the 199 studies included in the meta-
analysis was a controlled, quantitative study. Most of
the studies reported that computer-based instruction had
positive effects on the students. Following are the
summarized results of those studies.
Students generally learned more in classes when
they received help from computers. The average
effect of computers in all 199 studies used in the
meta-analysis was to raise examination scores by
0.31 standard deviations, or from the 50th
percentile to the 61st percentile (a quick check of
this conversion appears after this list).
Students also learned their lessons with less
instructional time. The average reduction in
instructional time in 28 investigations of this
factor was 32%.
Students also liked their classes more when they
received computer help. The average effect of
computer-based instruction in 17 studies was to
raise attitude-toward-instruction scores by 0.28
standard deviations.
Students developed more positive attitudes toward
computers when they received help from them in
school. The average effect size in 17 studies on
attitude-toward-computers was 0.33.
Computers did not, however, have positive effects
in every area in which they were studied. The
average effect of computer-based instruction in 29
studies of attitude-toward-subject-matter was near
zero.
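
The percentile shift in the first result above (flagged for a
quick check) can be verified directly: if examination scores
are roughly normally distributed, an average student who gains
0.31 standard deviations moves from the 50th to roughly the
62nd percentile, in line with the reported 61st. A minimal
sketch of that arithmetic, assuming SciPy is available:

    # Quick check of the 0.31-standard-deviation result above, assuming
    # normally distributed examination scores.
    from scipy.stats import norm

    effect_size = 0.31
    percentile = norm.cdf(effect_size) * 100  # cumulative probability at +0.31 SD
    print(f"{percentile:.1f}th percentile")   # about 62, close to the reported 61st
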
This meta-analysis also found that studies where the
same instructor taught both the experimental and control
classes reported weaker effects than did studies with
different experimental and control teachers. They also
found that studies of long duration often reported weaker
effects than did the shorter studies. Lastly, computer-
based instruction was not found to be uniformly
successful at all instructional levels.
Kulik and Kulik conclude their meta-analysis by
saying that "most programs of computer-based instruction
have had positive effects on student learning. Future
programs of implementation and development of computer-
based instruction should therefore be encouraged" (1987,
p.229).
Rafael Colorado conducted a similar review of the
literature and summarized the results with the following:
CAI usually has had positive effects on students,
and in some instances has shown no difference, when
measured by several different criteria:
achievement, retention, instruction time, attitude
toward computers, attitude toward subject matter,
attitude toward instruction in general, and school
attendance rate. However, these effects are not the
same for all types of CAI, across all subject
matters or grade levels (1988, p. 227) .
While the results sound encouraging, an examination
by Gerald Bracey of a report written by Henry J. Becker
of Johns Hopkins University reveals some interesting
information about the current literature. In his report,
Becker "points out that most studies in the past are in
many ways not relevant to us now" (Bracey, 1988, p.28)
because many of the earlier studies used mainframes; the
large computers. Becker found that the "literature that
reports on computer-assisted instruction, which usually
does so favorably, continues to draw on these older
studies" (1988, p. 28). Most of the studies reviewed had
to do with math, reading and language arts in the upper
elementary grades. Becker's report goes on to state that
only two of the more than 200 studies contained in the
Kulik, Kulik and Bangert-Drowns, and Niemiec and Walberg
meta-analyses were published prior to 1983. Only
recently have studies turned to the microcomputer or P.C.
However, as the software packages themselves (i.e. word
processors, spreadsheets, graphics, etc.) change, any
research to verify previous studies would probably find
very different outcomes due to these changed
circumstances. As a result, little of the previous
research can be generalized to today's technological
environment. Becker concluded his report by stating that
all together, the research does not provide prescriptive
data for deciding whether and how to use computers as
adjuncts for instruction in these subjects.
CAI has been accepted as being beneficial but with
the newness of computer technology (i.e. hardware) and
its rapid advances, there is sometimes little time to
replicate studies before the technology has changed. For
example, some studies have compared computer-assisted
instruction with training on traditional video cassettes.
New technologies, such as interactive videodisk (I.V.D)
and multimedia software, offer a whole new opportunity
for research. Yet, did we obtain all our answers as to
how CAI compares with video cassettes as a medium? Have
achievement, retention, satisfaction, attitude, and other
variables been adequately measured? No. In today's
advancing world, it would not be surprising if the
results of a study were obsolete by the time it is
completed and published. The rapid advances of
technology are the largest factor contributing to the
fact that, "After nearly 25 years of use in instruction,
the impact of computer applications on these measures
remains largely an unknown quantity" (Roblyer, 1988, p.
38) .
More recent studies are beginning to examine
specific factors that influence the success of CAI's
implementation and use. The following discussion
presents a good example of the more recent research. T.
R. Young (1984) reported that as many as 25% of the
microcomputers purchased are never used. This
phenomenon is what caught the interest of Richard
Bagozzi, Fred Davis and Paul Warshaw to "develop and test
a theory better suited to the learning phase in the
adoption of computer technologies..." (1992, p. 641).
They have developed a new theory that they call Theory of
Trying, or TT. They have built their work upon the more
generic models of the Theory of Reasoned Action (or TRA,
which measured attitude toward use) and the Technology
Acceptance Model (or TAM, which measured ease of use).
TRA and TAM looked at attitudes toward actions and held
that those attitudes can predict intentions. However, models
that incorporate these kinds of attitudes also presume
that "the formation of intentions applies to behaviors
that are largely non problematic" (Bagozzi, et al, 1992,
p. 661). That is, given an opportunity to perform a
behavior, the person believes that there is a high
likelihood that he/she will perform the behavior. An
example might be choosing to use a spreadsheet package
with which the individual is already familiar.
However, the study's authors believe that some
actions related to the adoption of computer technologies
are problematic and that either external or internal
impediments could thwart the performance of the action.
Their focus was upon attitudes toward goals and upon the
process of learning to use a computer; not just the act
of using computers. Their model presumes that a goal is
the "performance of a behavior that a person believes
could be problematic for either personal reasons or
uncontrollable situational interventions" (Bagozzi et
al., 1992, p. 662). The authors hypothesize that people
form "distinct attitudes toward the consequences of
succeeding to achieve a goal, toward the consequences of
failing to achieve a goal and attitudes towards the
process of striving to achieve the goal" (Bagozzi et al.,
1992, p. 663). The authors differentiate between an
intention to try and trying by saying that "decision
makers typically first form intentions to try to achieve
a goal. Intentions to try then initiate into trying,
which represents the effort one puts forth in goal
pursuit" (Bagozzi et al., 1992, p. 664). Because new
technologies, such as P.C.s, are complex and an element of
uncertainty exists in the minds of individuals with
regard to the successful use of them, people form
attitudes and intentions towards trying to learn to use
the new or novel technology prior to actually initiating
efforts directed at using the technology.
The study administered a questionnaire to 107 full-
time MBA students at the University of Michigan learning
to use a word processing package during the course of a
14-week semester. The authors found that, "Decision
making and actions needed to learn the word processing
package are driven by attitudinal reactions toward the
gains foreseen by achieving the goal, the losses
anticipated should one fail, and the pleasurable and
noxious experiences one will accrue along the way"
(Bagozzi et al., 1992, p. 679). When approaching the use
of a new technology, the study determined that the
consequences of failure were the strongest determinants
of intentions to try to learn. The greater the sense of
failure, the weaker the intention to learn.
At the end of the semester, after the students had a
chance to learn the word processing package, their
attitudes changed. The consequences of success were the
strongest factors towards actually trying. Attitudes
towards failure became much less important. Attitudes
towards the process increased their effect on trying.
In summary, the study found that "the psychological
processes associated with goal formation and the pursuit
of goals are important considerations in the adoption of
computer technologies. The processes reflected in
attitudes toward success, failure, and the means of goal
pursuit, intentions to try and trying activities are
early responses to problem solving and precede adoption
and long-run usage." The authors suggest that computer
interfaces that minimize errors and failures (such as an
"undo" command) could positively influence intentions to
learn.
In addition, the authors feel their results suggest that
changing people's attitudes toward the process of
learning, irrespective of success or failure, may be an
effective way to improve the learning process. Most
people tend to learn a computer technology by "doing".
They use the technology to generate a desired result,
referring to the reference manual only when needed. The
system is used to get the task done rather than the
individual really learning to use the system. As a
result, most people seem to obtain only a mediocre level
of proficiency at using the technology.
The authors suggest additional research in the
following areas:
Initiation, monitoring, and control of
instrumental actions underlying both the learning
and use of computer technologies
How attitudes towards success and failure, and
trying are formed and changed
Identifying persuasion strategies aimed at
minimizing the beliefs associated with negative
consequences and maximizing the beliefs of
positive consequences
The impact of past experience, education and
social processes on beliefs and attitudes
Self-fulfilling prophecies and their impact on
attitudes towards success, failure and trying.
This review also examined the use of CAI
specifically in the health care fields. However,
research on the use of CAI in this area is quite limited.
In her article, Hmelo notes that "much of the problem
solving required of health professionals involves pattern
recognition, a skill that can only be taught through
practice with appropriate feedback; this seems an ideal
application for computer-assisted instruction, yet CAI is
rarely being used in this context" (1989, p. 95). Hmelo
goes on to say, "Despite the widespread use of computer-
assisted instruction in health professions education,
very little research has been done to validate its use"
(1989, p. 94). The literature available on computer-
based simulations for the counseling profession related
primarily to case-management (Berven, 1985; Berven &
Scofield, 1980a, 1980b; Butcher & Scofield, 1984;
Butcher, Scofield, & Baker, 1984; Chan et al., 1990).
The case-management simulations were classified as either
single-case simulations, which "concentrate on clinical
problem-solving skills required to serve a single client
appropriately (e.g., gathering medical, psychosocial,
vocational, and educational information to determine
eligibility)", or as multiple-case simulations, which
"focus on skills required to service entire caseloads;
that is, managing caseloads of varying sizes and types,
with varying constraints in time, resources, funds, and
staff, while following agency policy and procedures"
(Chan et al., 1990, p. 213). The only other references
to a health profession CAI were for a simulation program
to teach counselors-in-training to differentiate client
emotional states as well as select appropriate,
facilitative verbal responses to client statements
(Alpert, 1986; Sharf & Lucas, 1993) and for simulations
to "specifically assess the cognitive abilities required
in predicting client behavior" (Janikowski, Berven,
Meixelsperger, & Roedl, 1989, p. 128). While these
programs may indirectly address ethical issues, there
were no references found which directly related to the
application of ethical codes and standards.
In a study by Folger, counseling students in
Contemporary Mental Health classes were taught the course
material by either traditional instruction (TI) methods
or by Computer-Assisted Instruction (CAI). The results
showed that "achievement following CAI was significantly
greater than achievement following TI. These data
support the position that CAI is an effective tool for
presenting didactic material to counseling students"
(Folger, 1990, p. 367). The program development team
believed that a CAI course on counseling ethics could
provide counseling ethics students effective,
individualized practice in preparation for class and
state exams.
In summary, the results of the research to-date on
Computer-Based Training (CBT) and Computer-Assisted
Instruction (CAI) can be synthesized into the following
general conclusions:
CAI used as a teaching supplement can increase
achievement scores (Wang & Sleeman, 1993)
Normal instruction supplemented by CAI is more
effective than normal instruction alone (Edwards
et al., 1975; Folger, 1990; Wang & Sleeman,
1993)
CAI is more effective at the college and adult
levels than at elementary and secondary levels
(Roblyer, 1990; Wang & Sleeman, 1993)
Learning time is reduced (Edwards et al., 1975)
Attitudes towards learning are improved (Hmelo, 1989;
Roblyer, 1990; Wang & Sleeman, 1993)
The Scholars' Suggestions for Future Research
The experts widely agree that CAI has some valuable
benefits to offer. In his review of the recent research,
Roblyer concluded his article by saying that, "Findings
indicate that computer applications have an important
role to play in the future of education, but the exact
nature of that role has only begun to be explored.
Opportunities for using technology to make an impact on
education has never been greater, and neither have
opportunities for research. The next decade must be a
time for taking full advantage of both" (1990, p. 55) .
Additional research could provide insights into two
primary areas: effective designs for CAI courses; and
factors/processes that affect the attitude and motivation
of students toward CAI courses and learning new
technologies. Effective CAI courses that provide training
on the new technologies themselves could, in themselves,
have a large impact on attitudes towards those new
technologies. Some of the suggested variables for study
include:
Interrelationships among computer anxiety and
attitudes towards success, failure, and the
process of goal pursuit
Student aptitude
Computing experience and background
Feedback from CAI course
Sequencing of instructions and options in the
CAI course
Environment
Subject
Learning styles of students
Dropout rates
Impact of students' self-concept on attitudes
towards CAI learning
Impact of CAI courses on students' self-concept
Impact of hardware and software designs


In their article, "A Framework for Research in
Computer-Aided Instruction: Challenges and
Opportunities", Matta and Kern (1989) suggest that there
are four major factors that contribute to the success of
a CBT or CAI. These factors are: 1) students, 2)
medium, 3) environment, and 4) subject. Regarding the
factor of subject, Matta and Kern stated that, "where
(i.e. to which topic) CAI can be applied most effectively
remains to be answered" (1989, p. 81). This study
focuses on the factor of subject in an effort to test the
effectiveness of a counseling ethics simulation program,
customized for counseling ethics students, to provide
greater complexity and sophistication than previous
simulations.
Bracey contends that the generally-held benefits of
CAI today are primarily perceptions, not direct measures
of them. While the general consensus today is that
CAI can be beneficial, it appears that the specific
factors that determine when CAI is most effective are
still inconclusive and require additional study. Bracey
goes on to say, "To get those direct measures we must
turn to more rigorous experimental studies which compare
the achievement of students using computers to those not
using computers" (1988, p. 28). This type of research is
the exact nature of this study. The hope is to provide
the counseling profession an effective CAI tool for a
topic, counseling ethics, where little research has
occurred.


CHAPTER 3
RESEARCH DESIGN
Pilot Study
The Counseling Ethics Simulation program was
prototyped during its original development with the
assistance of two former CPCE students and another CPCE
faculty member. A formative evaluation was completed for
the program to provide valuable feedback about the
program and its usefulness. The evaluation form
contained a combination of Likert scales (1=easy to
understand; 5=confusing) and open-ended questions.
Appendix L contains a copy of the formative evaluation
form. In addition, the program was piloted, by one of
the original developers, with ten students during the
1995 summer class of CPCE 5330. A combination of
evaluations, observations, and interviews was used to
assess the computer program.
Specific results from the evaluation can be found in
Table 3.1. Overall, the feedback about the program was
very positive. The program took anywhere from 45 minutes
to 2 hours to complete, and nine out of the ten
participants felt that the program met its objective.
Table 3.1
Summary of Formative Evaluations from Pilot Study
(Rating scale: 1 = easy to understand; 5 = confusing)

Question                                                   % Who Responded    Distribution of Ratings
I found getting into the program...                              100%         60%, 20%, 10%, 10%
I found using the program...                                     100%         30%, 60%, 10%
I found the buttons on the screens...                            100%         90%, 10%
Presentation of "Joe" scenario                                   100%         70%, 20%, 10%
Presentation of "Terry" scenario                                  80%         50%, 50%
Presentation of "Rose & Randy" scenario                           80%         50%, 25%, 12.5%, 12.5%
I found the information behind the "CONSULT" button...            60%         33%, 33%, 17%, 17%
I found the "note pad" area to be...                              80%         75%, 25%


Those items that the students liked best were:
1. Ease of use
2. Reality of scenarios
3. Made student think about consequences of actions
4. Note pad to record thoughts
Those items that the students thought could be
improved were:
1. The scenarios could be made longer
2. More on-line feedback about the "correctness" of
their answers
3. Response time was slow
Note that for the pilot study there was no review of, or feedback on, the scenario printouts by the course professor. The pilot concentrated strictly on the
use of the program itself.
Based on the results of the pilot, enhancements were
made to the program to clarify the use of the help
screens and the "CONSULT" button. The Help Manual was
re-worked in greater detail to provide the student more
assistance in accessing and using the program. The


program was modified slightly to record the text from the
"note pad" into a text file, rather than constantly and
completely re-building the text in memory, to increase
the speed of the application. In addition, the lab would be upgrading the memory of its computers, which would further improve the program's response time.
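For illustration only, the following sketch contrasts the two approaches (the actual program was written in HyperCard's HyperTalk; the Python code, file name, and function names below are hypothetical):

# Illustrative sketch only: the actual program was written in HyperTalk.
# File name and function names are hypothetical.
NOTES_FILE = "notepad.txt"

def save_note_slow(all_notes, new_note):
    """Original approach: rebuild the entire note text in memory on every save."""
    all_notes.append(new_note)
    full_text = "\n".join(all_notes)        # work grows with the size of the notes
    with open(NOTES_FILE, "w") as f:
        f.write(full_text)

def save_note_fast(new_note):
    """Revised approach: append only the new entry to the text file."""
    with open(NOTES_FILE, "a") as f:        # roughly constant work per save
        f.write(new_note + "\n")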
Population and Sampling
For this study, the use/non-use of the computer simulation program by the student (group membership) and brain hemisphere preference were the independent variables. Group membership consisted of a control group, which did not use the computer program, and an experimental group, which did use the program. Achievement was the dependent variable.
The graduate-level students from two sections of
CPCE 5330 (Professional Seminar in Counseling), offered
through UCD's Counseling Psychology and Counselor
Education (CPCE) program during the 1995 Fall semester,
comprised the sample group. After presenting the
research proposal to the two classes, 15 students from
the first section and 15 students from the second section


volunteered to participate in the study. One student declined to participate; it may be interesting to note that this person was the oldest student in either class (age 57) and rated himself a "3" on the computer anxiety scale (a self-assessment question where 1 = low anxiety and 5 = high anxiety), so computer anxiety may have played a role in his decision. Each volunteer was given a consent form to read and sign. A copy of the consent form can be found in Appendix M.
During the course of the semester, two individuals
from section 1 and one individual from section 2 dropped
the course. This left the experimental group with one
more person than the control group. For statistical analysis purposes, it was desirable to keep the control and experimental groups the same size, so an individual was randomly selected from the experimental group and that person's results were withdrawn from the study's results.
Therefore, the actual number of participants was 12 from
the first section and 14 from the second section for a
total of 26 participants. Students were notified of
their group assignment via a "Notification of


Participation" letter delivered to them in a sealed
envelope. In this way, all students received a letter so
the professor for the course could not know which
students had been assigned to the control group and which
had been assigned to the experimental group. Appendix N
contains sample notification letters.
The professor for the course provided the researcher
with the name of each student. The names were placed
into a "hat" and randomly drawn by an individual not
participating in the study in any way. The first name
drawn was placed into the control group and the second
name drawn was placed into the experimental group. The
selection and assignment process continued in this
alternating manner until all participants were placed
into one of the two groups.
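For illustration only, the alternating assignment procedure can be sketched as follows (the roster names are hypothetical, and Python is used purely to illustrate the logic; the actual drawing was done by hand):

# Illustrative sketch only; the actual drawing used paper slips and was done by hand.
import random

roster = [f"Student {i}" for i in range(1, 27)]   # hypothetical names, 26 participants

random.shuffle(roster)           # the random draw from the "hat"
control = roster[0::2]           # 1st, 3rd, 5th, ... names drawn
experimental = roster[1::2]      # 2nd, 4th, 6th, ... names drawn

print(len(control), len(experimental))   # 13 and 13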
It is important to note that the original intent of
the program was to have students review their printout
results with the professor for feedback. However, for
purposes of this empirical study, the printouts were not
reviewed by the professor. In fact, throughout the
course of the study, the professor was never notified of
which students were participating in which groups. In


this manner, we could be assured that her teaching style
would not inadvertently favor or influence some students
over others. Instead, the participants were provided with the name and phone number of a CPCE graduate student who had already completed CPCE 5330. This graduate assistant would act as a "consultant" whom the experimental group students could contact to discuss or review the computer scenarios for feedback. Interestingly, the consultant was never called. Because this option was never exercised by any of the experimental group students, we can be further assured that outside human influences were minimized, which helps attribute any statistical significance found in the study to the computer program itself.
Conducting the Experiment
Both sections of the course were taught by the same professor to ensure that all students received the same course information via a consistent teaching approach and to avoid experimenter bias. The professor is the same individual who originally requested the creation of the CAI program.


A demographic questionnaire given to each
participant requested information such as gender, age,
number of classes completed in the CPCE curriculum,
number of months of professional or volunteer experience in the health care field, and a self-rating on a computer anxiety scale. Figure 3.1 shows the demographic questionnaire.
Each student was also given a right-/left-brain
hemisphere preference measure called the Wagner
Preference Inventory (WAPI II). This measure was
selected due to its previous use in health field studies
and, primarily, due to its concise nature. In addition,
the researcher had difficulty locating a test which
measured strictly brain hemisphere preference; other, longer tests included brain hemisphere preference as only one of multiple variables. Available time was also limited because the questionnaires were completed during regular class periods. A copy of the WAPI II can be found in Figure 3.2. This measure was developed and presented
in an article, "A Refined Neurobehavioral Inventory of
Hemisphere Preference", by Rudolph F. Wagner and Kelly A.
Wells (1985). The test consists of 12 multiple choice
questions. Each question provides 4 possible answers.


The student answers each question by selecting the answer
that most closely reflects the activity the student would
prefer to do, regardless of whether he/she has the ability to do it.
The results of the WAPI II are analyzed using the
WAPI II Quadrant Analysis diagram. Figure 3.3 contains a
diagram of this tool.
The diagram consists of two halves: left and right.
In addition, the left half is broken into two further
subdivisions: (a) left-logical; (b) left-verbal. The
right half is broken into the following subdivisions:
(c) right-manipulative/spatial; (d) right-creative.
This format allows one to analyze the results of the inventory either by simple left or right brain preference or by the four more specific cells. Due to the limited number of participants in this study, it was determined that there would not be enough people in each of the four cells to conduct a proper statistical analysis. Therefore, analysis was limited to the left versus right halves.


Figure 3.1
Demographic Questionnaire
DEMOGRAPHIC QUESTIONNAIRE
Please answer the following questions.
NAME:
AGE:
GENDER (circle one): M F
NUMBER OF CLASSES COMPLETED TO-DATE IN CPCE PROGRAM: _____ classes
NUMBER OF MONTHS EXPERIENCE (VOLUNTEER OR PROFESSIONAL) IN MENTAL HEALTH WORK: _____ months
COMPUTER ANXIETY:   1 = LOW   3 = MODERATE   5 = HIGH        1   2   3   4   5


Figure 3.2
Wagner Preference Inventory
The Wagner Preference Inventory Form
NAME:
Read the following statements carefully. There are
12 statements with four items each. Place a cross mark
(X) in the appropriate bracket in front of each item you
select. Mark
one item only under each of the 12 sets of statements.
Choose the activity you prefer even though it does not
necessarily mean that you have the ability to do it. If
you are undecided, make a decision anyway by guessing.
1. ( ) a. Major in logic
( ) b. Write a letter
( ) c. Fix things at home
( ) d. Major in art
2. ( ) a. Be a movie critic
( ) b. Learn new words
( ) c. Improve your skills in a game
( ) d. Create a new toy
3. ( ) a. Improve your strategy in a game
( ) b. Remember people's names
( ) c. Engage in sports
( ) d. Play an instrument by ear
4. ( ) a. Review a book
( ) b. Write for a magazine
( ) c. Build new shelves at home
( ) d. Draw a landscape or seascape


Figure 3.2 (Cont.)
Wagner Preference Inventory
5. ( ) a. Analyze market trends
( ) b. Write a movie script
( ) c. Do carpentry work
( ) d. Imagine a new play
6. ( ) a. Analyze management practices
( ) b. Locate words in a dictionary
( ) c. Put jigsaw puzzles together
( ) d. Paint in oil
7. ( ) a. Be in charge of computer programming
( ) b. Study word origins and meaning
( ) c. Putter in the yard
( ) d. Invent a new gadget
8. ( ) a. Analyze production cost
( ) b. Describe a new product in words
( ) c. Sell a new product on the market
( ) d. Draw a picture of a new product
9. ( ) a. Explain the logic of a theory
( ) b. Be a copy writer for ads
( ) c. Work with wood and clay
( ) d. Invent a story
10. ( ) a. Be a comparison shopper
( ) b. Read about famous men and women
( ) c. Run a traffic control tower
( ) d. Mold with clay and putty
11. ( ) a. Analyze your budget
( ) b. Study literature
( ) c. Visualize and re-arrange furniture
( ) d. Be an artist
12. ( ) a. Plan a trip and make a budget
( ) b. Write a novel
( ) c. Build a house or shack
( ) d. Make crafts your hobby


Figure 3.3
Wagner Preference Inventory Quadrant Analysis
[Quadrant Analysis diagram: four cells labeled a, b, c, and d, with the left-half total entered in an "L" box and the right-half total in an "R" box]
The test is scored for each individual. To score the test, the number of "(a)" answers selected by a particular student is totaled and entered into the "a" cell. The number of selected "(b)" answers is then totaled and entered into the "b" cell. This continues for the "(c)" and "(d)" answers as well. Once a number has been entered into all four cells, the halves are totaled into the "L"eft and "R"ight boxes at the bottom of the Quadrant Analysis. "A difference of at least 3 points between L and R are needed (expressed in a ratio) to show a significant difference between L and R; otherwise, the ratio is considered to be balanced (B). Examples are: L=11/1; R=4/8; and B=5/7, or 7/5" (Wagner & Wells, 1985, p. 673).
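For illustration only, the scoring rule described above can be expressed as a short sketch (the inventory itself was scored by hand; the Python code and function name below are purely hypothetical):

# Illustrative sketch of the quoted scoring rule; the inventory was scored by hand.
from collections import Counter

def score_wapi(answers):
    """answers: the 12 selected letters, e.g. ['a', 'c', 'b', ...]."""
    tally = Counter(answers)
    left = tally["a"] + tally["b"]      # left-logical + left-verbal
    right = tally["c"] + tally["d"]     # right-manipulative/spatial + right-creative
    if abs(left - right) >= 3:
        preference = "L" if left > right else "R"
    else:
        preference = "B"                # balanced
    return preference, f"{left}/{right}"

print(score_wapi(["a"] * 6 + ["b"] * 5 + ["c"]))   # ('L', '11/1')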
To avoid situations in which a student would be
reluctant to use the program due to computer anxiety, a
hands-on orientation was conducted for all students in
the experimental group so that all users would be
familiar with how to access and use the program. Each
participant also received a copy of the Help Manual
originally created by the development team specifically
for the program. Appendix O contains a copy of the
orientation packet and Appendix P contains a copy of the
Help Manual.
The participants were instructed NOT to approach the
course professor with any questions about the HyperCard
program nor to bring program printouts to class. The
participants were also asked NOT to talk to or share
information about the program with the other students or
among themselves to reduce any additional influences.
While such influences are difficult to actually control,


this study had the benefit of graduate-level participants. Many of the participants mentioned that they had previously taken a research methods class and understood the importance of these guidelines.
The phone numbers of the primary researcher were
provided to the students in the event they needed
technical assistance with the program in any way. The
computer program was installed on seven Macintosh IIsi
computers in the Department of Education's Instructional
Technology Lab. Weekly technical checks of the programs
were conducted by the researcher to ensure the accurate
and uninterrupted functionality of the software. In
addition, anti-viral software was installed on each
machine to prevent unexpected software behavior.
The computer program had been modified to require
the input of a password to access the program. This was to prevent any of the control group participants from accessing the program. The experimental group
participants were asked NOT to give the password to
anyone.
For the pretest/posttest results for an experimental
group participant to be included in the statistical


analysis, the participant had to complete at least one of
the three scenarios. Therefore, the computer program was
modified to automatically print two copies of the
scenario printouts. An envelope was left in the computer
lab into which the participants were instructed to place
one copy of the printed output. The second copy was for
their own use or for review with the graduate assistant
"consultant". This check allowed the researcher to be
absolutely sure that all experimental group users
completed at least one of the scenarios for their
posttest scores to be included in the statistical
analysis. All 12 users completed at least one scenario.
Out of the 12 computer users, 11 completed all three.
The 12th person completed the first scenario only.
Pretest
One pen-to-paper, essay-type case study was used as
the pretest for the course. The students were asked to respond to the scenario by describing the ethical issues, the actions they would or would not take, and the ethical principles they would apply. The case study had been used on many occasions in previous offerings of the class


so the professor was well acquainted with the study and
confident of her scoring criteria. The pretest was
administered to all students in the first or second class
of the semester (dependent upon the student's class
attendance). The purpose of the pretest was to provide
an initial assessment of the student's beginning
knowledge level of counseling ethics. It was to provide
a baseline (covariate in an ANCOVA) against which the
semester-end posttest could be evaluated. However,
during the course of the statistical analysis, the correlation coefficient (rxy) between the pretest and posttest results was calculated for the control group and for the experimental group. Using this information, the slope of the regression line for each group was calculated. Table 3.2 shows
these results.
The difference in the slopes of the regression lines for the two groups (.344 and -.12) indicates that the ANCOVA assumption of Homogeneity of Regression Coefficients has been violated.


Table 3.2
Slopes of the Lines for the Control and Experimental
Groups
                     CONTROL GROUP                        EXPERIMENTAL GROUP
                PRETEST          POSTTEST            PRETEST          POSTTEST
Mean            31.46            82.15               29.69            87.38
Std. dev.       Sx = 10.18       Sy = 12.29          Sx = 10.64       Sy = 7.97
rxy                    .285                                 -.160
b = rxy(Sy/Sx)  (.285)(12.29/10.18) = .344           (-.16)(7.97/10.64) = -.12
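These slopes follow the standard formula b = rxy(Sy/Sx); the brief sketch below (illustrative only, not part of the original analysis) reproduces the values reported in Table 3.2:

# Illustrative check of the slopes reported in Table 3.2.
def slope(r_xy, s_y, s_x):
    return r_xy * (s_y / s_x)

b_control = slope(0.285, 12.29, 10.18)        # approximately 0.344
b_experimental = slope(-0.160, 7.97, 10.64)   # approximately -0.12
print(round(b_control, 3), round(b_experimental, 3))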
The covariate (pretest scores) was plotted against the dependent variable (posttest scores); Figure 3.4 presents the graph. As can be seen, the ANCOVA assumption of Linearity has been violated as well.
Due to the violations of the assumptions of
Homogeneity of Regression Coefficients and Linearity, it
was determined that an ANCOVA analysis could not be used.
Therefore, the statistical analysis was done using a
factorial ANOVA with GROUP and BRAIN as the factors.


Figure 3.4
Covariate, Pretest Scores, Diagrammed Against the
Dependent Variable, Posttest Scores.
[Line graph: pretest scores (covariate) plotted against posttest scores (dependent variable) for the control and experimental groups]
Posttest
The posttest was the same case study as the pretest.
It was administered two weeks prior to the end of the
semester. The posttest was intentionally given well in
advance of the final exam to ensure that there would not
be a "study influence" on the final results. This was
also done to allow the control group participants an
opportunity to use the program prior to the final exam to


ensure that all students had had equal exposure to the
program for the final test. It is interesting to note
that all the students wanted to be a part of the
experimental group!
The posttest was worth 30 points. The professor utilized a set of 12 criteria that the student must have mentioned in his/her case study analysis. Each criterion was assigned a point value, for a total of 30 possible points on the test. Table 3.3 contains a list of the criteria and their assigned point values. Posttest scores
were not counted as a part of the students' grade for the
course. However, each student who agreed to participate in the study, regardless of group membership, was given a two-point bonus. The research scores were intentionally kept completely separate and were not included in any way as a part of the course grade. In this manner, a student would feel no pressure to score a certain grade on the posttest. In addition, it ensured that the researcher would not be exposed to, or need access to, the students' grades, protecting the students' right to privacy and confidentiality.


Table 3.3
Posttest Criteria for Scoring
Ethical Issue That Must Be Mentioned          Points
Confidentiality                                  2
Who is client                                    2
Competency                                       2
Autonomy                                         2
Countertransference                              2
Suicide assessment                               2
Consult                                          2
Document                                         3
Informed consent                                 2
Abandonment                                      2
Objectivity                                      2
Referral                                         2
Total Points                                    25
The students were not provided any answers to the
pretest/posttest case study at any time during the
semester. Also, the pretest scenario was never
directly addressed by class activities or other class
case studies. This ensured that the students would not


know outright the answers to the posttest but had to
derive them from the semester's instruction. Using the
same case study for the pretest and posttest allowed the
design to control for any differences in the results that
might have been experienced due to using different case
studies.


CHAPTER 4
STATISTICAL ANALYSIS OF THE RESULTS
This chapter presents the statistical analysis of
the research data. The appropriate summary statistics
for each variable are analyzed and a graphical
representation provided. Together, this information
provides an informative picture of the sample group.
Lastly, an analysis of the posttest results, which serve
as the final outcome of the study, is presented along
with a discussion of their significance.
The results for each individual participant in the
study have been compiled into Table 4.1. Subjects for
the study were randomly placed into either the control or
experimental group. Each group contained 1 male and 12
females for a total of 13 people in each group, and a
grand total of 26 participants. Because there were so
few male subjects, other than noting numbers, no further
analysis on gender was conducted.


Table 4.1
Results for Each Participant
SUBJECT  GROUP  GENDER  AGE  # CPCE CLASSES  # MONTHS EXPERIENCE  COMPUTER ANXIETY  BRAIN HEMISPHERE DOMINANCE  PRETEST SCORE (%)  POSTTEST SCORE (%)
1 CONTROL MALE 27 4 40 LOW BALANCED 53 92
2 CONTROL FEMALE 33 4 0 LOW RIGHT 43 84
3 CONTROL FEMALE 26 1 48 LOW BALANCED 43 84
4 CONTROL FEMALE 26 3 40 LOW BALANCED 26 84
5 CONTROL FEMALE 24 5 3 LOW BALANCED 33 92
6 CONTROL FEMALE 33 6 0 MODERATE LEFT 23 68
7 CONTROL FEMALE 40 5 0 LOW BALANCED 30 76
8 CONTROL FEMALE 32 4 60 LOW LEFT 33 76
9 CONTROL FEMALE 34 5 0 LOW-MOD RIGHT 16 84
10 CONTROL FEMALE 31 5 24 LOW RIGHT 20 92
11 CONTROL FEMALE 23 2 18 LOW-MOD LEFT 33 100
12 CONTROL FEMALE 29 8 36 LOW BALANCED 26 52
13 CONTROL FEMALE 56 9 14 LOW-MOD RIGHT 30 84
14 EXPER. FEMALE 23 4 10 LOW-MOD LEFT 30 92
15 EXPER. FEMALE 24 5 9 LOW-MOD LEFT 16 92
16 EXPER. FEMALE 27 4 48 LOW-MOD LEFT 30 92
17 EXPER. FEMALE 23 2 24 MODERATE LEFT 40 76
18 EXPER. FEMALE 36 4 0 LOW LEFT 16 80
19 EXPER. FEMALE 45 16 24 LOW LEFT 33 76
20 EXPER. FEMALE 30 9 0 LOW RIGHT 33 100
21 EXPER. FEMALE 32 7 8 LOW RIGHT 20 84
22 EXPER. FEMALE 26 6 0 LOW-MOD RIGHT 56 84
23 EXPER. FEMALE 36 5 12 LOW RIGHT 26 100
24 EXPER. MALE 36 9 72 LOW RIGHT 33 84
25 EXPER. FEMALE 23 2 4 LOW-MOD BALANCED 23 84
26 EXPER. FEMALE 31 7 12 LOW-MOD BALANCED 30 92


Age
The control group had the highest concentration of ages in two categories: 26 and 33. The experimental group had two modes as well: 23 and 36. Note that the experimental group's modes fall below and above, respectively, their control group counterparts. These modes demonstrate the wide age range in the CPCE curriculum at the University of Colorado at Denver.
The median and mean age for the control group are
almost the same: 31. The median and mean age for the
experimental group are very close as well: 30. In
addition, the means and medians for the control and experimental groups are very close to each other, which indicates that, in terms of age, the two groups were comparable.
Figure 4.1 presents a line graph of age for both
groups graphed on the same chart. Appendix R provides
tables of summary statistics and bar charts for each
group (control and experimental) separately.


Figure 4.1
Graph of Age Distributions for Both Groups
[Line graph; x-axis: Age; y-axis: Number of Participants]


Number of Classes Completed To-date in CPCE Program
"Number of classes completed" represents the number
of classes that the student has completed to-date, prior
to the CPCE 5330 course. This variable provides
information on the relative position of the students in
relation to the CPCE curriculum. All students are
graduate level students. The mode and median for the
control group are identical at 5.0 and the mean is close
at 4.7. This information, together with the bar graph, suggests an almost normal distribution. For the experimental group, the mode is 4.0, the median is 5.0, and the mean is 6.1. The range of completed classes ran from one to nine for all but one participant. The mean for the experimental group is skewed higher than that of the control group because of this one individual, who reported having completed 16 classes to date (a number confirmed with the reporting individual).
Figure 4.2 presents a line graph of completed
classes for both groups graphed on the same chart.
Appendix S provides tables of summary statistics and bar
charts for each group (control and experimental)
separately.


Figure 4.2
Graph of Number of Completed Classes for Both Groups
[Line graph; x-axis: Number of Classes; y-axis: Number of Participants]


Number of Months Experience
Number of months experience is the number of months
a student has accumulated in the health care field as
either a paid professional and/or as a volunteer. The
control group had a range from zero to 60 months and the
experimental group from zero to 72 months. These
represent large ranges of experience. The mode for both
groups was zero months of experience, meaning that just over one-fourth of the participants (7 people) did not have any experience. The median for the control group was 18 while the median for the experimental group was 10. The mean was 21.7 for the control group and 17.1 for the experimental group. Altogether, this information indicates that the control group had more overall composite experience than did the experimental group.
Figure 4.3 presents a line graph of the months
experience for both groups graphed on the same chart.
Appendix T provides tables of summary statistics and bar
charts for each group (control and experimental)
separately.


Figure 4.3
Graph of the Number of Months Experience for Both Groups
[Line graph; x-axis: Number of Months Experience; y-axis: Number of Participants]


Computer Anxiety
Computer anxiety was a self-assessment question for
which the student rated his/her own anxiety on a Likert
scale where 1 was "low anxiety" and 5 was "high anxiety."
It is interesting to note that all scores were either
"1", "2", or "3" with the most commonly used score a "1".
No one rated him/herself a "4" or "5". It is possible that some students rated themselves lower on anxiety than was actually the case; in this age of computers, it is difficult to admit to being highly anxious about using them. The researcher was more interested in the overall level of computer comfort. Because of the few "3" scores, the researcher conducted a program orientation to prevent any experimental group users from avoiding the program due to being ill at ease.
Figure 4.4 presents a line graph of computer anxiety
for both groups graphed on the same chart. Appendix U
provides tables of summary statistics and bar charts for
each group (control and experimental) separately.


Figure 4.4
Graph of Computer Anxiety for Both Groups
[Line graph; x-axis: Computer Anxiety Score; y-axis: Number of Participants]


Brain Hemisphere Preference
Brain hemisphere preference was based on the Wagner
Preference Inventory.
The mode for the control group indicates that the group is predominantly "balanced" (followed by a preference for the right hemisphere). On the other hand, the mode for the experimental group shows a predominance for the left hemisphere (followed by the right hemisphere). The combined graph in Figure 4.5 shows how the two groups are almost completely opposite for the categories of "left hemisphere" and "balanced." It was expected, and desired, that the random assignment of the participants to groups would have distributed the brain hemisphere preferences more evenly between the two groups, especially for the left and balanced categories.
Figure 4.5 presents a line graph of brain hemisphere preference for both groups graphed on the same chart. Appendix V
provides tables of summary statistics and bar charts for
each group(control and experimental) separately.


Figure 4.5
Graph of Brain Hemisphere Preference for Both Groups
[Line graph; x-axis: Brain Hemisphere Preference; y-axis: Number of Participants]


Pretest Scores
As mentioned in Chapter 3, a pretest was given to
all participants at the beginning of the semester. The
pretest scores for both groups, calculated as a
percentage score, were quite low. This would be expected
on the pretest since the students had not been exposed to
any of the course material.
The pretest scores were to serve as a covariate in
an ANCOVA analysis to help account for any influence
prior knowledge and/or experience might have on the
dependent variable, posttest. This would ultimately
reduce the unexplained error variation. However, during
the ANCOVA analysis, it was determined that the ANCOVA
assumption of Homogeneity of Regression Coefficients and
the assumption of Linearity were being violated.
Therefore, the ANCOVA analysis was abandoned for a two-
way ANOVA analysis using group and brain hemisphere
preference.
The means for the control and experimental groups
(31 and 30 respectively) are quite close, and the medians
are the same for both groups (30). This would be
expected as the independent variable, computer program


usage, had not yet been applied. The modes are also very
close at 33 for the control group and 30 for the
experimental group. The standard deviations are also very similar at 10.2 and 10.6, indicating similar spreads about the mean for both groups. Lastly, the two groups have very similar distributions. This statistical information supports the comparability of the two randomly assigned groups.
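As an illustrative check (not part of the original analysis), the pretest summary statistics above can be reproduced from the scores listed in Table 4.1:

# Illustrative check using the pretest scores listed in Table 4.1.
import statistics

control = [53, 43, 43, 26, 33, 23, 30, 33, 16, 20, 33, 26, 30]
experimental = [30, 16, 30, 40, 16, 33, 33, 20, 56, 26, 33, 23, 30]

for name, scores in [("control", control), ("experimental", experimental)]:
    print(name,
          round(statistics.mean(scores), 2),     # 31.46 and 29.69
          statistics.median(scores),             # 30 for both groups
          round(statistics.stdev(scores), 2))    # 10.18 and 10.64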
Figure 4.6 presents a line graph of the pretest
scores for both groups graphed on the same chart.
Appendix W provides tables of summary statistics and bar
charts for each group (control and experimental)
separately.


Figure 4.6
Graph of Pretest Scores (%) for Both Groups
[Line graph; x-axis: Pretest Score (%); y-axis: Number of Participants]


Posttest Scores
As expected, the posttest scores, calculated as a
percentage, are much higher than the pretest scores. The
higher scores, in general, would be due to the course
material presented to the participants throughout the
semester.
The mode and median for the control and experimental
groups are all the same value, 84. However, the mean for
the experimental group (87.4) is approximately 5
percentage points higher than the control group (82.2).
In addition, the standard deviation for the experimental
group (7.9) is smaller than for the control group (12.3)
indicating a smaller spread of scores about the mean.
This information indicates that the experimental group,
overall, had higher scores.
Figure 4.7 presents a line graph of the posttest
scores for both groups graphed on the same chart.
Appendix X provides tables of summary statistics and bar
charts for each group (control and experimental)
separately.


Figure 4.7
Graph of Posttest Scores (%) for Both Groups
[Line graph; x-axis: Posttest Score (%); y-axis: Number of Participants]


A two-way ANOVA was calculated using group
membership and brain hemisphere preference as the factors
(or independent variables). The purpose of this test was
to determine if there was a group membership main effect,
a brain hemisphere preference main effect, and/or a
group-by-brain interaction.
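For illustration only, the sketch below shows how such a factorial ANOVA could be computed from the Table 4.1 data using Python's pandas and statsmodels libraries; the original analysis was not run this way, and the sums-of-squares convention shown here is an assumption.

# Illustrative sketch; the original analysis was not run with these tools, and
# the sums-of-squares convention (typ=2) is an assumption.
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Factor levels and posttest scores transcribed from Table 4.1.
data = pd.DataFrame({
    "GROUP": ["CONTROL"] * 13 + ["EXPER"] * 13,
    "BRAIN": ["BALANCED", "RIGHT", "BALANCED", "BALANCED", "BALANCED", "LEFT",
              "BALANCED", "LEFT", "RIGHT", "RIGHT", "LEFT", "BALANCED", "RIGHT",
              "LEFT", "LEFT", "LEFT", "LEFT", "LEFT", "LEFT", "RIGHT", "RIGHT",
              "RIGHT", "RIGHT", "RIGHT", "BALANCED", "BALANCED"],
    "POSTTEST": [92, 84, 84, 84, 92, 68, 76, 76, 84, 92, 100, 52, 84,
                 92, 92, 92, 76, 80, 76, 100, 84, 84, 100, 84, 84, 92],
})

# Two-way factorial ANOVA: main effects for GROUP and BRAIN plus their interaction.
model = ols("POSTTEST ~ C(GROUP) * C(BRAIN)", data=data).fit()
print(anova_lm(model, typ=2))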
To begin, the ANOVA analysis presents the means for
group, brain hemisphere preference, and group-by-brain.
As already mentioned, the experimental group (2) had a
higher mean than the control group (1). See Table 4.3.
Table 4.3
Means for Total Population and by Group
* * CELL POSTTEST MEANS * *
by GROUP BRAIN
Total Population
84.77 ( 26)
GROUP
1 2
(Control) (Exper.)
82.15 87.38
( 13) ( 13)


As shown in Table 4.4, the right brain hemisphere dominant group (3) had a higher mean than the left brain dominant (1) or balanced (2) groups.
Table 4.4
Means for Brain Hemisphere Preference
BRAIN
1 2 3
(Left) (Balanced) (Right)
83.56 82.00 88.44
( 9) ( 8) ( 9)
The analysis has a total of six cells with the
recommended minimum of two people in each cell. See
Table 4.5. Examining all the cells together shows that
the right brain hemisphere dominant participants in the
experimental group (BRAIN-3, GROUP-2) had the highest
mean scores on the posttest. In fact, the balanced and
right hemisphere dominant cells for the experimental
group had the two highest mean scores, followed by the
right hemisphere dominant cell for the control group.
The spread in percentage points between the lowest and
highest means is a full 10 points.


Table 4.5
Means for Brain Hemisphere Dominance by Group
BRAIN
1 2 3
GROUP (Left) (Balanced) (Right)
1 (Control) 81.33 80.00 86.00
( 3) ( 6) ( 4)
2 (Exper.) 84.67 88.00 90.40
( 6) ( 2) ( 5)
Table 4.6 shows all the summary statistics for each
of the six cells compiled into a single source of
information. While this table helps to highlight the
performance of the top end groups, it also points out the
noticeably lower performance of the left hemisphere
dominant individuals and the balanced individuals in the
control group. These two cells had the lowest mean
scores. In addition, the left hemisphere/control group
cell had the lowest median by far. Otherwise, the modes
and medians of the cells are all fairly close. Also note
the large spreads in the standard deviations. The lower
deviations indicate a smaller spread of scores about the
mean which reflects a fairly homogeneous group. On the
other hand, the higher deviations indicate a greater


spread of scores about the mean and, so, reflect a more
heterogeneous group.
Table 4.6
Summary of Statistics by Cell
GROUP BRAIN # of PPL MODE MEDIAN MEAN STD. DEV.
1 Control 1 Left 3 no mode 76 81.33 16.65
1 Control 2 Balanced 6 84 & 92 84 80.00 14.97
1 Control 3 Right 4 84 84 86.00 4.00
2 Exper. 1 Left 6 92 86 84.67 8.17
2 Exper. 2 Balanced 2 no mode 88 88.00 5.66
2 Exper. 3 Right 5 84 84 90.40 8.76
Each group (control, experimental) contained 13
people. This was intentional so as to avoid violating
the ANOVA Homogeneity of Variance assumption. However,
the breakdown of the six cells shows an unequal number of
people in each cell. To be sure the assumption was not
violated, Hartley's F-Max test was calculated (Fmax = S²largest / S²smallest):

Fmax = 277.33 / 16 = 17.33
df = largest cell n - 1 = 6 - 1 = 5
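The following short sketch (illustrative only, not part of the original analysis) reproduces this calculation from the cell standard deviations reported in Table 4.6:

# Illustrative check using the six cell standard deviations from Table 4.6.
cell_sds = [16.65, 14.97, 4.00, 8.17, 5.66, 8.76]
variances = [sd ** 2 for sd in cell_sds]

f_max = max(variances) / min(variances)   # approximately 277.2 / 16.0 = 17.33
df_fmax = 6 - 1                           # largest cell size (6) minus 1
print(round(f_max, 2), df_fmax)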