Citation
Formative assessment in context

Material Information

Title:
Formative assessment in context
Creator:
O'Brien, Julie Oxenford ( author )
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
Language:
English
Physical Description:
1 electronic file (513 pages).

Subjects

Subjects / Keywords:
Psychometrics ( lcsh )
Educational tests and measurements ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )

Notes

Abstract:
This dissertation responds to critical gaps in current research on formative assessment practice which could limit successful implementation of this practice within the K-12 classroom context. The study applies a socio-cultural perspective of learning to interpret a cross-case analysis of formative assessment practice occurring during one mathematics instructional unit in a 5th and one in a 6th grade classroom. It illustrates how a fully defined theoretical foundation deepens understanding of the roles of formative assessment in learning, posits a working definition by which to describe what formative assessment practice looks like and sounds like as it is occurring in actual classrooms, and explains how the classroom social context influences formative assessment practice. The study has implications for future researchers investigating formative assessment practice; practitioners interested in implementing formative assessment practice; and policy makers evaluating the effectiveness of teachers' instructional practice.
Thesis:
Thesis (Ph.D.)--University of Colorado Denver. Educational studies and research
Bibliography:
Includes bibliographic references.
System Details:
System requirements: Adobe Reader.
General Note:
School of Education and Human Development
Statement of Responsibility:
by Julie Oxenford O'Brien.

Record Information

Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
891774345 ( OCLC )
ocn891774345

Full Text
FORMATIVE ASSESSMENT IN CONTEXT
By
JULIE OXENFORD O'BRIAN
B.A., University of Colorado, Boulder, 1991
M.P.P., Georgetown University, 1996
A thesis submitted to the
Faculty of the Graduate School of the
University of Colorado in partial fulfillment
of the requirements for the degree of
Doctor of Philosophy
Educational Studies and Research
2013


2013
JULIE OXENFORD O'BRIAN
ALL RIGHTS RESERVED


This thesis for the Doctor of Philosophy degree by
Julie Oxenford O'Brian
has been approved for the
Educational Studies and Research Program
By
Deanna Iceman Sands, Chair and Advisor
Elliott Asp
Honorine Nocon
Maria Araceli Ruiz-Primo
September 2nd, 2013


Oxenford O'Brian, Julie (Ph.D., Educational Studies and Research)
Formative Assessment in Context
Dissertation directed by Professor Deanna Iceman Sands.
ABSTRACT
This dissertation responds to critical gaps in current research on formative
assessment practice which could limit successful implementation of this practice within
the K-12 classroom context. The study applies a socio-cultural perspective of learning to
interpret a cross-case analysis of formative assessment practice occurring during one
mathematics instructional unit in a 5th and one in a 6th grade classroom. It illustrates
how a fully defined theoretical foundation deepens understanding of the roles of
formative assessment in learning, posits a working definition by which to describe
what formative assessment practice looks like and sounds like as it is occurring in actual
classrooms, and explains how the classroom social context influences formative
assessment practice. The study has implications for future researchers investigating
formative assessment practice; practitioners interested in implementing formative
assessment practice; and policy makers evaluating the effectiveness of teachers'
instructional practice.
The form and content of this abstract are approved. I recommend its publication.
Approved: Deanna Iceman Sands


ACKNOWLEDGEMENTS
So many people provided support and encouragement during the development
of this dissertation. First, I want to thank my husband, Jeff Oxenford and my daughter
Alyse Oxenford for tolerating the gaps in my presence in their lives and their steadfast
support throughout. I want to thank my parents, Betty Keddington, Don O'Brian and
Nancy O'Brian, for maintaining faith that I would finish even when I lost mine. I want to
thank my colleague and friend Mary Romke for reminding me that I would finish and for
celebrating along the way. I want to thank my advisor, Deanna Sands, for reading and
re-reading and challenging me to continue to refine even when I just wanted to be
done. I want to thank the other members of my committee, Elliott Asp, Honorine
Nocon, and Maria Ruiz-Primo for their attention to the details, for reading and re-
reading and still having the energy to talk with me about my findings. I want to thank
Helen Douglas for helping me to collect the data that I analyzed in this study and caring
about the outcome. I want to thank Bonita Hamilton for letting me think aloud,
providing encouragement when I needed it most and helping me figure out how to
identify formative assessment episodes in context. I want to thank members of the
DEMFAP team who helped organize and manage all of the data used in this study: Jen
Feehan, Erin Sago, and Kendra Occhipinti. I couldn't have kept it together without you. I
want to thank two members of my Center staff who also helped me manage data, and
provided copy edits and transcription: Travis Grotewold and Star Hess. Finally, I want
to thank the student services staff who helped me overcome many bureaucratic hurdles
that could have paralyzed my progress, Rebecca Schell and Sandra Snyder-Mondragon.


TABLE OF CONTENTS
CHAPTER
I: INTRODUCTION 1
Defining the Problem 2
Background on the Problem 4
Theoretical Framework 14
Research Questions 16
Methodology 16
Researcher Perspective 20
Organization 21
II: REVIEW OF THE LITERATURE AND THEORETICAL FRAMEWORK 22
An Historical Derivation of the Practice of Formative Assessment 26
Empirical Research on the Attributes of Formative Assessment Practice 39
A Socio-Cultural Interpretation of the Role of Formative Assessment in Learning 68
Bringing Together the Components of the Theoretical Framework 84
III: METHODOLOGY 87
Purpose 87
Research Questions 89
Data Collection 90
Data Analysis 105
IV: CASE REPORT ONE 131
Critical Attributes of Formative Assessment Practice 132
How the Social Context Influences Formative Assessment Practice in Situ 169
V: CASE REPORT TWO 226
Critical Attributes of Formative Assessment Practice 227
How the Social Context Influences Formative Assessment Practice 278
VI: CROSS-CASE ANALYSIS 337
A Comparison of the Critical Attributes of Formative Assessment Practice 337
A Comparison of How the Social Context Influenced Formative Assessment Practice 360
VII: CONCLUSIONS AND IMPLICATIONS 398
Introduction 398
Discussion of the Critical Attributes of Formative Assessment Practice 400
Discussion of How Social Context Influences Formative Assessment Practice 435
Limitations of Findings 449
Conclusions and Implications for Future Research 451
REFERENCES 455
APPENDIX A: INTERVIEW PROTOCOLS 464
APPENDIX B: OBSERVATION NOTES TABLE 480
APPENDIX C: DAILY CLASS SESSION MEMO EXAMPLE 481
APPENDIX D: TEACHER DEVELOPED MATERIALS USED IN LIZA'S CLASSROOM 491
APPENDIX E: TEACHER DEVELOPED MATERIALS USED IN KARI'S CLASSROOM 499


LIST OF TABLES
TABLE
2.1 Historical Development of Formative Assessment in the U.S.................... 23
2.2 The Role of Feedback in Student Self-Regulation.............................. 47
3.1 Data Sources for each Proposition............................................ 98
3.2 Learning Activity Names......................................................108
3.3 Formative Assessment Episode Codes...........................................120
4.1 Frequency of Types of Uses of Student Learning Data (Liza)...................135
4.2 Percent of Data Collection Method by Types of Data Use (Liza)................149
4.3 Learning Targets and Mathematical Tasks (Liza) ..............................152
4.4 Analysis and Interpretation Approaches by Work Product (Liza)................162
4.5 Levels of Uses of Student Learning Information by Types of Uses (Liza).......163
4.6 Teacher Developed Tools (Liza)...............................................177
4.7 Learning Activity Participation Structures and Tools (Liza)..................185
4.8 Typical Daily Activity Pattern...............................................194
4.9 Formative Assessment Episode Frequency by Learning Activity (Liza)...........202
4.10 Learning Activity and Participation Structure (Liza).........................218
5.1 Frequency of Types of Uses of Student Learning Data (Kari) ..................230
5.2 Data Collection Methods by User..............................................252
5.3 Percent of Data Collection Methods by Types of Data Use (Kari)...............255
5.4 Learning Targets and Mathematical Tasks (Kari)...............................258
5.5 Analysis and Interpretation Approaches by Work Product (Kari)................266
5.6 Levels of Uses of Student Learning Information by Types of Uses (Kari).......267
5.7 Levels of Use by Who Was Using...............................................269
5.8 Teacher Developed Tools (Kari)...............................................285
5.9 Learning Activity Participation Structures and Tools (Kari)....................290
5.10 Formative Assessment Episode Frequency by Learning Activity (Kari).............301
5.11 Teacher Object and Student Engagement by Type of Learning Activity.............305
5.12 Frequency of Formative Assessment Episodes by Tools Used.......................312
5.13 Learning Activity by Participation Structure (Kari)............................325
6.1 Relative Frequency of Types of Data Use by Classroom...........................340
6.2 Data Collection Methods across Classrooms......................................348
6.3 Types of Learning Activities across Classrooms.................................365
6.4 Comparison of Participation Structures for Learning Activities.................385
7.1 Potentially Critical Attributes of Different Types of Data Use.................416
7.2 Attributes of Formative Assessment Episodes....................................433


LIST OF FIGURES
FIGURE
1.1 The actions included in a formative assessment episode........................ 14
1.2 The components of the social context of a classroom........................... 15
2.1 The actions included in a formative assessment episode........................ 39
2.2 An illustration of formative assessment as a mediator of learning............. 80
2.3 The components of an activity system as identified by Engestrom............... 83
3.1 The units of analysis for the social context of formative assessment episodes occurring
within a mathematics instructional unit...................................... 93
3.2 The data collection sequence.................................................. 97
3.3 An illustration of my analysis approach.......................................106
3.4 An example of an expanded daily class session memo outline....................110
4.1 An illustration of data collection methods....................................146


Chapter I
Introduction
The purpose of the research study described in this dissertation was to
illuminate how formative assessment practice works in real classrooms and how the
construction of the social context of classrooms influences, and is influenced by, the
formative assessment practice that occurs within them. Two theses guided this study.
The first was that I could establish what counts as formative assessment from a socio-
cultural perspective, a formative assessment episode, and use that definition to identify and describe the attributes of formative assessment practice within two case study
classrooms. The second was that the social context of the classrooms would influence,
and be influenced by the formative assessment occurring within them. To examine
these theses, I applied a socio-cultural perspective of learning to interpret a cross-case
analysis of formative assessment practice occurring during one mathematics
instructional unit in a 5th and one in a 6th grade classroom.
In this chapter, I provide context and introduce the major components of this
dissertation. I first provide background information about the problem of focus, and
then introduce the theoretical framework and its associated components, the research
questions, and the research methods. Since no interpretation is independent from the
interpreter, I share relevant personal background information to provide a window to
the perspective I brought to the study. Finally, I describe the organization of the
remainder of the dissertation.


Defining the Problem
There is widespread agreement within the educational research community that
formative assessment plays a critical role in learning (Black & Wiliam, 1998a; Hattie &
Timperley, 2007; Li, Yin, Ruiz-Primo & Morozov, 2011; Shepard, 2000). Numerous
reports, studies, and "how to" books aimed at practitioners (K-12 teachers and
administrators) recommend formative assessment among practices most likely to
improve student learning (Brookhart, 2009; CCSSO, 2008; Hattie, 2009; Marzano, 2001;
OECD, 2005; Popham, 2009, 2011; Wiliam, 2011). A substantial body of research
evidence supports claims that formative assessment has a significant impact on student
learning (Black & Wiliam, 1998a; Hattie, 2009; Hattie & Timperley, 2007; Herman,
Osmundson, & Silver, 2010; James et al., 2007; Kluger & DeNisi, 1996; Rodriguez, 2004;
Ruiz-Primo & Furtak, 2006, 2007; Torrance & Pryor, 2001). This literature, however, has
a number of critical gaps and recent studies have identified challenges for teachers in
effectively implementing formative assessment practices (Heritage, Jones & White,
2010; Herman, Osmundson, & Silver, 2010).
The first gap is the lack of a consistent definition of formative assessment
practice and the critical attributes of that practice. Definitions vary from using a web-
based program to comment on students' homework (Lipnevich & Smith, 2008), to
providing extensive written feedback on student work (Ponte, Paek, Braun, Trapani, &
Powers, 2009), to engaging students in a sequence of question-answer-response (Ruiz-
Primo & Furtak, 2007; Chin, 2006). The attributes necessary for a practice to be
"formative assessment practice" are unclear within these various definitions. Even
authors of literature reviews who cite one another define the critical attributes of
formative assessment somewhat differently (Black & Wiliam, 1998a; Brookhart, 2009;
Frohbieter et al., 2011; Hattie & Timperley, 2007; Li, Yin, Ruiz-Primo, & Morozov, 2010;
Mory, 2004; Shute, 2008).
Second, the lack of a fully defined theoretical foundation has hampered
consistent conceptualization of formative assessment (Black & Wiliam, 2009; Brookhart,
2004; Perrenoud, 1998). In an influential review, Black and Wiliam (1998a) took a
pragmatic approach to describing formative assessment practices with evidence of a
positive impact on student learning without contextualizing the practices within a
theoretical model of learning (Perrenoud, 1998). In some contexts, formative assessment has a positive impact on learning; in others, it does not (Kluger &
DeNisi, 1996). However, without a theoretical framework, it is difficult to explain why.
Finally, the lack of a consistently defined theoretical foundation for formative
assessment has limited efforts to identify aspects of the context that amplify or diminish
the effects of formative assessment practice. Without an understanding of the context
within which formative assessment is most likely to have a positive impact on learning,
educators and educational leaders will continue to implement these practices with
uneven success.
I responded to these gaps through the study described herein by applying a
socio-cultural perspective of learning to researching the presence and critical attributes
of formative assessment through a cross-case analysis of formative assessment
practices as they occur in two math classrooms (one 5th and one 6th grade). My primary
audience is researchers, with secondary audiences including practitioners and
educational policy makers.
The data collected for this study were part of a 4-year, IES-funded research
project, Developing and Evaluating Measures of Formative Assessment Practice
(DEMFAP) (PIs Ruiz-Primo & Sands, 2009). I was part of the team who developed the
initial DEMFAP project proposal and negotiated school district participation in the
project. I was the lead researcher collecting data in one of the classrooms included in
this study and shared lead responsibility for collecting data in the other classroom. The
DEMFAP project has a broader focus than this study. It aims to develop measures to
identify the presence and depth of formative assessment practice occurring within
classrooms. This study only focused on identifying the critical attributes of formative
assessment practice in two classrooms and explaining how the social context of the
classrooms within which formative assessment practice occurred influenced that
practice.
Background on the Problem
During the last twenty-five years, a number of meta-analyses and literature
reviews have focused on the impact of formative assessment practice on student
learning outcomes. In their 1998 review of over 250 empirical studies, Black and Wiliam
(1998a) found that when teachers used assessment data formatively to inform their
teaching and students' learning, they produced learning gains with effect sizes ranging
between 0.4 and 0.7. The impact of this review within schools and districts was
magnified when practitioners were exposed to it through a companion article published
in the Phi Delta Kappan (Black & Wiliam, 1998b), a practitioner-focused magazine.
While some claims in the Black & Wiliam review have come under fire (Bennett,
2011), interest in and commitment to formative assessment as a reform strategy
remains strong, as evidenced by its inclusion in recent U.S. Department of Education
Race to the Top grants (Herman, Osmundson, Dai, Ringstaff, & Timms, 2011). In
addition, despite contradictory definitions, formative assessment has been considered a
classroom version of "data-driven decision making," a strategy many state and national-
level policy makers advocate as a mechanism for transforming low performing schools
(Frohbieter et al., 2011). Initial evidence and subsequent policy support for formative
assessment have been strong enough to ensure that policy makers and educators will
continue to advocate for implementation of formative assessment within K-12 schools
in the U.S., regardless of challenges to the research base. Given this context, it becomes
critical to understand what formative assessment practice looks like in actual K-12
classrooms and how the classroom context influences formative assessment practice.
I followed Shepard (2000), James (2008), and Black and Wiliam (2009) in building
upon socio-cultural theories of learning (Brookhart, 2004; Gipps, 1999; Lave & Wenger,
1991; Gee, 2008; Moss, 2008; Torrance & Pryor, 1998, 2001; Vygotsky, 1978; Wells,
1999) to contextualize formative assessment within a model of learning. Based on this
perspective, I derived a definition of formative assessment practice, developed a
description of the role of formative assessment practice in learning, and identified the
components of the social context of classrooms to consider as influencers or shapers of
formative assessment practice.
Defining formative assessment.
I considered the historical development of assessment practice generally, and
classroom uses of assessment results more specifically, to derive a definition of
formative assessment practice. My context for this derivation included the shifts in the
dominant learning theories influencing public K-12 education in the U.S. from its
inception.
Once K-12 education became available to all citizens, supported by taxes, and
mandated for certain ages of children, the public asked for an accounting of the value of
that education. This accounting, in the form of student progress reports, was the first
classroom use of assessment in the U.S. As access to secondary education expanded,
more efficient methods for accounting for the value of schooling were developed,
ultimately resulting in a 5-point grading scale used in the U.S. by 1918 (Guskey & Bailey,
2001) and still used today in many high schools (Marzano, 2006).
Assessment administered outside of the classroom expanded significantly during
the 1900s following publication of the first intelligence test in 1905 and of achievement
tests intended to provide direction for failing schools in 1908. Both types of tests were
standardized, intended to be objective, and designed to sort students based on ability
(Shepard, 2006; Gipps, 1996). The dominant learning theory of the time, behaviorism,
shaped these tests. Behaviorism advocated the use of testing to ensure students
mastered one objective before proceeding to the next. Consistent with these theories,
experts developed achievement tests outside of classrooms, and districts used them to
identify program level improvements rather than to support teachers' instructional
decisions (Shepard, 2006). Behaviorist learning theories and associated achievement
tests established beliefs among teachers that assessment needed to be separate from
instruction, uniformly administered, and "objective" to be fair (Shepard, 2000).
It was not until 1967 that the term "formative evaluation" was first used by
Michael Scriven as a process that included using student assessment data to evaluate
the effectiveness of school programs during, rather than just after, implementation
(Cizek, 2010; Wiliam, 2010). Bloom subsequently, in 1971, suggested that classroom
assessment should be part of formative evaluation "to both provide students feedback
on their learning progress and to guide correction of learning errors" (Guskey, 2010, p.
108).
Initial interest in formative uses of classroom assessment occurred as cognitive
theories of learning emphasizing the importance of the process of learning rather than
just mastery of discrete objectives were gaining prominence in the U.S. However,
despite the growing influence of cognitive learning theories on instructional practice, it
was not until 1989 that U.S. educational measurement experts first suggested a shift
"away from pass-fail post-tests of student mastery to richer assessments of students'
understandings and proficiency in a domain" (Shepard, 2006, p. 625). Even then, they
developed these newer tests outside of the classroom and administered them in
standardized formats (Shepard, 2006). Today, almost all states use the same type of
achievement tests for school and district accountability, and many states are beginning
to use these tests in educator evaluation.
The measurement community outside of the U.S. responded more quickly to
changes in learning theories. The Assessment Reform Group was formed in the United
Kingdom in 1989 to focus on the relationship between classroom assessment and
teaching and learning (Shepard, 2006). Two of its members, Paul Black and Dylan Wiliam (1998a), conducted the influential review of literature on formative assessment
practice and learning, referenced above, launching much of the current interest in
formative assessment practice.
The popularization of formative assessment following the Black and Wiliam
(1998a) review resulted in widespread appropriation of the term. In the U.S., vendors of
interim/benchmark assessments (formal tests administered several times during a
school year) called their products "formative assessments," inappropriately claiming that this evidence supported the use of these products (Shepard, 2005a).
In 2006, because of widespread inconsistency in the definition of formative
assessment in the U.S., the Council of Chief State School Officers (CCSSO) brought
together educational leaders and researchers to develop a common definition (CCSSO,
2008). They defined formative assessment as "a process used by teachers and students
during instruction that provides feedback to adjust ongoing teaching and learning to
improve students' achievement of intended instructional outcomes" (p. 3). This
definition described formative assessment as a process or practice that occurs during the learning process rather than a type of instrument. It also highlighted the relationship
between formative assessment and feedback: "formative assessment . . . provides feedback [emphasis added] to teachers and students" (CCSSO, 2008).
In developing the definition of formative assessment used in this study, I began
with the CCSSO definition but also considered efforts by Ramaprasad (1983), Sadler
(1989), and Black and Wiliam (1998a) to define formative feedback as a means of adding
greater depth to the CCSSO definition. These authors suggested that feedback is
formative only if it includes information about actual performance, some point of
reference for contextualizing that information, and mechanisms for comparing the two
and closing any apparent gap. I integrated the CCSSO definition of formative assessment
and these additional clarifications regarding when feedback is formative in my
definition.
In collaboration with my colleagues from the DEMFAP project, I built on
descriptions of other authors (Airasian, 1991; Bell & Cowie, 2001; Mavrommatis, 1997)
who coined the term "assessment episode" to operationalize my definition of formative
assessment practice within a classroom setting. I defined a formative assessment
episode as encompassing both the concept of an episode (something that involves
discernible actions) and the definitions of formative assessment and formative feedback
described above. Similar to the DEMFAP project, I defined a formative assessment
episode as including the following steps or actions: 1) Specifying (explicitly or implicitly)
a learning target(s); 2) Collecting data (through formal or informal methods) about the
actual level of student learning in reference to the learning target(s); 3) Analyzing
student learning data (in reference to the learning target); 4) Interpreting student
learning data (in reference to teacher practice and/or student learning needs); and,
5) Taking action, based on the interpretation, to improve student learning and/or
teacher practice. Both the teacher and students can be the actors taking these steps.
However, assessment episodes are "formative" only if they include the last step, the
teacher and/or student(s) using the information to improve teaching and learning.
A socio-cultural interpretation of the role of formative assessment in learning.
My next step was to describe the role of formative assessment in learning from a
socio-cultural perspective and identify what aspects of the social context of the
classroom had the potential to influence formative assessment practice. My description
stemmed from socio-cultural theories of learning based upon the seminal contributions
of Lev Vygotsky, and building upon the many authors who expanded Vygotsky's work to
school-based contexts (Cole, 1990, 1999; Cole & Engestrom, 2006; Gallimore & Tharp,
1990; Giest & Lompscher, 2003; Gipps, 1999, 2002; Lave & Wenger, 1991; Moss, 2008;
Shepard, 2000; Torrance & Pryor, 1998; Wells, 1999, 2002; Wertsch, 1990).
Vygotsky (1978) conceptualized learning and development as situated within a
social context. According to Vygotsky, learning is internalized social interaction. This
conceptualization of learning places socio-cultural theories somewhere between
behaviorist and constructivist theories. Conceptualizing learning as social emphasizes
the importance of considering teacher-to-student and student-to-student interaction in
any investigation of learning (Wertsch, 1990). I describe this as the social context of the
classroom. As a result of taking this perspective, I focused analysis on the learning
activity occurring within classrooms.


Building upon Vygotsky's work, Lave and Wenger (1991) developed the concept
of legitimate peripheral participation (LPP) to describe how individuals participate in
activity that includes learning. LPP suggests that "how" learners participate in the social
interaction that happens in the classroom matters, and emphasizes the importance of
considering teacher and student roles in learning activity and how those roles are
structured by behavioral norms.
Vygotsky (1978) introduced two additional concepts that helped to further
explicate the role of formative assessment in student learning: tools as mediators of learning, and the zone of proximal development. Vygotsky (1978) asserted that learning
is always mediated by culturally and historically developed artifacts and tools; tools are
more than aids to learning; tools shape the learning. According to Wertsch (1990), we can
only understand human activity if we consider the tools that mediate it. From this
perspective, formative assessment practice itself is a cultural tool that mediates
learning. In other words, we can conceptualize the role of formative assessment in
learning as a social practice that mediates learning.
If formative assessment is a social practice, then cultural tools can also mediate
formative assessment. Examples of the tools that mediate formative assessment include
the following: tasks used in data collection, learning targets on which data collection
focuses, and protocols or material resources used to structure informal interactions
regarding evidence of student learning. I considered each of these types of tools in
developing my understanding of how the social context of a classroom influences
formative assessment practice.


Vygotsky's concept of the zone of proximal development (ZPD) was perhaps his
most important contribution to explaining how learning occurs (Wells, 1999). A learner's
ZPD is the knowledge, skills and understanding that she/he could not demonstrate on
her/his own, but could with help (Vygotsky, 1978). This conceptualization of the ZPD
highlights one key difference between socio-cultural learning theories and constructivist
theories; it shifts the focus away from what learners can demonstrate on their own
(constructivist) and towards what they can demonstrate with help in the process of
learning (socio-cultural). Each learner's ZPD becomes the target for assistance provided
by the teacher. By establishing a target for instructional assistance, formative
assessment also mediates teacher instructional practice.
The ZPD further implies that it is important to collect data about student learning
during learning activity, while students have access to help from the teacher or their
peers, and within the social context in which learning activity takes place (Shepard,
2000; Gipps, 2002). This means that one critical attribute of data collection, as part of
formative assessment practice, is that it must occur during learning while students have
access to help from the teacher or other students.
Moss (2008) asked what should count as assessment in classrooms from a socio-
cultural perspective. She recommended a definition of assessment that goes beyond
instruments used to collect student performance information and defined assessment
as social practice. She asserted that informal interactions regarding evidence of student
learning are as important to consider as the formal ones (those that are more typically
called 'assessment'). Jordan and Putz (2004) also defined a continuum of assessment
practice from formal to informal that included documentary (resulting in written or
electronic evidence of learning), discursive, and inherent assessment. Both discursive
and inherent assessment fit within Moss's expanded definition of what should count as
assessment in classrooms. Jordan and Putz further described something they called a
"social object," shared discursive assessment that becomes something to which the
teacher and students can later refer. Both Moss's description of what should count as
assessment from a socio-cultural perspective and the Jordan and Putz continuum of
assessment practice highlight the importance of explicitly considering informal data
collection as part of what counts as formative assessment.
A socio-cultural orientation to learning helps define the role of formative
assessment in learning and suggests a framework for identifying the critical attributes
of formative assessment practices. This perspective highlights the need to consider the
tools used, who participates, the nature of that participation (student and teacher
roles), and the rules and norms that guide participation in explaining how the social
context of the classroom influences and is influenced by formative assessment practice
(Moss, 2008). Thus, I followed Moss (2008) and Oxenford-O'Brian, Nocon, and Sands
(2010) in using the components of an activity system proposed by Engestrom (1987,
1993, 2001) as an analytical tool to draw attention to the social context of K-12
classrooms within which formative assessment practices occur.


Theoretical Framework
My theoretical framework emerged from the literature and included two
constituents: a definition of a formative assessment episode, and the identified elements of the classroom social context that influence formative assessment practice.
I historically derived a definition of formative assessment, and then
operationalized the actions included in the practice of formative assessment as a
formative assessment episode. This allowed me to distinguish formative assessment
from other related practices found in classrooms. Figure 1.1 depicts the actions included
in a formative assessment episode.
Figure 1.1. The actions included in a formative assessment episode


I adopted a socio-cultural perspective to describe the role of formative
assessment in learning in the K-12 educational context, which supported my
identification of the key elements of the social context of classrooms that are potential
shapers of formative assessment practices. I adopted the components of an activity
system proposed by Engestrom (1987, 1993, and 2001) as an analytical tool to draw
attention to the social context of K-12 classrooms within which formative assessment
practices occur. As a result, I focused on the following aspects of the social learning
context: the object (the purpose of learning activity), tools (the physical and semiotic artifacts that mediate learning), participants and roles (how teachers and learners participate in
learning activity), and rules or norms (what guides or structures teacher and student
participation in learning activity). Figure 1.2 identifies theoretical categories I later used
to describe the social context of K-12 classrooms.
Figure 1.2. The components of the social context of a classroom.


Research Questions
My study was guided by two research questions: 1) What are the critical attributes of formative assessment practice as it occurs in K-12 classrooms? and 2) How does the social context of a K-12 classroom (i.e., object, tools, participation/roles, and rules/norms) influence formative assessment practice in situ? I focused my study further through propositions for each question that actualize the details of the theoretical
framework while simultaneously creating boundaries for data analysis. I introduce these
propositions in Chapter 3.
Methodology
I chose to conduct individual case studies in two classrooms and a cross-case
analysis to meet the purpose and respond to the study research questions. Including
more than one classroom in the study enhanced my description and explanation of
formative assessment practice in situ. I considered the multiple formative assessment
episodes occurring in one upper-elementary and one middle-level classroom during one
mathematics instructional unit in each classroom. During the second year of the
DEMFAP project, we collected data in fourteen upper-elementary and middle-level
classrooms in three different Colorado school districts. The two classrooms selected for
this study were among the fourteen.
One focus for each case was the attributes of formative assessment
practices evident across the many episodes of formative assessment practice that
occurred within a single instructional unit. The second focus of each case was the learning activities within which formative assessment episodes occurred and the relationship
between the social context of those learning activities and the attributes of the
formative assessment episodes.
Formative assessment practices occur during instructional units within the social
context of the classroom; however, that context often varies as the teacher engages
students in different learning activities during the instructional unit. As a result, I
defined a learning activity as a subcomponent of a class session identified by a change in
one or more component of the social context (object, tools used, roles, participation
structures, and norms). For example, students completing a "warm-up" task is a learning activity distinct from students grading homework from the day before; these two activities have different objects, use different tools, and involve students engaging in
different roles. Both are sub-components of a single class session and the instructional
unit. I located formative assessment episodes within the constituent learning activities
of the observed unit of instruction. I was able to consider then how the social context of
different learning activities influenced, or were influenced by, their respective formative
assessment episodes.
DEMFAP researchers and I collected data for all class sessions within a single
instructional unit in both of my case study classrooms. These data included teacher
interviews (entrance, pre- and post-unit, and daily post-observation), video/audio-
recordings of daily mathematics class sessions across the entire unit, in-person class
session observations, and classroom artifacts (digitized). The pre- and post-unit
interviews in each classroom were structured. The daily post-observation interviews
were semi-structured, with questions based on what had occurred during the class
session preceding the interview. A videographer recorded each class session during the
unit and the teacher wore a microphone. In addition to the videographer, a separate
researcher was in the classroom capturing observation notes during each class session.
Observers captured notes about the social context of each learning activity during each
class session. I organized the observation notes by the key components of the social
context of the classroom, with an additional column to list formative assessment
episodes occurring during each learning activity. Artifacts, including all handouts, pages
used from instructional resources, and student work done during each class session,
were also collected. Each data source provided information related to both of the study
research questions.
My general approach to analysis within each classroom, resulting in a case
report, included the following steps:
1. Characterize the classroom learning context for the unit overall (unit learning
context) based on multiple types of data.
2. For each class session, code the video/audio recordings, observation notes, and
transcripts of the post-observation interviews (using process codes for every
step of each formative assessment episode and descriptive codes for elements of
the classroom social context).
3. Develop a class-session memo (using coded video and audio recordings and
remaining data sources) including the following:
a. Characterize the learning context for each type of learning activity that
occurred (activity learning context),
b. Identify formative assessment episodes (based on data about student
learning being used and associated data collection of a discernible learning
target being identified), and
c. Locate each formative assessment episode within the learning activity during
which it occurred.
4. Create a table of formative assessment episodes (one row per episode); populate
the table based on data captured in the class session memos supplemented by
additional reference to the primary data sources as needed.
5. Analyze and characterize the formative assessment episodes that occurred across all
of the class sessions within the unit, responding to the propositions associated with
the first research question.
6. Analyze and characterize the social context of each type of learning activity that
occurred across all of the class sessions in the unit, describing how each component
of the social context related to the formative assessment episodes occurring within
the learning activity, and responding to the propositions associated with the second
research question.
After both case reports were developed, I compared the unit and activity
learning contexts, formative assessment episodes, and how the activity learning
contexts related to the formative assessment episodes across the two classrooms to
develop a cross-case comparison report. Then I provided a discussion of my results,
limitations of the study and implications for future research.


Researcher Perspective
For the past ten years, I have been the director of a training center located
within a school of education at a university. I established the center at the university to
build capacity among in-service teachers and educational leaders. Two multi-year,
multi-million dollar grants provided start-up financial support for the center at the
university, which then focused on promoting "data-driven instructional practice" within
school districts and teacher preparation programs.
Early in the genesis of both projects, it became clear to me that much of the
educator training that was available in the marketplace focused on data too distant
from the daily decisions of teachers to have a meaningful impact on instructional
practice. I learned about the work of Paul Black and Dylan Wiliam in the U.K. after
several attempts to make use of existing training resources developed in the U.S. The
U.K. work fit the projects' intended impact on instructional practice better than
anything else we had come across. Subsequently, we developed two strands of training,
the first for educational leaders, focused on data-driven decision-making at the school
level and the second, for teachers, focused on classroom formative assessment practice.
We based the second strand on international research available at that time.
Project staff saw amazing results in some classrooms. Learners who started more
than a year behind caught up. Teachers transformed the experience of their learners,
and learners were able to talk about what it meant to take responsibility and own their
learning. We saw no change in other classrooms with the same training and coaching.
We didn't know why or what could account for these differences. This was the genesis
of my interest in better understanding how formative assessment practice works in real
classrooms, what attributes of formative assessment practice are most important, and
under what conditions it can result in meaningful changes for students.
I brought to this study a belief in the power of formative assessment to
transform learning experiences for students. I also brought an appreciation for the work
of Paul Black and Dylan Wiliam and a respect for the role their efforts have played in
moving teachers towards using formative assessment practices in schools in my home
state. However, I also brought skepticism regarding implementation of formative
assessment practice: something about the practice makes it difficult for many teachers
to implement. I believe it is difficult for some educators to make the necessary shifts in
their instructional practice and I wanted to understand better why.
Organization
I organized my dissertation into seven chapters. They include the following:
Chapter One: Introduction to the problem, theoretical perspective, and methods;
Chapter Two: Review of the literature about the problem and the theoretical
perspective, and presentation of the theoretical framework;
Chapter Three: Methodology of the study including research questions and
associated propositions, data collection, and data analysis;
Chapter Four: Case study report for the first classroom;
Chapter Five: Case study report for the second classroom;
Chapter Six: Cross-Case analysis; and
Chapter Seven: Results and implications for practice and future research.


Chapter II
Review of the Literature and Theoretical Framework
Formative assessment is learning activity that involves students interacting with
one another, the teacher, and learning resources situated within the context of
collective classroom activity. Formative assessment shapes social development,
distributed learning, and individual cognitive development within the K-12 classroom
context. Thus, I build upon socio-cultural theories of learning (Brookhart, 2004; Gipps,
1999; Lave & Wenger, 1991; Gee, 2008; Moss, 2008; Torrance & Pryor, 1998, 2001;
Vygotsky, 1978; Wells, 1999) to contextualize formative assessment practice within a
model of learning to develop the theoretical framework for this study.
From a socio-cultural perspective, "the present state can be understood only by
studying the stages of development that preceded it" (Wells, 1999, p. 5). To understand
better the mix and context of formative assessment practices present in K-12
classrooms within the US today, I consider the genesis of formative assessment practice
at the cultural level within the US over the past century, define formative assessment,
and develop the concept of a formative assessment episode. Then I review empirical
research related to the attributes of formative assessment practice associated with
improvements in student learning to deepen my description of what effective formative
assessment practice includes. Finally, I apply a socio-cultural perspective to describe the
role of formative assessment in teaching and learning, to further develop my framework, and to explain how the social context of a K-12 classroom influences, and is
influenced by, formative assessment practice.


Table 2.1
Historical Development of Formative Assessment in the U.S.
(Columns: Year, Assessment, Classroom Use of Assessment, Learning Theory)

1818: Public education becomes widely available in the US.
1850-1900: Formal progress evaluations of student work introduced as measures of the impact of public education.
Early 1900's: Rapid growth of high schools perpetuates a shift to percentage grading.
1905: Binet publishes intelligence tests used to sort students into learning experiences based on ability.
1908: Thorndike publishes achievement tests, using the same formats and scoring as IQ tests, to compare schools.
1912: Reliability of percentage scores as grades questioned.
1918: Most high schools move to a grading scale with fewer categories (A, B, C, D, F).
1920's (Assessment): Use of standardized, objective achievement tests expands based on the army's success in using them to place recruits during WWI.
1920's (Classroom use of assessment): Behaviorist learning theories enforce beliefs among teachers that assessment needs to be separate from instruction, uniformly administered, and "objective" to be fair.
1920's (Learning theory): Behaviorist theories of learning gain momentum in the US. Vygotsky conducts research in cognitive development, forming the basis of socio-cultural learning theories, but it is not available outside of Russia.
1930's (Assessment): Political rhetoric suggests that assessments should be used to improve learning; tests still developed and used outside the classroom.
1930's (Classroom use of assessment): Grading on a curve is introduced to reflect the same normal distribution as intelligence.
1931: Thorndike claims evaluative feedback (rewards and punishment) facilitates learning.
Early 1960's: Cognitive theories of learning gain prominence with a focus on Piagetian constructivist theories.
1962: Vygotsky's socio-cultural learning theoretical research first published in the West.
1967: Scriven introduces formative evaluation as using student achievement data to evaluate school programs and curriculum during rather than after the program.
1971: Benjamin Bloom asserts classroom assessments should be used "to provide students feedback on their learning progress and to guide correction of learning errors."
1977: Bandura proposes Social Learning Theory: 1) people learn through observation; 2) internal mental states are essential to learning; and 3) learning does not always result in behavior change.
1983: Ramaprasad defines feedback as "information about the gap between the actual level and the reference level of a system parameter which is used to alter the gap."
1989 (Assessment): Shift away from pass-fail post-tests of student mastery to richer assessments of students' understanding; assessments still standardized and developed outside of the classroom. The UK Assessment Reform Group is formed to consider the relationship between assessment and learning. Sadler proposes, "Formative assessment is concerned with how judgments about the quality of student responses can be used to shape and improve the student's competence by short-circuiting the randomness and inefficiency of trial-and-error learning."
1989 (Classroom use of assessment): Curriculum developers propose alternatives to standardized tests for use in classrooms.
1989 (Learning theory): Constructivist influence on instruction gets ahead of assessment.
1998: Black & Wiliam review reports that practices that use assessment formatively produce large learning gains; a companion article aimed at practitioners is published.
2000: National Research Council report, How People Learn: Brain, Mind, Experience and School, a summary of the last 40 years of research on learning, emphasizes constructivist and socio-cultural learning theories (Bransford, Brown, & Cocking, 2000).
Early 2000's: US vendors of interim assessments make inappropriate claims to evidence that formative assessment improves learning.
2001: National Research Council publishes Knowing What Students Know: The Science and Design of Educational Assessment, mapping new directions for assessment in the US, including a greater role for classroom assessment (Pellegrino, Chudowsky, & Glaser, 2001).
2006: Council of Chief State School Officers (2006) develops a common definition of formative assessment.
2009: The Third International Conference on Assessment for Learning proposes another revised definition of formative assessment.
An Historical Derivation of the Practice of Formative Assessment
Historically deriving the current form of the practice of formative assessment in
the US includes consideration of the antecedents and practices that are constituents of
formative assessment as it occurs within the context of K-12 classrooms in the US. Thus,
I consider the development of assessment practice more broadly, evolving classroom
uses of assessment (including grading), and related shifts in dominant theories of
learning from the beginning of public education in the U.S. Table 2.1 is an abbreviated
timeline of developments between the mid 1800's and the present day.
Most trace the beginning of public education in the U.S. to the efforts of Horace Mann and Henry Barnard, who advocated for schools paid for by taxes and free to all
citizens by the mid 1800's (Public Broadcasting Service, 2001). Associated mandatory
attendance laws (passed in all states for the elementary level by 1818) brought greater
attention to questions about what students were getting out of going to school. The
result was an early classroom use of assessment: formal student progress
evaluations, introduced in the form of narrative descriptions of the skills that students
had gained. By the early 1900's, rapid growth of high schools created the need for more
efficient mechanisms for providing information about what students were getting out of
going to school, resulting in another use of assessment results, to calculate percentages
to certify accomplishment in different subject areas. In 1912, the reliability of
percentage scores was questioned when a study of English and geometry percentage
grades in high school found dramatic variation in teachers' evaluation of the same
student work. By 1918, most high schools moved to a grading scale with fewer
categories (A, B, C, D, or F) to reduce the variability in grading if not to address grading
subjectivity (Guskey & Bailey, 2001). By the 1930's the assumption that intelligence was
normally distributed in the population began to influence grading. The practice of
grading on a curve became more common so that grades would reflect the same
distribution as intelligence. It wasn't until the 1940's that the assumption that
intelligence was normally distributed began to be challenged (Gipps, 1999). This scale
(A, B, C, D, F) still dominates grading practices in K-12 contexts in the US today
(Marzano, 2006).
Use of assessment outside of classrooms followed a similar progression. In 1905, a French psychologist, Binet, published the first intelligence test, designed for and largely
used to sort students into learning experiences based on their abilities. "Items of an
educational nature were chosen for their effectiveness in distinguishing between
children who were judged by their teachers to be bright or dull," (Gipps, 1999). Then in
1908, Thorndike published achievement tests aimed at documenting the need for and
setting direction for failing schools. These tests focused on measuring the information
gained by those being assessed (rather than their reasoning), and because they were
developed in the same time period as early IQ tests, shared the same formats and
scoring approach based on discerning individual differences. Their standardized format
allowed for comparisons across schools and was seen as a remedy for the unreliability of
individual teacher-tests (Shepard, 2006).
The use of standardized objective achievement tests expanded further after
World War I because of the army's success in using objective tests to place recruits into
different roles (Gipps, 1999). While the rhetoric at the time suggested assessment
should improve learning, this was to be accomplished by district leadership using the
results of tests that were developed outside the classroom to shape programmatic
improvements rather than by teachers making decisions about changes in their own
practice (Shepard, 2006). This use of standardized achievement tests in the U.S.
continues through the present day.
The expansion of standardized objective testing corresponded with the
development of behaviorist learning theories in the U.S. which "conceived of learning as
the accumulation of stimulus-response associations," and advocated the use of testing
to "ensure mastery before proceeding to the next objective," (Shepard, 2000, p. 5). Not
surprisingly, Thorndike was a leading figure in both objective testing and behaviorist
approaches to learning (Shepard, 2000). Within a behaviorist paradigm, it is important
to determine if connections have become habitual. Thus, the role of assessment is to
measure typical performance, or the performance a student would be likely to sustain
over time (Gipps, 1999). Assessment items that require no judgment on the part of the
scorer, such as multiple-choice items, are preferable to those in which the scorer plays a
role, such as extended response or essay questions. Educators can then use assessment
results to provide the appropriate response to learners. The behaviorist learning
paradigm helped establish beliefs among teachers that assessment needs to be separate
from instruction, uniformly administered, and "objective" to be fair (Shepard, 2000).
Within the behaviorist theoretical framework, feedback plays an important role
in the learning process. According to Shepard (2006), "one of the oldest findings in
psychological research is that feedback facilitates learning," (p. 631). However, from a
behaviorist perspective, the feedback that teachers provide to students should be
evaluative; it should be in the form of rewards or encouragement for mastering content,
and punishments or discouragement for failing to master content (Tunstall & Gipps,
1996). The notion that assessment or feedback about assessment could be formative
came much later.
Russian psychologist Lev Vygotsky conducted extensive research on
developmental child psychology and cognitive development, forming the basis of socio-
cultural learning theories, during the same period that behaviorist theories dominated
in the U.S. However, his work was not available outside of Russia until the 1960's.
A number of authors give Scriven credit for coining the term formative
evaluation in 1967 in a monograph created for the American Educational Research
Association (Cizek, 2010; Wiliam, 2010). Scriven introduced this term in the context of
using student achievement data as part of evaluating the effectiveness of school
programs during rather than just at the end of the program (Cizek, 2010).
The idea that assessment results could be used in formative evaluation gained
greater recognition when Benjamin Bloom and his associates introduced a related idea
in the Handbook of Formative and Summative Evaluation of Student learning published
in 1971, (Cizek, 2010). In it, Bloom proposed a solution to the problem of teachers failing
to vary instruction even while students came to classrooms with vastly different
backgrounds and skills. He suggested that teachers could use classroom assessments as
"learning tools, both to provide students feedback on their learning progress and to
guide correction of learning errors," (Guskey, 2010, p. 108). Then Bloom went on to
describe how teachers could use assessments formatively as a part of regular classroom
routines (Guskey, 2010).
This initial description of formative uses of assessment occurred about the same
time that cognitive theories of learning became more prominent in the U.S.
Constructivism grew out of cognitive psychological research and offers that learning is
an individual process of constructing meaning from experience. As Phillips (2000) put it,
"knowledge is made, not acquired" (p. 7). Constructivist theories of learning, such as
those espoused by the Swiss scientist Jean Piaget, began gaining momentum in the
1960's (Crain, 1992).
The influence of constructivist theories on assessment lagged behind the
influence on instruction (Shepard, 2000). It was not until the third edition of Educational
Measurement published in 1989 that the authors first suggested a shift "away from
pass-fail post-tests of student mastery to richer assessments of students'
understandings and proficiency in a domain" (Shepard, 2006, p. 625). Even then, the
authors assumed the assessments should influence classroom decisions, but experts
should develop them outside of the classroom and provide them in standardized
formats (Shepard, 2006). While the U.S. measurement community continued to focus
on large-scale standardized assessment, curriculum developers and subject-matter
experts began to develop alternatives to standardized tests for use in classrooms. These
alternatives included informal strategies for collecting information about learning such
as checking on students' reading as it was happening, observing students in action,
teacher questioning, and student journaling (Shepard, 2001).
The measurement community outside of the U.S. responded more quickly to
changes in learning theory. In 1989, an Australian researcher, Royce Sadler, proposed a
widely accepted model of formative assessment (Shepard, 2006; Wiliam, 2010).
Sadler (1989) described formative assessment as being "concerned with how judgments
about the quality of student responses (performances, pieces, or works) can be used to
shape and improve the student's competence by short-circuiting the randomness and
inefficiency of trial-and-error learning," (p. 120). He further asserted that for
improvement to occur, the learner has to (a) possess a concept of the standard (or goal,
or reference level) being aimed for, (b) compare the actual (or current) level of
performance with the standard, and (c) engage in appropriate action which leads to
some closure of the gap. During the 1990's and beyond, researchers continued to build
upon Sadler's model of formative assessment.
Also in 1989, in the U.K., the Assessment Reform Group was formed to focus on
the relationship between classroom assessment and teaching and learning (Shepard,
2006). In 1998, two members of the U.K. Assessment Reform Group, Paul Black and
Dylan Wiliam, conducted a review of research on the relationship between formative
assessment practice and learning, including studies conducted in the years following
Sadler's 1989 presentation of his model of formative assessment. Black and Wiliam
found that innovations that included strengthening practices which used assessment
formatively produced learning gains with effect sizes ranging between 0.4 and 0.7.
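For reference, the effect sizes reported here and elsewhere in this review are standardized mean differences. One common formulation, Cohen's d, is shown below for the reader's convenience; it is provided as general background rather than as a formula drawn from Black and Wiliam's review itself:

$$
d = \frac{\bar{X}_{\text{treatment}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
$$

On this scale, an effect size of 0.4 indicates that the average student in the treatment condition scored 0.4 pooled standard deviations above the mean of the control condition.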
While some have raised methodological questions about the Black and Wiliam
(1998) review (see for example Bennett, 2011), interest in formative assessment in the
U.S. has grown substantially since the publication of the review and a companion article
aimed at practitioners in the Phi Delta Kappan (CCSSO, 2008). This resulted in a
considerable increase in studies on the impact of the formative use of assessment on
student learning (e.g. Andrade, Du, & Wang, 2008; Chin, 2006; Hattie & Timperley,
2007; Lee & Gavine, 2003; McDonald & Boud, 2003; Rodriguez, 2004; Sebba et al., 2008;
Torrance & Pryor, 1998, 2001). However, the definition of formative assessment used in
these studies has been far from consistent. With the popularization of the term
following the Black and Wiliam (1998) review, vendors of interim or benchmark
assessments in the U.S. appropriated the term "formative assessment" to describe
their products, inappropriately claiming this evidence as support for their use
(Shepard, 2005a).
Concurrent with these developments in classroom use of assessment, and the
evolving definition of formative assessment, socio-cultural and social learning theories
continued to gain influence in the U.S. Lev Vygotsky's research on socio-cultural learning
theory was first published outside of Russia in 1962. Then in 1977, Albert Bandura
proposed "social learning theory," which asserted that: 1) people can learn through
observation; 2) internal mental states are an essential part of this process; and 3) the fact
that an individual learns something does not mean his/her behavior will change.
In 2000, the U.S. National Research Council published How People Learn: Brain,
Mind, Experience and School, a summary of the last 40 years of research on learning
(Bransford, Brown, & Cocking, 2000). This work emphasized constructivist and socio-cultural
perspectives over behaviorist perspectives. One year later, the National Research
Council published Knowing What Students Know: The Science and Design of Educational
Assessment (Pellegrino, Chudowsky, & Glaser, 2001). This publication mapped new
directions for assessment in the U.S., based on constructivist and socio-cultural learning
theories. It identified a larger and more specific role for classroom assessment,
representing a greater alignment between the assessment research community in the
U.S. and those outside the U.S. This publication also provided a common definition of
assessment as "a process by which educators use students' responses to specially
created or naturally occurring stimuli to draw inferences about the students' knowledge
and skills" (p. 8). In it, the authors specified that assessment included three
components: the aspect(s) of student learning that are to be assessed (cognition), the
tasks used to collect evidence about students' achievement (observation), and the
approach used to analyze and interpret the evidence resulting from the tasks
(interpretation). These three components apply to assessment used formatively.
Defining formative assessment.
In an attempt to remedy confusion related to the definition of formative
assessment in the K-12 classroom context, in 2006, the Council of Chief State School
Officers (CCSSO, 2008) brought together researchers and state officials to develop a
common definition of formative assessment. According to their jointly and unanimously
developed definition, "formative assessment is a process used by teachers and students during
instruction that provides feedback to adjust ongoing teaching and learning to improve
students' achievement of intended instructional outcomes" (p. 3). The CCSSO definition
is similar to definitions posited later (Black & Wiliam 2009; Nichols, Meyers, & Burling,
2009; Popham, 2008; Shepard, 2009; Third International Conference on Assessment for
Learning, 2009) in making a clear distinction between practices that formatively use
assessment results and the assessment instruments through which student learning
data are collected. Practices are distinct from the more general human activity because
a practice is activity that adds meaning and often involves the use of instruments or
tools (Wenger, 1998). The practice of formative assessment adds meaning when
educators use information collected about student learning to adjust their instruction
and/or students use it to adjust their learning tactics (Popham, 2008). The assessment
instruments used to generate information about students' learning are critical tools that
support formative assessment practices. The implication is that, "it is the use of an
instrument rather than the instrument that must be shown, with evidence, to warrant
the claim of formative assessment," (Shepard, 2009). In other words, determining if
formative assessment has occurred requires considering the practices in which students
and teachers engage while using assessment results in the formation of learning rather
than the instruments used in the process of collecting data (Brookhart, 2009; Nichols,
Meyers, & Burling, 2009; Shepard, 2009). A second common thread across the
definitions of formative assessment is that assessment results become feedback to
teaching and learning. "Information from assessment is fed back within the system and
actually used to improve the performance of the system in some way," (Wiliam &
Thompson, 2008). Finally, each definition affirms that formative assessment practices
actively involve students. "One of the critical aspects of formative assessment is that
both students and teachers participate in generating and using the assessment
information," (Brookhart, 2009, p. 1).
According to Black and Wiliam (1998), the two concepts, formative assessment
and feedback, overlap significantly. This is evident in the definition of formative
assessment, which describes it as a process that provides feedback (CCSSO, 2008).
According to Ramaprasad (1983), feedback is "information about the gap between the
actual level and the reference level of a system parameter which is used to alter the gap
in some way" (p. 4). Ramaprasad asserts that feedback depends on information about
the reference level (how good is good enough) and the actual level of the "system
parameter" both being present, and comparable. This roughly corresponds with the first
component of assessment identified in the National Research Council publication,
Knowing What Students Know: The Science and Design of Educational Assessment
(Pellegrino, Chudowsky, & Glaser, 2001). According to that publication, the first
component of assessment is the aspect of student learning to be measured. For
formative assessment to be a process that provides feedback, both the reference and
actual level of a system parameter must be present. Also according to Ramaprasad, in
the context of education, the "system parameter" could include disposition (affect),
effort, self-regulation, meta-cognitive process, or the actual response (performance). In
the case of formative assessment, the system parameter is student learning or
performance. Ramaprasad also established a condition for feedback to be formative; he
asserted that if information about the gap between the reference and actual values of
the system parameter is only stored or reported and not used to alter the gap then it is
not feedback. This reinforces the idea that assessment practice is only formative if the
information gained from assessment is used to improve teaching and learning. Black and
Wiliam (1998) built on Ramaprasad's definition in describing formative feedback as
including: 1) information about the actual level of some measurable attribute;
2) information about the reference level of that attribute; 3) a mechanism for
comparing the two levels and generating information about the gap; and 4) a
mechanism for making use of the information to alter the gap. Applied to the definition
of formative assessment, this suggests that formative assessment depends on
information about the actual and reference level of student performance, a mechanism
for comparing the two levels to generate information about any gaps, and a mechanism
for using that information to alter the gap.
Sadler (1989) extended Ramaprasad's definition of feedback by emphasizing the
importance of the student understanding the reference parameter: "The indispensable
conditions for improvement are that the student comes to hold a concept of quality
roughly similar to that held by the teacher . . ." (p. 121). He asserted that students
should understand the target or goal for their learning. My definition of formative
assessment incorporates currently used definitions of the term and efforts by several
authors to clarify what counts as formative feedback.
Defining formative assessment episodes.
Operationalizing the components of the definition of formative assessment
within an actual classroom context requires distinguishing between occurrences of
formative assessment and occurrences of other instructional practices also found in
classrooms. Thus, in collaboration with my DEMFAP colleagues, I built upon others
(Airasian, 1991; Bell & Cowie, 2001; Mavrommatis, 1997) who have described and used
the concept of an "assessment episode" in their analysis of classroom practice, to derive
a definition of a formative assessment episode. Mavrommatis (1997) identified four
"phases" of an assessment episode: evidence collection (using a variety of methods
from informal to formal), evidence interpretation (comparison of evidence to desired
standard), teacher response, and impact on students. These phases roughly parallel the
components of formative assessment as described by Bell and Cowie (2001):
(1) gathering information about learning, (2) analyzing/interpreting the gathered
information about learning, and (3) acting-on/using this information to improve student
learning. Neither Mavrommatis nor Bell and Cowie explicitly include a "reference
parameter" in their definition of the components of an assessment episode or their
explication of how analysis and interpretation occurs.
I built on several sources to define a formative assessment episode as including
the following elements: 1) Identifying (explicitly or implicitly) learning target(s) and
clarifying them with students, 2) Collecting data about the actual level of student learning in
reference to the learning target (through formal or informal methods), 3) Analyzing
student learning data (in reference to the learning target), 4) Interpreting student
learning data (in reference to teacher practice and/or student learning needs), and
5) the teacher and/or student(s) taking action, based on the interpretation, to improve
student learning and/or instructional practice. To be formative, assessment episodes
must include the last step: the teacher and/or student(s) using the information to
improve learning and teaching. Examples of instructional practices that include some of
these elements but are not formative assessment episodes include: the teacher
administering an end-of-unit test, scoring, grading, and returning the test to students
and using the score in calculating an end-of-term grade; or the teacher collecting data
about student learning (e.g., a homework assignment) but not relating the data to a
learning target.
Figure 2.1 depicts the elements of a formative assessment episode as a
continuous cycle of actions that can be considered from both the teacher and student
perspectives. The learning target or reference parameter is in the middle of the cycle
because it establishes the context for the other steps in the cycle. This historically
derived depiction of a formative assessment episode forms the first piece of the
theoretical framework for this study. I identify formative assessment practices in situ as
episodes that include the actions depicted in Figure 2.1.

[Figure 2.1 shows a cycle of Collecting Data, Analyzing Data, and Using Data, with Identifying Learning Target(s) at its center.]
Figure 2.1. The actions included in a Formative Assessment Episode.
Empirical Research on the Attributes of Formative Assessment Practice
Following publication of the Black and Wiliam review (1998) and subsequent
alignment of definitions of formative assessment between the U.S. and international
assessment research community, research on formative assessment continued to
expand, adding to the literature related to the attributes of formative assessment
practice most likely to result in improvements in student learning. In their 1998 review,
Black and Wiliam identified six different formative uses of student learning data for
which they found evidence of improvements in student learning outcomes. These
included: 1) the teacher sharing the criteria for evaluating learning with their students,
2) the teacher providing descriptive (as opposed to evaluative) feedback to students,
3) students engaging in self-assessment, 4) student-to-student peer assessment, 5) the
teacher using oral questioning to learn about and form learning, and 6) the teacher
using student learning data to adjust instructional practice.
A number of researchers (Brookhart, 2008; Cho & MacArthur, 2010; Hattie &
Timperley, 2007; Li, Yin, Ruiz-Primo & Morozov, 2011; McDonald & Boud, 2003;
Orsmond, Merry, & Callaghan, 2004; Pointe et al., 2009; Ross et al., 1999, 2002; Sebba
et al., 2008; Zundert et al., 2010) focused more narrowly on one or more formative use
of student learning data. This literature base contributes to my definition of formative
assessment by providing additional evidence regarding the attributes of different
formative uses of student learning data that may be important. The use of student
learning data with the most substantial research base is the teacher providing formative
feedback to students, with much of the research pre-dating the Black and Wiliam
(1998a) review. I provide a more extensive review of the literature related to critical
attributes of formative feedback below. Then I provide abbreviated summaries of
literature related to the critical attributes of student self-assessment, student-to-
student peer assessment, and formative questioning. Finally, I summarize some of the
more recent literature on teachers using assessment formatively to adjust their
instructional practice. In my model, I characterize the teacher sharing the criteria for
evaluating learning with their students is a step in the process of formative assessment,
rather than a use of student learning data. Several of the studies reviewed below
consider the teacher sharing learning criteria with students, or the student and the teacher
coming to a joint understanding of the learning criteria, as part of different uses.
Formative feedback.
Given that feedback has been described as one of the most powerful influences
enhancing achievement (Hattie, 2009), it is surprising how few recent meta-analyses
and empirical studies have confirmed a positive relationship between feedback and
student learning (Li, Yin, Ruiz-Primo & Morozov, 2011). Numerous studies have
demonstrated a negative or no relationship between feedback and student learning
(Bangert-Drowns, Kulik, Kulik & Morgan, 1991; Kluger & DeNisi, 1996; Li, Yin, Ruiz-Primo
& Morozov, 2011; Morozov, Yin, Li, and Ruiz-Primo, 2010; Wadell, 2004), pointing to the
importance of specifically considering the attributes of feedback, the context in which
the feedback was provided and, some argue, the characteristics of learners, any of
which could mediate the relationship between feedback and learning. More recent
(since 1991) meta-analyses, empirical studies, and research syntheses focused on the
relationship between feedback and student learning, provide some clues regarding the
attributes of feedback most likely to impact student learning.
Meta-analyses. Kluger and DeNisi conducted a seminal review and meta-analysis
of the effectiveness of feedback in improving learning in 1996 (Black & Wiliam, 1998).
Kluger and DeNisi's meta-analysis considered "action(s) taken by (an) external agent(s)
to provide information regarding some aspect (s) of task performance" (p. 255). The
authors estimated 607 effect sizes from studies of feedback meeting this definition,
with an average effect size of 0.41. Probably the most remarkable finding of this meta-
analysis was that over 38% of the effect sizes they calculated were negative. They
proposed a feedback intervention model to identify mediators of the effect of feedback
on student performance, with three key findings emerging from their meta-analysis:
1) feedback that provides information about changes from prior performance and/or
that supplies the correct solution, improves effects on student performance;
2) feedback that directs attention to the self (including both criticism and praise) and/or
that references others, reduces effects on student performance; and 3) whether the
feedback is negative or positive is not significant.
Bangert-Drowns, Kulik, Kulik and Morgan (1991) focused on the information
provided to students through written text or a computer, that was in response to a
formal event (like a test), and addressed achievement. This review suggested that
feedback in more typical classroom settings can be more effective than that provided in
computer-based settings, and feedback which tells learners whether or not their
answers were correct is the most effective.
Li et al., (2011) defined feedback as, "information provided by teachers (or
peers) to students in order to reduce the difference between their current
understanding and what is expected in their performance . . . the essence of [which] lies
in its formative role in shaping subsequent learning actions" (Morozov et al., 2011, p.
1). In their meta-analysis of the relationship between feedback and learning in
mathematics, Li et al., (2011) applied strict inclusion criteria to select empirical studies
of feedback between 1988 and 2010. This left them with only 18 papers that included a
total of 33 studies from which they calculated effect sizes. They found statistically
significant positive impacts on student learning when the feedback was about the
content of what students were learning (three studies); and even when the only
feedback provided was how well the students had performed, learning outcomes
increased in comparison to no feedback at all.
Taken together these meta-analyses provide some clues about the attributes of
formative feedback that can mediate the effects of the feedback on student learning.
Within the context of typical classroom instruction, feedback is effective in improving
performance if it provides knowledge of results (Bangert-Drowns et al., 1991; Li et al.,
2011), or if it provides information about changes from prior performance, or supplies
the correct solution (Kluger & DeNisi, 1996; Li et al., 2011). Feedback is less helpful if it
directs attention to the student him/herself (such as criticism or praise) or references
the performance of other students (Bangert-Drowns et al., 1991; Kluger & DeNisi, 1996).
Empirical studies. Additional empirical studies further clarify the attributes of
feedback that mediate its impact on learning. Three studies by Butler, one including
a co-author (Butler, 1987, 1988; Butler & Nisan, 1986), and two more recent studies
(Lipnevich & Smith, 2008; Pointe et al., 2009) considered the relationship between
normative feedback (or grades) and student learning in comparison to feedback that
was not normative. These studies were consistent with the meta-analyses (Bangert-
Drowns et al., 1991; Kluger & DeNisi, 1996) in finding that normative feedback was not
associated with improvements in performance. Butler (1987, 1988) described negative
effects of normative comments or normative grades. In general, grades and grades plus
comments had similar and generally undermining effects on performance. However,
high achievers who received grades maintained high interest and performed well on
additional tasks when they anticipated further grades. Pointe et al. (2009) found that
the students of teachers who reported that they provided more detailed feedback (not
grades, but rather explanations about where students made mistakes and how to
address them) had higher than expected performance on the Biology AP exam (when
PSAT scores were used to predict performance). Lipnevich and Smith (2008) found that
students who received descriptive feedback performed significantly better than those
who did not receive feedback; students who received a grade and praise, scored higher
subsequently than students who received a grade but no praise; and in the absence of
detailed feedback, students who received a grade did somewhat better than those who
did not.
Other empirical studies provide insight into the relationship between praise and
student performance. Dweck (2000) found that students who received what might seem
like the most ego-boosting forms of praise were at a clear disadvantage when it came to
later coping with additional challenges, and students whose positive feedback focused
on their effort or their strategy were better prepared to cope with later obstacles.
Follow-up studies with older children extended these findings with the negative effects
of person-oriented feedback showing up even before subsequent failure. Elwar and
Corno (1995) found that students who received "specific comments on errors and faulty
strategy tempered by suggestions on how to improve, plus at least one positive remark
on work done well" (p. 164) had statistically significantly higher scores on the post-tests
than students who received no feedback and that these positive effects on student
learning were present across student ability levels. Both the Dweck (2000) and Elwar
and Corno (1995) studies suggest that praise results in higher student performance only
as long as it is directed towards effort and not ability.
What emerges from the empirical literature (meta-analysis and individual
empirical studies) regarding the relationship between feedback and student learning is a
messy picture of which feedback attributes actually make a difference. However, some
findings are consistent. As long as the feedback is not normative (e.g. it does not
compare student performance to that of other students) and doesn't reference fixed
characteristics of the student, some feedback increases performance over no feedback,
even if the feedback is just "knowledge of results." Feedback directed at the individual
or fixed attributes of the individual has a negative effect on performance. This includes
praise. Finally, feedback that includes information about the process or strategy in
which students are engaging improves both performance and motivation.
Research syntheses. Research syntheses further explicate the attributes of
feedback that improve student learning (Black & Wiliam, 1998; Brookhart, 2008; Butler
& Winne, 1995; Hattie & Timperley, 2007, 2002; Nicol & Macfarlane-Dick, 2006; Shute;
2008). Several provide models of the relationship between feedback and learning and,
in some cases, build upon one another.
Black and Wiliam (1998) took the feedback intervention theory proposed by
Kluger and DeNisi (1996) as a starting point for their explanation of the relationship
between feedback and student learning. They expanded upon the studies that assert
"feedback interventions that cue individuals to direct attention to the self rather than
the task appear to be likely to have negative effects on performance," (p. 49),
summarizing with the idea that teacher feedback should encourage students to believe
that "success is due to internal, unstable, specific factors such as effort, rather than
stable general factors such as ability (internal) or whether one is positively regarded by
the teacher (external)" (p. 51).
Butler and Winne (1995) developed a theoretical model for the role of feedback
in self-regulation and then re-examined earlier feedback studies to explain their findings
about feedback's effects on learning. They asserted researchers must consider "how
feedback mediates performance through a series of recursively linked self-regulatory
cognitive engagements" (p. 255). The concept of self-regulation helps explain what
happens while students are engaged in a task, thus, monitoring mid-task is key to self-
regulation. Butler and Winne (1995) proposed functions for feedback in addressing both
student difficulties in implementing tactics needed to complete tasks and in monitoring
his/her engagement in the task. They asserted that external feedback is effective if it
improves students' knowledge or their processes. Table 2.2 includes their description of
the relationship between specific student difficulties and feedback's potential roles in
addressing each difficulty simplified from the original (Butler & Winne, 1995, p. 267).
This model provides specific suggestions regarding how teachers should aim their
feedback, while assuming feedback occurs during the learning process.
Table 2.2
The Role of Feedback in Student Self-Regulation (simplified from the original)
Student difficulty related to implementing strategies | Student difficulty related to monitoring | Feedback's potential role in developing knowledge | Feedback's potential role in improving self-regulation
Fails to recognize task conditions that cue strategy use | -- | Add to or correct knowledge about task | Analyze tasks and set goals
Misperceives task conditions and selects the wrong strategies | Sets inappropriate criteria for judging performance | Reshape knowledge about task and improve knowledge about strategies | Analyze tasks, set goals, select strategies, monitor
Doesn't recognize the relationship between task conditions and performance | -- | Reshape knowledge about task and improve knowledge about strategies | Analyze tasks, set goals, monitor
Experiences problems executing selected strategies | Is overly challenged by cognitive demands during monitoring | Tune knowledge about tasks (combining or chunking); tune and automate knowledge about strategies | Select and implement strategies, monitor
Provides too little effort to deploy strategies | Lacks motivation to monitor actions and results | Address action control strategies, motivational beliefs, or epistemological beliefs | Select and implement action control strategies, monitor
Nicol and Macfarlane-Dick (2006) built upon the work of Butler and Winne
(1995), positioning the research on feedback within a model of self-regulated learning,
although their analysis and resulting recommendations were much more general. They
conceptualized their model partially within a socio-cultural perspective, identifying
seven principles of feedback practice that facilitate learner self-regulation. According to
Nicol and Macfarlane-Dick, effective feedback: "helps clarify what good performance is
(providing goals, criteria, expected standards), facilitates the development of self-
assessment (reflection) in learning, delivers high quality information to students about
their learning, encourages teacher and peer dialogue around learning, encourages
positive motivational beliefs and self-esteem, provides opportunities to close the gap
between current and desired performance, and provides information to teachers that
can be used to help shape teaching," (p. 205 ).
Hattie and Timperley (2007) also applied a model of feedback that builds upon
those proposed by Kluger and DeNisi (1996) and Butler and Winne (1995), to help
explain the "circumstances under which feedback has the greatest impact" (p. 81). They
proposed that the effectiveness of feedback is determined by the level at which the
feedback helps to answer three questions: "Where am I going? (What are the goals?)
How am I going? (What progress is being made toward the goal?) and Where to next?
(What activities need to be undertaken to make better progress?)" (p. 86). By level, they
meant the following: task performance, process of understanding how to do a task,
regulatory or meta-cognitive process level, and/or the self or personal level. According
to Hattie and Timperley (2007), feedback focused at the task level is most effective if it
helps students reject erroneous hypotheses and provides cues as to directions for
searching and strategizing, and if it is not too complex. At this level, teachers can
provide feedback to individuals or groups of students. Task level feedback is more
effective if provided immediately (timing) after completion of the task, and can be
positive or negative as long as the negative feedback includes information about how to
correct mistakes. Feedback focused at the process level should relate to student
construction of meaning and strategies for error detection, should cue student
searching for information and use of effective strategies, and should assist students in
rejecting erroneous hypotheses. Process level feedback is more effective than task level
feedback. Students' capability to create internal feedback for self-assessment and their
willingness to invest effort into seeking and responding to feedback mediate feedback
focused at the self-regulation level. The student's degree of confidence in the
correctness of his/her response, how he/she attributes success or failure, and his/her
level of proficiency at seeking help all influence this level of feedback. Finally, feedback
focused at the self or personal level is not effective. Hattie and Timperley (2007) also
confirm several findings presented earlier, including that rewards should not be
considered feedback at all, praise (unconnected to the task) is not helpful at improving
performance or understanding (even though students like it), and feedback that
attributes performance to effort may increase performance.
A couple of more recent reviews by Brookhart (2008) and Shute (2008) are best
characterized as lists of characteristics or attributes of effective feedback. Brookhart's
(2008) review, published as a guide for practitioners on how to give effective feedback,
brings together much of the same research already cited to provide recommendations
to practitioners about how to provide effective feedback based on the attributes of the
feedback message content and the delivery strategy. Her dimensions of feedback
content include the following: focus (the four categories identified by Hattie &
Timperley, 2007), basis for comparison (criteria, other students, past performance),
function (descriptive or evaluative), valence (positive or negative), clarity to student
(vocabulary, amount), specificity, and tone (implications and word choice). For each
dimension, she made recommendations about what effective feedback messages include,
consistent with the literature cited already. Brookhart also made recommendations
regarding the delivery strategy for the feedback, including the following: determine the
timing of feedback based on the type of learning goal, but never delay feedback beyond
the point when it would make a difference to the learning; prioritize the points upon which
to provide feedback, choosing those related to major learning goals, and consider
learners' developmental levels; select the best mode for the message (oral, written, or
demonstration) based on the content of the feedback; use interactive (oral) feedback
when possible; provide feedback to individual students to indicate that the student's
learning is valued; and provide group/class feedback if most of the class missed the
same concept. Shute's (2008) review focuses more narrowly on task-level feedback. Like
Brookhart (2008), Shute summarizes recommendations regarding what makes feedback
effective. Her recommendations included:
"Focus feedback on the task, not the learner; Provide elaborated
feedback to enhance learning; Present elaborated feedback in
manageable units; Be specific and clear with feedback message; Keep
feedback as simple as possible, but no simpler (based on learning needs
and instructional constraints); Reduce uncertainty between performance
and goals; Give unbiased, objective feedback, written or via computer;
Promote a learning goal orientation via feedback; and Provide feedback
after learners have attempted a solution," (p. 177-178).
Student self-assessment.
The literature on student self-assessment and the attributes of the practice that
make it most effective is much less substantial than that related to feedback. I report
here on empirical studies that point to possibly important attributes of self-assessment,
and a summary review by one of the authors.
In two different studies, Ross et al. considered the impact on student learning of
training students how to self-assess in writing (1999) and in mathematics (2002). They
found that training increased student accuracy in self-assessment and improved student
learning outcomes, although the improvements were greater in math than in writing.
They concluded that subject matter makes a difference with regard to the effectiveness of
self-assessment. In their 1999 study, they also identified joint development and use of
criteria between the teacher and students as critical to the success of self-assessment,
and asserted that student self-assessment could take the place of other assessment
data. Their 2002 study on training on self-assessment in math classrooms, building on
the 1999 study, included the following processes: (a) involve students in defining
evaluation criteria, (b) teach students how to apply the criteria, (c) give students
feedback on their self-evaluations, and (d) help students use evaluation data to develop
action plans. These four elements are potentially critical attributes of student self-
assessment.
McDonald and Boud (2003) provided Caribbean teachers with 12 training modules
on how to introduce self-assessment practices to their students and investigated the
impact on their students' learning. They found that students who received self-
assessment training outperformed their peers in every content area. While the authors
did not provide detailed information about the attributes of self-assessment practice
supported by their training, their general description of the content of their training
sessions provides clues regarding attributes of the practice on which they focused. They
reported their training included the following content: "teachers constructing,
validating, applying and evaluating criteria to apply to students' work and . . . students
making reasoned choices, to assess responses to questions by applying given criteria, to
write a variety of question types, to allocate marks to such questions, to evaluate their
work and to make and use self-assessment activities of their own," (p. 213). Again, the
characteristics of the training that resulted in significant student learning improvements
point to possible important attributes of self-assessment practice.
In a 2008 review of research on self-assessment and peer-assessment in
secondary schools, Sebba et al. reported evidence of positive effects on student
attainment (9 out of 15 studies), self-esteem (7 out of 9 studies) and engagement with
learning (17 out of 20 studies). Across these studies, they identified the following
conditions that affect the impact of self- or peer assessment: teacher commitment to
learners having control over the process, the teacher discussing learning to develop
effective student feedback, and a move from a dependent to an interdependent
relationship between teacher and students that enables teachers to adjust their
teaching in response to student feedback. Although they found no relationship between
students owning the process and positive outcomes, they recommended that teachers
involve students in 'co-designing' the criteria for evaluation. In addition, seven of the
studies they reviewed (including those reported on above) identified student training in
self-assessment as important to its success.
Andrade, Du, and Wang (2008) investigated the impact of self-assessment on 3rd
and 4th grade students' writing. They used a matched comparison design to identify the
impact of providing students model papers to generate criteria and engaging students in
using rubrics as part of their self-assessment practice. They found a statistically
significant, positive correlation between both strategies and students' essay scores
controlling for previous achievement.
In a review of the conditions leading to success with self-assessment, Andrade
and Valtcheva (2009) defined self-assessment as "a process of formative assessment
during which students reflect on the quality of their work, judge the degree to which it
reflects explicitly stated goals or criteria, and revise accordingly," (p. 13). Building on
Ross et al. (2002), they provided a summary of attributes of self-assessment associated
with positive gains in student learning that summarize many of the studies presented
here. These attributes include the following: "define the criteria by which students
assess their work, teach students how to apply the criteria, give students feedback on
their self-assessments, give students help in using self-assessment data to improve
performance, provide sufficient time for revision after self-assessment, and do not turn
self-assessment into self-evaluation by counting it toward a grade" (p. 17).
Student peer-assessment.
The literature focused on the most effective attributes of peer assessment is less
well-developed than research on other formative uses of student learning data. Over
time, many researchers focused on peer grading and the alignment of peer-provided
grades with teacher-provided grades, a summative rather than a formative use of
student learning data. This definition of peer assessment falls short of the
conceptualization of peer assessment as students providing formative feedback to one
another as proposed by Black and Wiliam (1998). According to Van Zundert, Sluijsmans,
and Merrienboer (2010), research on peer assessment practices "has hardly evolved
beyond a summative and quantitative view of peer assessment with a strong reliance on
scoring and grading," (p. 266).
In their review of research on peer assessment in higher education, Gielen,
Dochy, Onghena, Struyven, and Smeets (2011) asserted that determining the quality of
peer assessment practice depends upon educators' purpose in engaging learners in the
practice. They identified five "goals" for peer assessment, only three of which could be
considered formative uses of student learning data. The three included the following:
using peer assessment as a learning tool for those receiving the feedback, using peer
assessment to develop student self-assessment skills, and using peer assessment to
actively engage students in their learning. While the authors proposed how to measure
quality for each of these three goals, they offered that only the first has been the focus
of empirical research. They also asserted that when the goal of peer assessment is to
use it as a learning tool, measuring quality depends on determining the effects on
student learning within a particular content area, and that quality criteria thus vary by
content area (e.g., just as learning goals in mathematics differ from those in writing). Thus, studies
conducted in one content area on the quality criteria for peer assessment may not apply
to another content area.
Currently, most empirical studies focus on peer assessment in post-secondary
educational contexts, rather than K-12, and almost exclusively on peer assessment in
the context of writing (Cho & MacArthur, 2010; Orsmond, Merry, & Callaghan, 2004;
Topping, 1998, 2010; Zundert et al., 2010). Despite these limitations, I report on a few
empirical studies that have provided evidence regarding critical attributes of peer
assessment used as a learning tool that impact student performance and may have
application within the context of K-12 math instruction.
In their examination of the impact of peer assessment on student writing
performance in post-secondary contexts, Orsmond, Merry, and Callaghan (2004) and
Cho and MacArthur (2010) found that feedback from multiple peers improved the
quality of subsequent writing drafts because this resulted in students making more
complex revisions to their writing (e.g. revisions that clarified meaning at the sentence
or paragraph level). Orsmond et al., (2004) compared the content of peer feedback in
these instances across four categories: directive, non-directive, praise, and criticism.
They found that students making complex revisions to their writing subsequent to peer
assessment was associated with the content of the peer feedback being non-directive
(feedback that included nonspecific observations that could apply to any paper).
In a special issue of the journal Learning and Instruction that focused on peer
feedback, Topping (2010) summarized the lessons learned from six empirical studies
focused on critical attributes of peer feedback leading to improvements in student
learning outcomes. Researchers conducted four of the six in post-secondary educational
contexts. Topping asserted the following:
Peer feedback content that is non-directive, and/or includes justification (especially
with less skilled students), leads to better student performance outcomes;
Peer assessment leading to more elaborate feedback is only effective if it is non-
directive;
Peer assessment leading to grades is not effective;
Student training in peer assessment that includes modeling and observation results
in more effective peer feedback;
The attributes of effective peer feedback depend upon the degree to which students
have previously participated in peer assessment -- more experience may lead to
increased acceptance and use of peer feedback whether or not it includes
justification; and
Finally, if the difference in competence between the student providing and the
student receiving peer feedback is large, the student receiving may reject more
elaborate peer feedback; if the difference is small, students may consider elaborate
feedback as sharing ideas.
One of the studies summarized by Topping (2010) bears additional mention
because it focused on a K-12 context. Gielen et al., (2010) used a quasi-experimental
design to investigate the impact on learning of various characteristics of peer feedback
with seventh grade Dutch students in writing. They paired students with similar ability
to provide feedback over various writing tasks. The study tested the use of a specific
feedback form completed by the student receiving the peer feedback after the peer
feedback event as well as evaluating the impact of different qualities in the feedback
message. They found that "clear formulation, and presence of suggestions, did not have
a significant impact on performance improvement . . . and the only characteristic with a
significant impact on performance, namely justification, is also among the most difficult
to teach." (p. 312). They concluded that it is more important for the student providing
the peer feedback to justify their feedback than to be accurate in their comments, and
that training students to explain their feedback may have as great an impact on
performance as ensuring student comments are accurate.
The current research base on peer assessment is somewhat limited with regard
to what it contributes to this study. Researchers have focused on higher education
settings and writing instruction while my study focused on a K-12 mathematics context.
In spite of these limitations, I consider the following as potentially important attributes
of peer assessment that emerge from this research: how students are paired for peer
assessment matters; more experience with peer assessment may make students more
likely to use peer feedback; providing students with training in how to conduct peer
assessment can improve the practice; students should not just provide suggestions
about the other students' work but should explain their suggestions; and feedback that
is non-directive (could be applied to many different similar tasks) may be more
effective.
Formative questioning.
Within the context of formative assessment, questioning is both a data collection
method and a type of data use. Here, however, my focus is on research related to the
attributes of questioning as a formative data use. According to Ruiz-Primo (2011), while
there are a number of studies focused on classroom discourse and questioning, "few
focus on how the quality of these interactions influences students' learning," (p. 23).
She also argues that the nature of the practice and the cost of researching it may limit
generalizability of empirical studies. Research studies that examine the quality of
interactions necessarily involve video recording individual teacher to student and
student to student interactions within classrooms. As a result, many have relatively
small sample sizes involving only a few teachers/classrooms. Several authors (Chin,
2006, 2007; Mortimer & Scott, 2003; Ruiz-Primo & Furtak, 2004, 2006; Ruiz-Primo,
2011) have considered formative use as part of a questioning cycle with small numbers
of teachers. Although the results of these studies may not be broadly generalizable, they
help clarify potentially important attributes of formative questioning.
Chin (2006, 2007) investigated question-based discourse in the context of
science instruction. In her 2007 study, she made a distinction between traditional uses
of questioning used to evaluate what students know and "constructivist-based"
questioning used "to elicit what students think (such as their explanations and
predictions, especially if these are different from what scientists think), encourage them
to elaborate on their previous answers and ideas, and to help students construct
conceptual knowledge," (2007, p. 818). According to Chin (2007) more traditional
questioning follows a sequence of actions described by others as IRF, the teacher
initiates by asking a question (I), the student responds (R), and the teacher evaluates (E)
the correctness of the student response. Another variation of traditional questioning is
an IRF sequence if the follow-up by the teacher (F) is not explicitly evaluation. This
traditional approach to questioning doesn't include the teacher making adjustments
along the way. The constructivist or inquiry type of questioning follows a different
sequence that Chin followed others in describing as IRFRF. In this sequence, student
responses are followed by teacher feedback or action that elicits and additional student
response, then more feedback etc. This approach does include the teacher making
adjustments. Chin's constructivist or inquiry type of questioning more closely describes
questioning as a formative use of data. Chin investigated teacher questioning patterns in
six seventh grade science classrooms for six lessons each to develop a typology of
teacher questioning strategies that "stimulate productive thinking" (p. 823). The
attributes she identified of this type of questioning suggest potentially critical attributes
of formative questioning. They include the following.
". .teachers elicited responses from different students that progressively
added more information to existing ones contributing to a growing
framework of ideas; asked questions in a progressive way that enabled
students to gradually ascend to higher levels of knowledge and
understanding; reiterated students' responses following their questions
not only to affirming them, but also to making the ideas available to the
full class; used students' responses as a platform for further inquiry to
lead students towards the targeted learning goals; and bridged the
cognitive gap between the questions they asked and the knowledge base
of their students", (p. 823).
Ruiz-Primo and Furtak (2004, 2007) studied assessment conversations occurring
in three science classrooms. They described assessment conversations as an example of
informal formative assessment that involved a sequence similar to that described by
Chin (2006, 2007). Their sequence (ESRU) included: the teacher eliciting information by
asking a question (E), the student responding (S), the teacher recognizing the student
response (R), and the teacher using the information to improve student learning (U).
They clarified how their ESRU sequence was different from the traditional IRF
questioning sequence, in part based on defining the "Use" as different from the
"Feedback" in the IRF sequence. They defined "use" as including the teacher engaging in
one or more of the following: "provide students with specific information on actions
they may take to reach learning goals, ask another question that challenges or redirects
the students' thinking, model communication, promote the exploration and contrast of
students' ideas, make connections between new ideas and familiar ones, recognize a
student's contribution with respect to the topic under discussion, or increase the
difficulty of the task at hand," (p. 61). Ruiz-Primo and Furtak developed a coding
structure to characterize different strategies teachers used in each of the steps in this
cycle and identified an approach to defining when the ESRU cycle was complete (i.e. the
teachers used the informally collected student learning data). They found that their
coding approach could differentiate the quality of the assessment conversations
occurring in the three science classrooms, that they could identify incomplete ESRU
cycles, and that the teacher with more completed ESRU cycles had students with higher
performance. Their coding approach and their findings related to impact on student
learning outcomes suggest some potentially important characteristics of formative
questioning. First, their efforts confirm the importance of completing the cycle, that is,
of teacher questioning somehow including use of the information gained. Second, they
identified some key attributes related to the different steps in this cycle, including the
following: completed cycles involved consecutive interactions with the same students;
the types of questions asked to elicit information from students varied and pertained
specifically to the science content; teachers used repeating and re-voicing most
frequently to recognize student responses; teachers used a strategy of comparing and
contrasting student responses less frequently, but the most frequent user of this
strategy was the teacher with the best student outcomes; and two of the teachers
provided helpful feedback, again with the most frequent user of this strategy being the
teacher with the best outcomes. Some of the strategies other studies have pointed to as
likely to improve student outcomes were not present in the classrooms studied by
Ruiz-Primo and Furtak.
Black and Wiliam joined with three additional researchers to conduct a follow-up
to their 1998 review in six middle-level schools (students ages 11-15) in math and
science classrooms, with results published as a book for practitioners in 2003. Teachers'
formative use of questioning was one of the formative practices upon which this work
focused. Across all of the formative practices included in this study, the researchers
found evidence of an impact on student learning outcomes, although the specific impact
of formative questioning was not isolated. The researchers introduced teachers to the
following features of questioning used formatively: provide students "wait time" or a
gap between asking a question and prompting for a response, use open-ended
questions (those without one correct response), plan in advance questions that have the
potential to promote thinking and discourse. The authors worked with teachers to plan
what questions they would use and anticipate student responses as a means of
identifying meaningful questions. The authors reported a number of changes in the
questioning practices within the classrooms in which they worked including teachers
doing the following: using incorrect answers (in homework or class work) as discussion
points; asking questions that encourage students to explore answers together such as
those that challenge misconceptions, explore ambiguity, or create conflict; expecting all
students to be willing to answer any question even if the answer is they don't know;
establishing a climate where students are comfortable giving incorrect answers because
they know incorrect answers are useful; and using information gained from questioning
to plan the next steps in learning.
Across the studies reviewed here, several potentially important attributes of
formative questioning emerge. First, questioning becomes formative when it includes a
step beyond the teacher eliciting or initiating by asking a question, the student
responding, and the teacher evaluating the correctness of the response. What I am
calling formative questioning and other authors have called constructivist questioning or
assessment conversations includes additional interaction(s) that involve the teacher and
or student(s) somehow using the information. When this additional step is included, empirical evidence from studies with small sample sizes indicates a positive impact on student learning outcomes (Chin, 2006, 2007; Ruiz-Primo & Furtak, 2004, 2008). Second, investigation of the details of the interactions between teachers and students across these studies suggests several potentially important attributes of formative questioning,
including: elicit responses from different students to add more information; consider
coming back to the same student for multiple interactions; ask progressive questions
that build conceptual understanding; reiterate (repeat or re-voice) student responses
not only to affirm them, but to make them available to other students; compare and contrast multiple student responses; use incorrect responses as discussion points;
expect all students to be ready to answer any question even if the answer is I don't
know; provide instructional scaffolding, as part of questioning, to bridge gaps in student
understanding; provide specific actionable feedback regarding student responses;
establish a climate where students believe that incorrect answers are useful; and use
information gained to plan next steps.
Adjusting instruction.
In recent years, several research teams have studied interventions intended to
facilitate teacher formative assessment practice that included some focus on teacher
instructional adjustments based on student learning data (Black et al., 2005; Frohbieter
et al., 2011; Herman et al., 2011; Shavelson et al., 2008), with mixed results.
In the U.S., Shavelson et al. (2008) developed assessment tasks that teachers
could use formatively as part of a commonly used middle level science program. They
embedded the tasks at critical points in the unit "where an important sub-goal should
have been reached before students go on to the next lesson" (p. 301). They provided
teachers training and guiding documents regarding how to use the tasks to "check
student understanding at key points during instruction and reflect on the next steps
needed to move students forward in their learning," (Furtak et al., 2008, p. 363). Then
they evaluated the impact on teaching and learning in a small randomized trial with 12 teacher participants (6 experimental and 6 control) identified as experts in the
program. They found that most of the teachers in the experimental group did not
implement a number of the suggested instructional responses to the data generated by the embedded assessments (Furtak et al., 2008) and that being in the experimental group did not have a significant influence on students' achievement compared to the control group that used the program without the embedded assessments (Yi et al., 2008). The authors concluded that embedding assessment tasks designed for formative use, even with some training and guidance, was not enough for teachers to consistently
use these data to adjust their instructional practice. They also raised questions
regarding teacher skill and ability to use assessment resources available to them,
suggesting that additional teacher capacity building may be necessary to realize the
promise of formative assessment.
Herman, Osmundson, & Silver (2010) and Herman et al. (2011) focused on the
relationship between teachers' content-pedagogical knowledge, their formative assessment practice, and student learning outcomes as part of a larger study of the
effects of implementing curriculum based assessments in a hands-on science program in
39 upper elementary classrooms. To measure teachers' content-pedagogical knowledge,
these researchers asked teachers to self-report and separately administered
performance tasks to the teachers which required them to analyze and interpret
student responses to tasks from the science program. They used teachers' weekly
practice logs to measure their use of data to adjust instruction. Herman et al. (2011)
found a significant positive relationship between teachers' formative assessment use
and students' performance, and a marginally positive relationship between teachers'
content-pedagogical knowledge and their assessment use. They concluded that while it seems evident that content-pedagogical knowledge is necessary for teachers to make effective use of assessment to form learning, this study did little to clarify the
relationship.
Frohbieter et al. (2011) examined "the kinds of information teachers gleaned
from assessment and how they used that information in their teaching" (p. 2) when
using three different assessment systems developed for teachers to use formatively in
the context of middle level math instruction. The researchers used two-part interviews
with teachers to determine what type of information they got from assessment and
how they used it. Then they developed a scheme for categorizing both.
Frohbieter et al. (2011) first characterized the level of "nuance" of the
information teachers took away from assessing with a formative purpose. This included
the following: least nuanced, a judgment of whether or not students got it; moderately
nuanced, "varying degrees of mastery, relative mastery of topics, types of mastery (e.g.,
procedural vs. conceptual), and common errors" (p. 17); and highly nuanced, "detailed
insights into the thinking behind their students' mathematical performance . . . that can
help them assist their students in moving forward in their understanding of
mathematics," (p. 22). The type of information teachers gleaned did not seem to relate
to the type of information that the instruments they used could provide. In other words,
many teachers chose to take away from their results a less nuanced interpretation than
was possible.
They also characterized teacher use of assessment results by level of
responsiveness, including the following: 1) least, which they characterized as the
teacher deciding to move on regardless of assessment results; 2) moderate, which included reviewing content again, simplifying examples and/or slowing down the pacing
of the lessons, simplifying the original lesson and re-teaching it to the entire class, or
engaging learners in additional practice; and 3) high, which included targeting particular
areas of student difficulty that were common, grouping students for targeted support,
and using self- and/or peer-assessments.
Frohbieter et al. (2011) established the ideal as teachers gleaning highly nuanced
information from assessment and using it in highly responsive ways. According to these
authors, "it was rare to encounter instances in which practices embodied all the ideal
characteristics of formative assessment (i.e., assessment with instructional
improvement as its purpose), which occurred frequently and were related to content
currently being taught, as well as were integrated thoughtfully with instruction." (p. 27).
They looked for the integration of assessment and instruction and reported that,
"Teachers for whom assessment and instruction were closely integrated reported
selecting or designing assessment in order to learn very specific information about their
students' understanding that would be useful for planning instruction." They also
characterized the "cycle of use" as when, in relationship to collecting data, teachers
made instructional adjustments. They identified adjustments made immediately or in
planning for the next day or within the week as "true" formative assessment. The
characterization of instructional response described by Frohbieter et al. (2011)
contributes to my characterization of different types of teacher instructional
adjustments based on student learning data, and the scale of those adjustments.
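One way to picture these two dimensions together is as a simple lookup that places a single reported instance of assessment use on a nuance-by-responsiveness grid and notes the timing of the adjustment. The minimal Python sketch below is a hypothetical illustration: the category labels, the seven-day threshold for "within the week," and the example record are my own assumptions, not Frohbieter et al.'s coding instrument.

# A hypothetical sketch of the two dimensions described by Frohbieter et al.
# (2011): the nuance of the information a teacher gleans from an assessment
# and the responsiveness of the instructional follow-up, plus the "cycle of
# use" (how soon after data collection the adjustment happens).

NUANCE_LEVELS = ("least", "moderate", "high")          # got it / mastery & errors / student thinking
RESPONSIVENESS_LEVELS = ("least", "moderate", "high")  # move on / reteach or slow down / targeted support

def characterize_use(nuance, responsiveness, days_until_adjustment):
    """Summarize one reported instance of assessment use on both dimensions."""
    if nuance not in NUANCE_LEVELS or responsiveness not in RESPONSIVENESS_LEVELS:
        raise ValueError("unknown category label")
    # Adjustments made immediately, the next day, or within the week were
    # treated as part of a "true" formative cycle of use.
    within_cycle = days_until_adjustment <= 7
    ideal = nuance == "high" and responsiveness == "high" and within_cycle
    return {
        "nuance": nuance,
        "responsiveness": responsiveness,
        "within_formative_cycle": within_cycle,
        "approaches_ideal": ideal,
    }

if __name__ == "__main__":
    # Example: a common error was noticed (moderately nuanced) and students were
    # regrouped for targeted support the next day (highly responsive).
    print(characterize_use("moderate", "high", days_until_adjustment=1))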
In the U.K., Black, Wiliam, and three other colleagues took a different approach.
Their King's-Medway-Oxfordshire project (KMOFAP) was an 18 month study with 24
middle level science and math teachers focused on developing teacher formative use of
assessment. These researchers introduced teachers to research about formative assessment practices, engaged them in action planning related to trying out the formative assessment practices about which they had learned, provided feedback
related to classroom observations, and facilitated teacher learning community dialogue
about their practice change efforts (Harris, Irving & Peterson, 2008). Teachers chose
their own focus for their practice change, including one or more of the following:
questioning, involving students in self- and peer-assessment, and marking student work
(which included changes to grading and providing feedback). These authors reported
changes in teacher practices, changes in the environments of teachers' classrooms, and
changes in teachers' perception of their role. Although the evidence was inconsistent
across participating teachers, they also found evidence of improvements in student
learning (Black et al., 2003).
What do these different approaches to considering instructional adjustments as
part of formative assessment practice say about the most critical characteristics of this
type of data use? They provide less specific guidance, but suggest the following:
some additional formative consideration of student assessment results, regardless of the specific instructional response, does seem to improve student learning outcomes; teacher use of high-quality assessment resources designed for formative use does not necessarily influence instruction; teachers may choose to glean less information than is available, regardless of the quality of the instrument used in data collection; while
content-pedagogical knowledge should matter, it is unclear how; and the level of
responsiveness of teacher actions in relationship to student learning data is discernible
and seems to relate to the nuance of the information they gleaned from student
assessment results. Next, I consider socio-cultural theories of learning to provide more
guidance regarding what counts as formative assessment and what might be most
critical for formative assessment practice to be effective.
A Socio-Cultural Interpretation of the Role of Formative Assessment in Learning
Socio-cultural theories of learning evolved from the ground-breaking work of Lev
Vygotsky (1978) who made a number of contributions which clarify the role of formative
assessment in learning. His contributions include: 1) the conceptualization of learning as
situated within the social plane, 2) the central role of tools and symbols as cultural
mediators of learning, and 3) the concept of a zone of proximal development (Oxenford-
O'Brian, Nocon, & Sands, 2010). Drawing upon the many authors who have expanded
Vygotsky's work to apply a social-cultural conceptualization of learning to practice that
occurs within school-based contexts (Cole, 1990, 1999; Cole & Engestrom, 2006;
Gallimore & Tharp, 1990; Giest & Lompscher, 2003; Gipps, 1999, 2002; Lave & Wenger,
1991; Moss, 2008; Shepard, 2000; Torrance & Pryor, 1998; Wells, 1999, 2002; Wertsch,
1990), I describe these themes, their relationship to learning within school-based
contexts, and their application to explaining the role of formative assessment practice in
teaching and learning and how the social context of classrooms influences formative
assessment practice.
Learning as social interaction.
Vygotsky (1978) described learning and development as fundamentally social,
"Learning awakens a variety of internal developmental processes that are able to
operate only (emphasis added) when the child is interacting with people in his
environment and in cooperation with his peers," (p. 90). This places socio-cultural
theories of learning somewhere between behaviorist learning theories (with learning
stimulated by sources external to the learner and which focuses on observable behavior
change) and constructivist theories (with learners individually constructing meaning
from experience and which focuses on underlying mental processes). Socio-cultural
theories build upon constructivist notions of the learner as the active agent of his or her
own development, but also consider how a child develops through culturally and socially
mediated participation in meaningful, practical activity (Rueda & Moll, 1994).
Within a socio-cultural theoretical framework, development occurs through
learning and learning occurs first as social action and then as internal cognitive
development. Or as Vygotsky (1978) described it, "Any function in the child's cultural
development appears twice . . . first it appears between people as an interpsychological
category, and then within the child as an intrapsychological category," (p. 57). Vygotsky
called this internal reconstruction of external experience "internalization." One critical
implication of a conceptualization of learning as internalization is that instruction should
lead development, rather than respond to it (as is prescribed within developmental
constructivist paradigms). Another implication is that the unit of analysis for
investigating learning and development must go beyond the individual. It must focus on
the teacher to student and student to student inter-psychological functioning within the
contexts where learning is occurring (Wertsch, 1990), a concept which various authors
have extended and described as social practice (Lave & Wenger, 1991) and activity
(Cole, 1999; Cole & Engestrom, 2006; Wells, 1999, 2002). I describe this as the social
context of the classroom, and focus analysis on learning activity as it occurs within this
social context.
Lave and Wenger (1991) developed the concept of legitimate peripheral
participation, as "a descriptor of engagement in social practice that entails learning as
an integral constituent" (Lave and Wenger, 1991, p. 35). Legitimate participation refers
to being a part of a practice, yet not at full participation, with "peripherality" suggesting
"an opening, a way of gaining access to sources for understanding through growing
involvement," (p. 37). Thus, according to Lave and Wenger, it is legitimate peripheral
participation in social practice that facilitates learning. Lave and Wenger (1991)
intentionally did not apply legitimate peripheral participation to the context of school-
based learning. However, the analytical perspective of legitimate peripheral
participation helped to shape my theoretical framework, particularly with regard to
considering social practices that include assessment and students' roles in those social
practices and defining the role of formative assessment in learning. Within school-based
contexts, this perspective makes it important to ask to what degree students participate
in social processes that entail learning as a critical constituent, and whether they are legitimate peripheral participants in those social processes. In other words, within classrooms, are
students participating substantively and moving towards more full participation as
learners, or not? Legitimate peripheral participation occurs only when students engage
(peripherally) in learning activity, not when the teacher is doing something to a passive
student audience. Thus, my framework includes the roles that students play in learning
activity, as part of the social context of the classroom.
Two related concepts, initially proposed by Vygotsky, further explicate how the
social practice of formative assessment within classrooms could increase student
learning. The first is how tools (signs and symbols) used during social practice culturally
mediate internalization. The second is the role of more knowledgeable others in the
internalization of social practice, explained through the zone of proximal development.
Cultural mediation.
Vygotsky (1978) asserted that the relationship between the individual learner
and higher mental processes (learning) is always "mediated" by culturally and
historically developed artifacts or tools, ". . each human being's capacities. . are
crucially dependent on the practices and artifacts, developed over time within particular
cultures, that are appropriated in the course of goal-oriented joint activity," (Wells,
1999, p. 135). Mediating artifacts can be physical tools or symbolic tools, such as sign
systems in language or mathematical symbols (Vygotsky, 1978). The tools that mediate
any practice are products of the historical context and the culture in which they are
created. "Human beings live in an environment transformed by the artifacts of prior
generations . . . the basic function of which is to coordinate human beings with the physical world and each other . . . in that they mediate interaction with the world, cultural artifacts can be considered tools," (Cole, 1990, p. 83). The cultural and historical
context mediates learning through the use of tools during social practice (Wells, 1999).
Cultural tools are more than aids to learning (Vygotsky, 1978). As artifacts of the
culture that created them, the tools used actually shape the learning. According to
Wertsch (1990), "human activity (on both the inter-psychological and intra-
psychological plane) can be understood only if we take into consideration the 'technical
tools' and 'psychological tools' or 'signs' that mediate this activity," (p. 114). Or as Cole
(1990) describes it, "cultural mediation fundamentally changes the structure of human
psychological functions," (p. 83).
Formative assessment practice is also a culturally mediating tool. Torrance and
Pryor (1998) describe formative assessment as the teacher and student jointly
appropriating information about the student's learning. "The teacher must appropriate
the child's response in order to realize the social construction of the lesson and in order
to scaffold the social construction of cognition. The children must appropriate the
cultural tools being presented to them into their developing understanding of the
subject matter at hand and the social processes of schooling" (p. 20).
Defining formative assessment as social process means that cultural tools could
also mediate formative assessment. The actions that are part of any formative
assessment episode make use of a variety of cultural tools. For example, teachers draw
learning targets from state or district standards or curriculum documents, curricular
resources, and/or the prior experience of teachers. Data collection can use a task, an
entire assignment, a test, or an oral question, all of which are culturally mediating tools.
Even the approach used by a teacher to analyze and/or summarize student-learning
data is a culturally mediating tool (a scoring method). Understanding formative
assessment as social practice and how the social context of the classroom shapes
formative assessment practice includes interrogating the tools used as part of the
practice.
Feedback provided by the teacher to students, based on her/his analysis and interpretation of the students' learning data, also mediates learning. In his research,
Vygotsky (1978) focused on how speech, as it occurs in the context of schooling, informs
the development of higher mental functions (learning). "He was concerned with how
the forms of discourse encountered in the social institution of formal schooling provide
the underlying framework within which concept development occurs," (Wertsch, 1990,
p. 116). Speech does more than mediate social interaction; it also mediates individual
remembering, thinking and reasoning when it becomes inner speech (Wells, 1999).
Wells (1999) built upon Vygotsky's investigation of the role of speech in schools arguing
that learning in schools can be thought of as a semiotic apprenticeship, that is, an
apprenticeship in the ways in which people make meaning through communication
using signs and symbols. For Wells, the semiotic aspect of the apprenticeship involves
learners making meaning through speech both as a mediator of action and as a
mediator of understanding or reflection within disciplinary learning. It is within this
context that feedback provided by the teacher to individual or groups of students, as a response to student learning data, is a psychological tool, a semiotic mediator of
learning. Whether the teacher provides feedback orally, visually, in writing, or in some
combination of these formats, feedback can mediate learner reflection. That is,
depending on the content of the feedback, it can provide external versions of the learners' "inner speech" as they reflect on their own understanding and the progress
of their learning. Depending on the content and context within which teachers provide
feedback, it may also mediate learner action. For example, feedback provided to a small
group of learners as they engage in a group activity may cause them to change their
approach. Within this study, I consider how teachers use feedback about student
learning within their instructional processes.
Zone of proximal development.
According to Wells (1999), "It is the zone of proximal development that has been
Vygotsky's most important legacy to education," (p. 313). Vygotsky (1978) defined the
zone of proximal development, as "the distance between the actual developmental level
as determined by independent problem solving and the level of potential development
as determined through problem solving under adult guidance or in collaboration with
more capable peers," (p. 86). Put simply, a learner's zone of proximal development is
the knowledge, skills and understanding that she/he could not demonstrate on her/his
own, but could with help. This concept has important implications for instructional
practice. In contrast to constructivists like Piaget, who suggested learning experiences
must be appropriate to the child's current level of development, Vygotsky asserted that
the most beneficial learning experiences are those that are just beyond a child's current developmental level: those that are in their zone of proximal development.
Furthermore, Vygotsky conceptualized the zone of proximal development as shaping
both assessment and instructional practices (Wells, 1999).
As described above, a formative assessment episode includes the following
actions: identifying a learning target(s), collecting information about student learning,
analyzing and interpreting that information, and taking action based on the
interpretation. Among the many implications of the concept of a zone of proximal
development for assessment practice, perhaps the most obvious is that assessment
should involve collecting information regarding the knowledge/skills/understanding that is beyond what a student can demonstrate on their own but that they could demonstrate with
assistance (Shepard, 2000; Gipps, 1999, 2002). This is not consistent with constructivist
theories of learning which would suggest assessment should only include data collected
about the learner's current level of development.
A second important implication of this learning theory is that to identify the
upper bound of learners' zone of proximal development, information about learning
should be collected during learning activity while learners have access to assistance
from the teacher and/or more knowledgeable peers and that information collection
should be embedded within the social context in which learning activity takes place
(Shepard, 2000; Gipps, 2002). This necessitates the use of informal methods for
collecting this information such as observation or oral questioning.
Moss (2008) asks what social practices within classrooms "count as assessment"
(p. 222). She suggests that what we have called assessment in the past, with a focus on
assessment instruments like tests and the evidence of learning they provide, is too
narrow. Rather, the conceptualization of assessment should "incorporate all of the
evidence-based evaluations and judgments that occur in interaction in classroom
learning environments," (p. 223). In other words, the social practice of assessment in
classrooms must include specific consideration of less formal interactions regarding
evidence of student learning.
Jordan and Putz (2004) define a continuum of assessment practice from formal
to informal that includes three distinct types of assessment: documentary, discursive
and inherent. The most formal type of assessment identified by Jordan and Putz (2004),
documentary assessment, is what we have traditionally called assessment within K-12
contexts. It includes the end of unit tests, weekly quizzes, and also those tests
administered outside of the classroom including annual state assessments or district
interim assessments. These assessments are "documentary" because they result in
documentation that is available outside of the classroom. At the other end of the
continuum, Jordan and Putz (2004) describe inherent assessment as "a natural part of all
socially situated activities" (p. 348). Within the context of conversation, they offer this
example of inherent assessment, "a listener looks puzzled the speaker rephrases what
she just said" (p. 348). Inherent assessment happens within a classroom context when a
teacher listens in on a group of students engaged in a learning activity, hears that their talk is on track and about the learning focus, and moves on to the next group without
comment. Inherent assessment can include the daily activity and social interaction
within classrooms between and among teachers and learners. In the middle of their
continuum, Jordan and Putz (2004) describe discursive assessment as activity that
"makes the unspoken, inherent assessment explicit" (p. 350). Discursive assessment is
the "reflective talk generated by a group of people engaged in a particular activity,
about that activity" (p. 350). Within the context of a K-12 classroom, oral feedback
shared with a group of students is an example of discursive assessment. Because the
group shares discursive assessment, unlike inherent assessments, it becomes something
that the group can refer back to later or what Jordan and Putz call a "social object."
Jordan and Putz assert that to be an effective and accepted member of any group, "the
ability to talk about the ongoing activity in an evaluative way to produce a discursive
assessment is crucial" (p. 350). In other words, they define discursive assessment as
legitimate peripheral participation. Discursive assessment (or informal assessment) is a
key element of the theoretical framework for this study. This means I will consider the
degree to which teachers and students engage in discursively assessing learning, the roles students play in these processes, and how students talk about their own learning or the learning of their peers in evaluative ways. This also means I will investigate when
and how student work and/or oral feedback about student work, provided to groups of
students, becomes a social object.
It is from Vygotsky's concept of the zone of proximal development that
Gallimore and Tharp (1990) derive a definition of instruction as, "assisting (emphasis
added) performance through the Zone of Proximal Development," (p. 177). This is
consistent with Wells's (1999) description of the teacher role as "engaging with learners
in activities to which they are committed, observing what they can already do unaided;
then providing assistance (emphasis added) and guidance that helps them to . . . bring
the activity to a satisfactory completion," (p.159). The assistance provided by teachers is
dependent on the teacher identifying, through assessment (conceptualized as evidence
based evaluations), the zone of proximal development of the learner. The zone of
proximal development identifies the window for instruction, or at what level the
assistance should be targeted without regard for the form of that assistance (Wells,
1999). Once targeted at their zone of proximal development, assistance provided by
teachers can take many forms, including the following: modeling, identification of
appropriate tasks or learning opportunities, practical scaffolding through feedback,
guidance, or cognitive structuring, providing explicit explanations of principles and
procedures, or providing encouragement (Gallimore & Tharp, 1990; Wells, 1999; Wells
& Claxton, 2002; Shepard, 2006). Framed as one component of formative assessment
practice, these instructional moves all become descriptions of the potential uses that
teachers can make of student learning data.
Moss (2008) asserts that understanding formative assessment as social practice
involves considering the scale of teachers' use of assessment in determining what to do
next. The scale of use could range from what to do next in an interaction with an individual student, to an interaction with a group of students, to adjusting the current activity of the entire class, to rearranging plans for the entire unit or school year. This
idea that the scale of uses teachers make of evidence about student learning matters is
also included in my theoretical framework.
Identifying learners' zone of proximal development through assessment
(frequently informal) and then using the zone of proximal development as the target for
instruction (defined as assistance provided by the teacher) aligns directly with the
model of formative assessment as articulated by Sadler (1989) and summarized in the following formative assessment questions: 1) Where do you want to go? 2) Where are you now? and 3) How will you get there? (Heritage, 2010; Shepard, 2006). According to Shepard (2006), "This formative assessment model . . . directly corresponds to the zone
of proximal development," (p. 628). The first two questions define the boundaries of the
learners' zone of proximal development. That is, the learners' zone of proximal
development is either between them or beyond "where the learner wants to go." The
third question, how will you get there, identifies the target for the instructional
assistance that is provided within the zone of proximal development. Sadler (1989)
additionally explains that only assistance that helps students make determinations
about "how they will get there" facilitates learning. Thus, instructional uses of
information about learning should direct learners towards where they want to go (their
learning goal) but should help them take steps within their zone of proximal
development. This also provides an explanation for why learners might fail to respond
to instruction, if the suggested next steps are beyond their zone of proximal
development.
Applying socio-cultural concepts to the theoretical framework.
What is the role of formative assessment practice in learning? The socio-cultural
answer to this question is that formative assessment culturally and socially mediates
learning. Figure 2.2 is a modified version of the triangle originally proposed by Vygotsky
(1978) to illustrate how cultural tools mediate learning. In this figure, formative
assessment replaces Vygotsky's generic tool as a mediator of student learning to
illustrate the role of formative assessment practice in learning.
Figure 2.2. An illustration of formative assessment as a mediator of learning.
Participation in formative assessment as a social practice within classrooms offers a perspective on how learning takes place. When formative assessment mediates student
mental representations, then changes will be evident in their use of disciplinary
language, regulation of their learning activities, and how they approach disciplinary
problems. When formative assessment mediates student participation in current and/or
future learning activity, this will be evident in their discussion with peers, how they
participate in learning activity, and the products they create during learning activity. The
feedback learners receive about the progress of their learning also can serve as a
semiotic mediator of learning, whether it involves interaction with other learners, the
teacher, or other cultural tools (such as written text). It provides the external speech
(the language, words, and symbols) that become the "inner" speech of students' higher
mental processes, including understanding of and reflection on disciplinary concepts.
Effective feedback can change student mental representations, amplify and improve
their self-regulation and may transform their participation in learning activity. Examples
include when feedback on a piece of written work results in learners changing their
approach to similar tasks in the future, or when oral feedback about the process in
which students are engaged during a group activity changes the next steps they take, or
how they approach the next activity.
Formative assessment practices mediate teacher instructional practice,
transforming both teacher thinking about learning activity and how s/he shapes the
learning activity within the classroom. As Moss (2008) describes it, formative
assessment becomes "a way of looking at the evidence available for the ongoing
interactions in the learning environment" (p. 224). This should be evident in teacher talk
about their practice: how the teacher plans learning activity, and if and why s/he deviates from the plan. This should also be evident in the learning activity that occurs in the classroom and in the changes the teacher makes at different scales of instructional practice based on student learning data, from adjustments to interactions with an
individual student or small groups of students during learning activity, to adjustments to
the current or a subsequent learning activity for the entire class, to changes in how
students are grouped for subsequent learning activity, to rearranging plans for the
entire unit or school year.
A socio-cultural perspective of learning helps to explain not only what it means
for formative assessment, as a social practice, to mediate student learning but also on what analysis should focus: learning activity as it occurs within the context of the classroom. This perspective also identifies key characteristics of the social context of the classroom to consider as shapers of the social practice of formative assessment.
This perspective emphasizes the importance of interrogating the tools used as part of
formative assessment practices. The tools include the data collected as part of formative
assessment practices. For formative assessment practice to mediate learning, data
should be collected during the learning process (as well as after) and should reflect
students' efforts when being assisted by more knowledgeable others or other culturally
mediating tools. This highlights the importance of teachers using informal collection
methods such as observation, or oral questioning. Information about learning should be
interpreted in reference to the desired learning target, making it possible to identify the
learners' zone of proximal development and its relationship to the learning goal (i.e.
whether the learners zone of proximal development contains the learning goal or not).
A socio-cultural orientation to learning suggests the need to consider not only the specific formative assessment practices evident in classrooms and the tools used,
but also who participates in the classroom, the nature of that participation (student and
teacher roles), and the rules and norms that guide participation (Moss, 2008). The types of roles learners should play, generally in the classroom and as participants in the social
practice of formative assessment, are roles characterized as legitimate peripheral
participation. More knowledgeable others, whether the teacher or other students, play
roles of assisting learning within the students' zone of proximal development.
Educators' roles should include assisting students in constructing their next steps to
either internalize understanding or transform their participation in learning activity
within their zone of proximal development. This assistance could be in the form of
discursive assessment which, when provided orally to a group of students or the full
class, becomes a social object that both learners and the teacher can refer back to later.
Assistance could be grouping students with more knowledgeable peers. In fact, it may
be important for students to get support from each other, as this is one way that
students can move towards more full participation. It can be the teacher using questioning to scaffold learners' next steps. I follow Moss (2008) and Oxenford-O'Brian, Nocon, and Sands (2010) in using the components of an activity system proposed by Engestrom (1987, 1993, 2001) as an analytical tool to draw attention to the context of the K-12 classroom within which formative assessment practices occur.
Figure 2.3. The components of an activity system as identified by Engestrom.
This framework, depicted in Figure 2.3, represents an expansion of Vygotsky's
mediational triangle to include the social context of the classroom. By identifying
theoretical categories to describe the social context (object, tools, participants, roles,
and rules or norms), I foreground this context and the relationship between the
components of the classroom social context and formative assessment practice.
Bringing Together the Components of the Theoretical Framework
The development of my theoretical framework for this study began with an historical derivation of the current practice of formative assessment evident in K-12 classrooms within the U.S. today. I arrived at the concept of a formative assessment episode to distinguish formative assessment practice from other instructional practices evident in K-12 classrooms and to show formative assessment practice as
including specific actions. This provided an initial anchor for the theoretical framework
for this study.
Next, I reviewed empirical research related to formative uses of student learning data, the final step in a formative assessment episode that distinguishes it from other
types of assessment activity. This review provided additional details that will allow me
to distinguish between different types of formative data use within classrooms, and
guidance regarding what attributes of those types of data use may be important.
However, much of this empirical literature did not offer a specific theory of learning as a
framework from which to interpret the findings. So, while I found and reported
evidence of a relationship between specific attributes of different types of data use and
improvements in student learning, why different attributes were important was not
always clear. This resulted in a list of potentially important attributes for formative
feedback, self-assessment, peer-assessment, formative questioning, and instructional
adjustments based on student learning data that I will consider as I examine these types
of data use in two mathematics classrooms.
Finally, I applied a socio-cultural perspective to describe the role of formative
assessment in teaching and learning, and to explain the interaction between formative
assessment practice and the social context of a K-12 classroom. Through this process, I
identified the role of formative assessment as a social practice, or tool that mediates
teaching and learning. I developed a description of how formative assessment mediates
instructional planning, instructional content and strategies, and student learning tactics,
which became the second anchor for my theoretical framework for this study.
I also identified critical attributes of the different actions of a formative
assessment episode from a socio-cultural perspective. Data collection should occur during, rather than after, the learning process, while students have access to assistance from more knowledgeable others, and may be done informally. Data analysis and
interpretation should consider the learners' zone of proximal development. Data use
should include feedback that learners can use and internalize as inner speech about
their learning. Both teachers and students can participate in providing support to other
students in their zone of proximal development, once identified through formative
assessment. Understanding teacher use of learning data to determine what to do next
instructionally includes considering the scale of that use.
Describing formative assessment as a social practice also clarified that formative
assessment could mediate and be mediated by other components of the social context
of the classroom within which it occurs. Considering formative assessment as social
practice foregrounds the relationship between formative assessment and the social
context of the classroom as critical to understanding how formative assessment works. This established that my analysis should focus on the learning activity within classrooms of which formative assessment is an integral part, or how formative assessment is something teachers and students do together. It also suggests that my analysis must consider the specific elements of the social context of the classroom overall and how it changes for different learning activities.
Chapter III
Methodology
This chapter describes the research process in which I engaged to deepen
understanding of formative assessment practice as it occurred within the context of
upper elementary/middle-level mathematics instruction. It includes information about
the purpose and associated design of the study, including restating and further
explicating the research questions that guided the study. It also provides details related
to what data were collected about which subjects, where, when, and how. Finally, it
describes how I analyzed the data to produce the findings presented in later chapters. This
chapter includes the following sections: Purpose, Research Questions (and
propositions), Data Collection, and Data Analysis.
Purpose
The purpose of this study was to further develop and deepen understanding of
formative assessment practice within the K-12 classroom context applying a socio-
cultural theoretical framework. In the most general sense, this research describes the
most critical attributes of formative assessment practice as it occurs within actual
classrooms, and explains the relationship between formative assessment and the social
context of K-12 classrooms. My aim was to illuminate how formative assessment
practice works in real classrooms and how the construction of the social context
influences formative assessment practice.
To meet these purposes, I employed a cross-case analysis, considering the
multiple formative assessment episodes occurring in one upper-elementary and one
middle-level classroom during one mathematics instructional unit in each. The focus of
each case was episodes of formative assessment practice that occurred within the context
of an instructional unit.
Case studies depend on an analytical rather than statistical approach for
interpreting findings. I based my approach on theoretical propositions and a defined
logic of replication "analogous to that used in multiple experiments" (Yin, 2003, p. 47).
That is, the study started with a series of propositions about formative assessment
practices in situ. I compared various sources of data against the propositions to establish
a chain of reasoning regarding the dimensions and attributes of formative assessment
practice, and the critical aspects of the classroom context that facilitated or did not
facilitate this practice. Through an iterative process of comparing the data to the
propositions, revising the statements, then comparing additional data to the revised
propositions, I established a chain of reasoning. I used this iterative process to analyze
data from each classroom at the formative assessment episode level within the social
learning context of the classroom. Then, I compared the two classrooms further to
develop my chain of evidence about each proposition.
This case study approach requires clearly defined propositions, a description of
how the propositions relate to the unit of analysis, explication of the logic linking data
from different sources to each of the propositions, and detailed description regarding
how I analyzed data both to adjust and to make judgments about the propositions (Yin,
2009). I describe these study components in the following sections: 1) Research
Questions (research questions and associated study propositions), 2) Data Collection,
and 3) Data Analysis.
Research Questions
This study was guided by two research questions: 1) What are the critical
attributes of formative assessment practice as it occurs in situ? 2) How does the social
context of a K-12 classroom (i.e. object, tools, participation/roles, and rules/norms)
influence formative assessment practice in situ?
Propositions, related to each research question, actualized the theoretical
framework for the study. They defined the boundaries around what I considered within
the scope of the study, and determined what data we collected to respond to each
research question (Yin, 2009). The propositions also established the starting point for
data analysis. The initial propositions associated with each research question emerged from my theoretical framework and included the following (a brief illustrative coding sketch follows the full list of propositions).
Propositions related to critical attributes of formative assessment episodes.
1. Formative assessment episodes are defined as including the following teacher
and/or student actions: a) the teacher identifying (implicitly or explicitly) the
learning target, and clarifying the learning target with the students; b) the teacher
collecting data about student learning; c) the teacher and/or students analyzing
learning data; d) the teacher and/or students interpreting data about student
learning with regard to what it means for students' learning and/or instructional
practice; and e) the teacher making adjustments to learning activity, and/or
student(s) making adjustments to learning tactics.
2. When formative assessment practice occurs, a variety of assessment methods,
including informal methods, are used during (in addition to after) learning activity to
collect information about student learning.
3. Teacher use of information to determine what to do next occurs at different scales;
from "in-the-moment" adjustments to instruction or learning tactics, to shifts in the
next activity, to changes in the unit as a whole or the next time the unit is taught.
Propositions for classroom context.
4. Key features of the classroom learning context are perceptible and can be described
within the following categories: the object or focus of activity, the tools that mediate
learning (physical and semiotic), who participates, how they participate (roles and
responsibilities), and the rules or norms guiding participation.
5. The classroom learning context shapes how data about learning is used by teachers
and students.
6. The features of cultural tools used during formative assessment episodes and how
they are used illuminate critical features of formative assessment practices.
7. In a classroom where formative assessment practices are evident, information about
learning (including mistakes or misunderstandings) becomes a "social object" (a
tool) that is valued both in terms of how it is described and how it is used.
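The following minimal Python sketch illustrates one hypothetical way that propositions one through three could translate into episode-level coding: a single formative assessment episode is recorded as the actions observed, the data collection methods used, and the scale at which the information was used. The field names, category labels, and example episode are illustrative assumptions, not the coding instrument actually used in this study.

# A hypothetical record structure for coding a single formative assessment
# episode: the five actions (a-e) named in proposition 1, the collection
# methods named in proposition 2, and the scale of use named in proposition 3.

from dataclasses import dataclass

ACTIONS = (
    "identify_learning_target",    # a) identify/clarify the learning target
    "collect_learning_data",       # b) collect data about student learning
    "analyze_learning_data",       # c) analyze the learning data
    "interpret_learning_data",     # d) interpret what the data mean
    "adjust_activity_or_tactics",  # e) adjust learning activity or learning tactics
)

SCALES = ("in_the_moment", "next_activity", "unit_or_next_time_taught")

@dataclass
class FormativeAssessmentEpisode:
    actions_observed: list    # subset of ACTIONS, in the order observed
    collection_methods: list  # e.g. "observation", "oral questioning", "quiz"
    scale_of_use: str         # one of SCALES
    notes: str = ""

    def is_complete(self):
        """An episode counts as complete when all five actions are present."""
        return set(ACTIONS) <= set(self.actions_observed)

if __name__ == "__main__":
    episode = FormativeAssessmentEpisode(
        actions_observed=list(ACTIONS),
        collection_methods=["oral questioning", "observation"],
        scale_of_use="in_the_moment",
        notes="Teacher re-voiced a student's strategy and adjusted the next prompt.",
    )
    print(episode.is_complete())           # True
    print(episode.scale_of_use in SCALES)  # True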
Data Collection
My description of the data collection for this study includes the following topics:
the unit of analysis (what subjects, where and when); the data sources (what data we
90


Full Text

PAGE 1

FORMATIVE ASSESSMENT IN CONTEXT By JULIE OXENFORD OÂ’BRIAN B.A., University of Colorado, Boulder, 1991 M.P.P., Georgetown University, 1996 A thesis submitted to the Faculty of the Graduate School of the University of Colorado in partial fulfillment of the requirements for the degree of Doctor of Philosophy Educational Studies and Research 2013

PAGE 2

2013 JULIE OXENFORD OÂ’BRIAN ALL RIGHTS RESERVED

PAGE 3

ii This thesis for the Doctor of Philosophy degree by Julie Oxenford OÂ’Brian has been approved for the Educational Studies a nd Research Program By Deanna Iceman Sands, Chair and Advisor Elliott Asp Honorine Nocon Maria Araceli Ruiz-Primo September 2nd, 2013

PAGE 4

iii Oxenford OÂ’Brian, Julie (Ph. D., Educational Studies and Research) Formative Assessment in Context Dissertation directed by Professor Deanna Iceman Sands. ABSTRACT This dissertation responds to critical gaps in current research on formative assessment practice which could limit successful implementation of this practice within the K 12 classroom context. The study applies a socio cultural perspective of learning to interpret a cross case analysis of formative assessment practice occurring during one mathematics instructional unit in a 5th and one in a 6th grade classroom. It illustrates how a fully defined theoretical foundation deepens understanding of the roles of formative assessment in learning, posits a working definition by which the describe what formative assessment practice looks like and sounds like as it is occurring in actual classrooms, and explains how the classroom social context influences formative assessment practice. The study has implications for future researchers investigating formative assessment practice; practitioners interested in implementing formative assessment practice; and policy makers evaluating the effectiveness of teachersÂ’ instructional practice. The form and content of this abstract are approved. I recommend its publication. Approved: Deanna Iceman Sands

PAGE 5

iv ACKNOWLEDGEMENTS So many people provided support and encouragement during the development of this dissertation. First, I want to thank my husband, Jeff Oxenford and my daughter Alyse Oxenford for tolerating the gaps in my presence in their lives and their steadfast support throughout. I want to thank my parents, Betty Keddington, Don O’Brian and Nancy O’Brian, for maintaining faith that I would finish even when I lost mine. I want to thank my colleague and friend Mary Romke for reminding me that I would finish and for celebrating along the way. I want to thank my advisor, Deanna Sands, for reading and re reading and challenging me to continue to refine even when I just wanted to be done. I want to thank the other members of my committee, Elliott Asp, Honorine Nocon, and Maria Ruiz Primo for their attention to the details, for reading and re reading and still having the energy to talk with me about my findings. I want to thank Helen Douglas for helping me to collect the data that I analyzed in this study and caring about the outcome. I want to thank Bonita Hamilton for letting me think aloud, providing encouragement when I needed it most and helping me figure out how to identify formative assessment episodes in context. I want to thank members of the DEMFAP team who helped organize and manage all of the data used in this study – Jen Feehan, Erin Sago, and Kendra Occhipinti. I couldn’t have kept it together without you. I want to thank two members of my Center staff who also helped me manage data, and provided copy edits and transcription – Travis Grotewold and Star Hess. Finally, I want to thank the student services staff who helped me overcome many bureaucratic hurdles that could have paralyzed my progress, Rebecca Schell and Sandra Snyder Mondragon.

PAGE 6

v TABLE OF CONTENTS CHAPTER I: INTRODUCTION 1 Defining the Problem 2 Background on the Problem 4 Theoretical Framework 14 Research Questions 16 Methodology 16 Researcher Perspective 20 Organization 21 II: REVIEW OF THE LITERATURE AND THEORETICAL FRAMEWORK 22 An Historical Derivation of the Practice of Formative Assessment 26 Empirical Research on the Attributes of Formative Assessment Practice 39 A Socio Cultural Interpretation of the Role of Formative Assessment in Learning 68 Bringing Together the Components of the Theoretical Framework 84 III: METHODOLOGY 87 Purpose 87 Research Questions 89 Data Collection 90 Data Analysis 105 IV: CASE REPORT ONE 131 Critical Attributes of Formative Assessment Practice 132 How the Social Context Influences Formative Assessment Practice in Situ 169 V: CASE REPORT TWO 226

PAGE 7

vi Critical Attributes of Formative Assessment Practice 227 How the Social Context Influences Formative Assessment Practice 278 VI: CROSS CASE ANALYSIS 337 A Comparison of the Critical Attributes of Formative Assessment Practice 337 A Comparison of How the Social Context Influenced Formative Assessment Practice 360 VII: CONCLUSIONS AND IMPLICATIONS 398 Introduction 398 Discussion of the Critical Attributes of Formative Assessment Practice 400 Discussion of How Social Context Influences Formative Assessment Practice 435 Limitations of Findings 449 Conclusions and Implications for Future Research 451 REFERENCES 455 APPENDIX A: INTERVIEW PROTOCOLS 464 APPENDIX B: OBSERVATION NOTES TABLE 480 APPENDIX C: DAILY CLASS SESSION MEMO EXAMPLE 481 APPENDIX D: TEACHER DEVELOPED MATERIALS USED IN LIZAÂ’S CLASSROOM 491 APPENDIX E: TEACHER DEVELOPED MATERIALS USED IN KARIÂ’S CLASSROOM 499

PAGE 8

vii LIST OF TABLES TABLE 2.1 Historical Development of Formative assessment in the U.S. ................................ 23 2.2 The Role of Feedback in Student Self Regulation ................................................... 47 3.1 Data Sources for each Proposition .......................................................................... 98 3.2 Learning Activity Names .......................................................................................... 108 3.3 Formative Assessment Episode Codes..................................................................... 120 4.1 Frequency of Types of Uses of Student Learning Data (Liza) .................................. 135 4.2 Percent of Data Collection Method by Types of Data Use (Liza) ............................. 149 4.3 Learning Targets and Mathematical Tasks (Liza) .................................................... 152 4.4 Analysis and Interpretation Approaches by Work Product (Liza) ............................ 162 4.5 Levels of Uses of Student Learning Information by Types of Uses (Liza) ................ 163 4.6 Teacher Developed Tools (Liza) ............................................................................... 177 4.7 Learning Activity Participation Structures and Tools (Liza) ..................................... 185 4.8 Typical Daily Activity Pattern ................................................................................... 194 4.9 Formative Assessment Episode Frequency by Learning Activity (Liza) ................... 202 4.10 Learning Activity and Participation Structure (Liza) ................................................ 218 5.1 Frequency of Types of Uses of Student Learning Data (Kari) ................................. 230 5.2 Data Collection Methods by User ............................................................................ 252 5.3 Percent of Data Collection Methods by Types of Data Use (Kari) ........................... 255 5.4 Learning Targets and Mathematical Tasks (Kari) ..................................................... 258 5.5 Analysis and Interpretation Approaches by Work Product (Kari) ........................... 266 5.6 Levels of Uses of Student Learning Information by Types of Uses (Kari) ................ 267 5.7 Levels of Use by Who Was Using ............................................................................. 269 5.8 Teacher Developed Tools (Kari) ............................................................................... 285

PAGE 9

viii 5.9 Learning Activity Participation Structures and Tools (Kari) ..................................... 290 5.10 Formative Assessment Episode Frequency by Learning Activity (Kari) ................... 301 5.11 Teacher Object and Student Engagement by Type of Learning Activity ................. 305 5.12 Frequency of Formative Assessment Episodes by Tools Used ................................ 312 5.13 Learning Activity by Participation Structure (Kari) .................................................. 325 6.1 Relative Frequency of Types of Data Use by Classroom .......................................... 340 6.2 Data Collection Methods across Classrooms ........................................................... 348 6.3 Types of Learning Activities across Classrooms ....................................................... 365 6.4 Comparison of Participation Structures for Learning Activities .............................. 385 7.1 Potentially Critical Attributes of Different Types of Data Use ................................. 416 7.2 Attributes of Formative Assessment Episodes ........................................................ 433

LIST OF FIGURES

1.1 The actions included in a formative assessment episode
1.2 The components of the social context of a classroom
2.1 The actions included in a formative assessment episode
2.2 An illustration of formative assessment as a mediator of learning
2.3 The components of an activity system as identified by Engeström
3.1 The units of analysis for the social context of formative assessment episodes occurring within a mathematics instructional unit
3.2 The data collection sequence
3.3 An illustration of my analysis approach
3.4 An example of an expanded daily class session memo outline
4.1 An illustration of data collection methods

Chapter I
Introduction

The purpose of the research study described in this dissertation was to illuminate how formative assessment practice works in real classrooms and how the construction of the social context of classrooms influences, and is influenced by, the formative assessment practice that occurs within them. Two theses guided this study. The first was that I could establish what counts as formative assessment from a socio cultural perspective (a formative assessment episode) and use that definition to identify and describe the attributes of formative assessment practice within two case study classrooms. The second was that the social context of the classrooms would influence, and be influenced by, the formative assessment occurring within them. To examine these theses, I applied a socio cultural perspective of learning to interpret a cross case analysis of formative assessment practice occurring during one mathematics instructional unit in a 5th and one in a 6th grade classroom. In this chapter, I provide context and introduce the major components of this dissertation. I first provide background information about the problem of focus, and then introduce the theoretical framework and its associated components, the research questions, and the research methods. Since no interpretation is independent of the interpreter, I share relevant personal background information to provide a window into the perspective I brought to the study. Finally, I describe the organization of the remainder of the dissertation.

2 Defining the Problem There is widespread agreement within the educational research community that formative assessment plays a critical role in learning (Black & Wiliam, 1998a; Hattie & Timperley, 2007; Li, Yin, Ruiz Primo & Morozov, 2011; Shepard, 2000). Numerous reports, studies, and “how to” books aimed at practitioners (K 12 teachers and administrators) recommend formative assessment among practices most likely to improve student learning (Brookhart, 2009; CCSSO, 2008; Hattie, 2009; Marzano, 2001; OECD, 2005; Popham, 2009, 2011; Wiliam, 2011). A substantial body of research evidence supports claims that formative assessment has a significant impact on student learning (Black & Wiliam, 1998a; Hattie, 2009; Hattie & Timperley, 2007; Herman, Osmundson, & Silver, 2010; James et al., 2007; Kluger & DeNisi, 1996; Rodriguez, 2004; Ruiz Primo & Furtak, 2006, 2007; Torrance & Pryor, 2001). This literature, however, has a number of critical gaps and recent studies have identified challenges for teachers in effectively implementing formative assessment practices (Heritage, Jones & White, 2010; Herman, Osmundson, & Silver, 2010). The first gap is the lack of a consistent definition of formative assessment practice and the critical attributes of that practice. Definitions vary from using a web based program to comment on students’ homework (Lipnevich & Smith, 2008), to providing extensive written feedback on student work (Ponte, Paek, Braun, Trapani, & Powers, 2009), to engaging students in a sequence of question answer response (Ruiz Primo & Furtak, 2007; Chin, 2006). The attributes necessary for a practice to be “formative assessment practice” are unclear within these various definitions. Even

authors of literature reviews who cite one another define the critical attributes of formative assessment somewhat differently (Black & Wiliam, 1998a; Brookhart, 2009; Frohbieter et al., 2011; Hattie & Timperley, 2007; Li, Yin, Ruiz Primo, & Morozov, 2010; Mory, 2004; Shute, 2008). Second, the lack of a fully defined theoretical foundation has hampered consistent conceptualization of formative assessment (Black & Wiliam, 2009; Brookhart, 2004; Perrenoud, 1998). In an influential review, Black and Wiliam (1998a) took a pragmatic approach to describing formative assessment practices with evidence of a positive impact on student learning without contextualizing the practices within a theoretical model of learning (Perrenoud, 1998). In some contexts, formative assessment has a positive impact on learning; in others, it does not (Kluger & DeNisi, 1996). However, without a theoretical framework, it is difficult to explain why. Finally, the lack of a consistently defined theoretical foundation for formative assessment has limited efforts to identify aspects of the context that amplify or diminish the effects of formative assessment practice. Without an understanding of the context within which formative assessment is most likely to have a positive impact on learning, educators and educational leaders will continue to implement these practices with uneven success. I responded to these gaps through the study described herein by applying a socio cultural perspective of learning to research the presence and critical attributes of formative assessment through a cross case analysis of formative assessment practices as they occur in two math classrooms (one 5th and one 6th grade). My primary

audience is researchers, with secondary audiences including practitioners and educational policy makers. The data collected for this study were part of a 4 year, IES funded research project, Developing and Evaluating Measures of Formative Assessment Practice (DEMFAP) (PIs: Ruiz Primo & Sands, 2009). I was part of the team that developed the initial DEMFAP project proposal and negotiated school district participation in the project. I was the lead researcher collecting data in one of the classrooms included in this study and shared lead responsibility for collecting data in the other classroom. The DEMFAP project has a broader focus than this study. It aims to develop measures to identify the presence and depth of formative assessment practice occurring within classrooms. This study only focused on identifying the critical attributes of formative assessment practice in two classrooms and explaining how the social context of the classrooms within which formative assessment practice occurred influenced that practice.

Background on the Problem

During the last twenty five years, a number of meta analyses and literature reviews have focused on the impact of formative assessment practice on student learning outcomes. In their 1998 review of over 250 empirical studies, Black and Wiliam (1998a) found that when teachers used assessment data formatively to inform their teaching and students' learning, they produced learning gains with effect sizes ranging between 0.4 and 0.7. The impact of this review within schools and districts was

magnified when practitioners were exposed to it through a companion article published in the Phi Delta Kappan (Black & Wiliam, 1998b), a practitioner focused magazine. While some claims in the Black & Wiliam review have come under fire (Bennett, 2011), interest in and commitment to formative assessment as a reform strategy remains strong, as evidenced by its inclusion in recent U.S. Department of Education Race to the Top grants (Herman, Osmundson, Dai, Ringstaff, & Timms, 2011). In addition, despite contradictory definitions, formative assessment has been considered a classroom version of "data driven decision making," a strategy many state and national level policy makers advocate as a mechanism for transforming low performing schools (Frohbieter et al., 2011). Initial evidence and subsequent policy support for formative assessment have been strong enough to ensure that policy makers and educators will continue to advocate for implementation of formative assessment within K 12 schools in the U.S., regardless of challenges to the research base. Given this context, it becomes critical to understand what formative assessment practice looks like in actual K 12 classrooms and how the classroom context influences formative assessment practice. I followed Shepard (2000), James (2008), and Black and Wiliam (2009) in building upon socio cultural theories of learning (Brookhart, 2004; Gipps, 1999; Lave & Wenger, 1991; Gee, 2008; Moss, 2008; Torrance & Pryor, 1998, 2001; Vygotsky, 1978; Wells, 1999) to contextualize formative assessment within a model of learning. Based on this perspective, I derived a definition of formative assessment practice, developed a description of the role of formative assessment practice in learning, and identified the

components of the social context of classrooms to consider as influencers or shapers of formative assessment practice.

Defining formative assessment. I considered the historical development of assessment practice generally, and classroom uses of assessment results more specifically, to derive a definition of formative assessment practice. My context for this derivation included the shifts in the dominant learning theories influencing public K 12 education in the U.S. from its inception. Once K 12 education became available to all citizens, supported by taxes, and mandated for certain ages of children, the public asked for an accounting of the value of that education. This accounting, in the form of student progress reports, was the first classroom use of assessment in the U.S. As access to secondary education expanded, more efficient methods for accounting for the value of schooling were developed, ultimately resulting in a 5 point grading scale used in the U.S. by 1918 (Guskey & Bailey, 2001) and still used today in many high schools (Marzano, 2006). Assessment administered outside of the classroom expanded significantly during the 1900s following publication of the first intelligence test in 1905 and of achievement tests intended to provide direction for failing schools in 1908. Both types of tests were standardized, intended to be objective, and designed to sort students based on ability (Shepard, 2006; Gipps, 1996). The dominant learning theory of the time, behaviorism, shaped these tests. Behaviorism advocated the use of testing to ensure students mastered one objective before proceeding to the next. Consistent with these theories,

experts developed achievement tests outside of classrooms, and districts used them to identify program level improvements rather than to support teachers' instructional decisions (Shepard, 2006). Behaviorist learning theories and associated achievement tests established beliefs among teachers that assessment needed to be separate from instruction, uniformly administered, and "objective" to be fair (Shepard, 2000). It was not until 1967 that the term "formative evaluation" was first used by Michael Scriven as a process that included using student assessment data to evaluate the effectiveness of school programs during, rather than just after, implementation (Cizek, 2010; Wiliam, 2010). Subsequently, in 1971, Bloom suggested that classroom assessment should be part of formative evaluation "to both provide students feedback on their learning progress and to guide correction of learning errors" (Guskey, 2010, p. 108). Initial interest in formative uses of classroom assessment occurred as cognitive theories of learning – emphasizing the importance of the process of learning rather than just mastery of discrete objectives – were gaining prominence in the U.S. However, despite the growing influence of cognitive learning theories on instructional practice, it was not until 1989 that U.S. educational measurement experts first suggested a shift "away from pass fail post tests of student mastery to richer assessments of students' understandings and proficiency in a domain" (Shepard, 2006, p. 625). Even then, they developed these newer tests outside of the classroom and administered them in standardized formats (Shepard, 2006). Today, almost all states use the same type of

achievement tests for school and district accountability, and many states are beginning to use these tests in educator evaluation. The measurement community outside of the U.S. responded more quickly to changes in learning theories. The Assessment Reform Group was formed in the United Kingdom in 1989 to focus on the relationship between classroom assessment and teaching and learning (Shepard, 2006). Two of its members, Paul Black and Dylan Wiliam (1998a), conducted the influential review of literature on formative assessment practice and learning, referenced above, launching much of the current interest in formative assessment practice. The popularization of formative assessment following the Black and Wiliam (1998a) review resulted in widespread appropriation of the term. In the U.S., vendors of interim/benchmark assessments (formal tests administered several times during a school year) called their products "formative assessments," making inappropriate claims to this evidence as supporting the use of these products (Shepard, 2005a). In 2006, because of widespread inconsistency in the definition of formative assessment in the U.S., the Council of Chief State School Officers (CCSSO) brought together educational leaders and researchers to develop a common definition (CCSSO, 2008). They defined formative assessment as "a process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning to improve students' achievement of intended instructional outcomes" (p. 3). This definition described formative assessment as a process or practice that occurs during the learning process, rather than as a type of instrument. It also highlighted the relationship

between formative assessment and feedback, "formative assessment . . . provides feedback [emphasis added] to teachers and students" (CCSSO, 2008). In developing the definition of formative assessment used in this study, I began with the CCSSO definition but also considered efforts by Ramaprasad (1983), Sadler (1989), and Black and Wiliam (1998a) to define formative feedback as a means of adding greater depth to the CCSSO definition. These authors suggested that feedback is formative only if it includes information about actual performance, some point of reference for contextualizing that information, and mechanisms for comparing the two and closing any apparent gap. I integrated the CCSSO definition of formative assessment and these additional clarifications regarding when feedback is formative in my definition. In collaboration with my colleagues from the DEMFAP project, I built on descriptions of other authors (Airasian, 1991; Bell & Cowie, 2001; Mavrommatis, 1997) who coined the term "assessment episode" to operationalize my definition of formative assessment practice within a classroom setting. I defined a formative assessment episode as encompassing both the concept of an episode (something that involves discernible actions) and the definitions of formative assessment and formative feedback described above. Consistent with the DEMFAP project, I defined a formative assessment episode as including the following steps or actions: 1) Specifying (explicitly or implicitly) a learning target(s); 2) Collecting data (through formal or informal methods) about the actual level of student learning in reference to the learning target(s); 3) Analyzing student learning data (in reference to the learning target); 4) Interpreting student

learning data (in reference to teacher practice and/or student learning needs); and 5) Taking action, based on the interpretation, to improve student learning and/or teacher practice. Both the teacher and students can be the actors taking these steps. However, assessment episodes are "formative" only if they include the last step, the teacher and/or student(s) using the information to improve teaching and learning.

A socio cultural interpretation of the role of formative assessment in learning. My next step was to describe the role of formative assessment in learning from a socio cultural perspective and identify what aspects of the social context of the classroom had the potential to influence formative assessment practice. My description stemmed from socio cultural theories of learning based upon the seminal contributions of Lev Vygotsky and upon the many authors who expanded Vygotsky's work to school based contexts (Cole, 1990, 1999; Cole & Engeström, 2006; Gallimore & Tharp, 1990; Giest & Lompscher, 2003; Gipps, 1999, 2002; Lave & Wenger, 1991; Moss, 2008; Shepard, 2000; Torrance & Pryor, 1998; Wells, 1999, 2002; Wertsch, 1990). Vygotsky (1978) conceptualized learning and development as situated within a social context. According to Vygotsky, learning is internalized social interaction. This conceptualization of learning places socio cultural theories somewhere between behaviorist and constructivist theories. Conceptualizing learning as social emphasizes the importance of considering teacher to student and student to student interaction in any investigation of learning (Wertsch, 1990). I describe this as the social context of the classroom. As a result of taking this perspective, I focused my analysis on the learning activity occurring within classrooms.

Building upon Vygotsky's work, Lave and Wenger (1991) developed the concept of legitimate peripheral participation (LPP) to describe how individuals participate in activity that includes learning. LPP suggests that "how" learners participate in the social interaction that happens in the classroom matters, and emphasizes the importance of considering teacher and student roles in learning activity and how those roles are structured by behavioral norms. Vygotsky (1978) introduced two additional concepts that helped to further explicate the role of formative assessment in student learning – tools as mediators of learning, and the zone of proximal development. Vygotsky (1978) asserted that learning is always mediated by culturally and historically developed artifacts and tools; tools are more than aids to learning, they shape the learning itself. According to Wertsch (1990), we can only understand human activity if we consider the tools that mediate it. From this perspective, formative assessment practice itself is a cultural tool that mediates learning. In other words, we can conceptualize the role of formative assessment in learning as a social practice that mediates learning. If formative assessment is a social practice, then cultural tools can also mediate formative assessment. Examples of the tools that mediate formative assessment include the following: tasks used in data collection, learning targets on which data collection focuses, and protocols or material resources used to structure informal interactions regarding evidence of student learning. I considered each of these types of tools in developing my understanding of how the social context of a classroom influences formative assessment practice.

12 Vygotsky’s concept of the zone of proximal development (ZPD) was perhaps his most important contribution to explaining how learning occurs (Wells, 1999). A learner’s ZPD is the knowledge, skills and understanding that she/he could not demonstrate on her/his own, but could with help (Vygotsky, 1978). This conceptualization of the ZPD highlights one key difference between socio cultural learning theories and constructivist theories; it shifts the focus away from what learners can demonstrate on their own (constructivist) and towards what they can demonstrate with help in the process of learning (socio cultural). Each learner’s ZPD becomes the target for assistance provided by the teacher. By establishing a target for instructional assistance, formative assessment also mediates teacher instructional practice. The ZPD further implies that it is important to collect data about student learning during learning activity, while students have access to help from the teacher or their peers, and within the social context in which learning activity takes place (Shepard, 2000; Gipps, 2002). This means that one critical attribute of data collection, as part of formative assessment practice, is that it must occur during learning while students have access to help from the teacher or other students. Moss (2008) asked what should count as assessment in classrooms from a socio cultural perspective. She recommended a definition of assessment that goes beyond instruments used to collect student performance information and defined assessment as social practice. She asserted that informal interactions regarding evidence of student learning are as important to consider as the formal ones (those that are more typically called ‘assessment’). Jordan and Putz (2004) also defined a continuum of assessment

practice from formal to informal that included documentary (resulting in written or electronic evidence of learning), discursive, and inherent assessment. Both discursive and inherent assessment fit within Moss's expanded definition of what should count as assessment in classrooms. Jordan and Putz further described something they called a "social object," shared discursive assessment that becomes something to which the teacher and students can later refer. Both Moss's description of what should count as assessment from a socio cultural perspective and the Jordan and Putz continuum of assessment practice highlight the importance of explicitly considering informal data collection as part of what counts as formative assessment. A socio cultural orientation to learning helps define the role of formative assessment in learning and suggests a framework for identifying the critical attributes of formative assessment practices. This perspective highlights the need to consider the tools used, who participates, the nature of that participation (student and teacher roles), and the rules and norms that guide participation in explaining how the social context of the classroom influences and is influenced by formative assessment practice (Moss, 2008). Thus, I followed Moss (2008) and Oxenford O'Brian, Nocon, and Sands (2010) in using the components of an activity system proposed by Engeström (1987, 1993, 2001) as an analytical tool to draw attention to the social context of K 12 classrooms within which formative assessment practices occur.

Theoretical Framework

My theoretical framework emerged from the literature and included two constituents, a definition of a formative assessment episode and identified elements of the classroom social context that influence formative assessment practice. I historically derived a definition of formative assessment and then operationalized the actions included in the practice of formative assessment as a formative assessment episode. This allowed me to distinguish formative assessment from other related practices found in classrooms. Figure 1.1 depicts the actions included in a formative assessment episode.

[Figure 1.1. The actions included in a formative assessment episode: identifying learning target(s), collecting data, analyzing data, interpreting data, and using data.]

I adopted a socio cultural perspective to describe the role of formative assessment in learning in the K 12 educational context, which supported my identification of the key elements of the social context of classrooms that are potential shapers of formative assessment practices. I adopted the components of an activity system proposed by Engeström (1987, 1993, 2001) as an analytical tool to draw attention to the social context of K 12 classrooms within which formative assessment practices occur. As a result, I focused on the following aspects of the social learning context: the object (the purpose of learning activity), tools (both physical and semiotic) that mediate learning, participants and roles (how teachers and learners participate in learning activity), and rules or norms (what guides or structures teacher and student participation in learning activity). Figure 1.2 identifies the theoretical categories I later used to describe the social context of K 12 classrooms.

[Figure 1.2. The components of the social context of a classroom: subject, object, outcome, tools, rules or norms, roles, and participants.]

Research Questions

My study was guided by two research questions: 1) What are the critical attributes of formative assessment practice as it occurs in K 12 classrooms? and 2) How does the social context of a K 12 classroom (i.e., object, tools, participation/roles, and rules/norms) influence formative assessment practice in situ? I focused my study further through propositions for each question that actualize the details of the theoretical framework while simultaneously creating boundaries for data analysis. I introduce these propositions in Chapter 3.

Methodology

I chose to conduct individual case studies in two classrooms and a cross case analysis to meet the purpose and respond to the study research questions. Including more than one classroom in the study enhanced my description and explanation of formative assessment practice in situ. I considered the multiple formative assessment episodes occurring in one upper elementary and one middle level classroom during one mathematics instructional unit in each classroom. During the second year of the DEMFAP project, we collected data in fourteen upper elementary and middle level classrooms in three different Colorado school districts. The two classrooms selected for this study were among the fourteen. One focus of each case was the attributes of formative assessment practices evident across the many episodes of formative assessment practice that occurred within a single instructional unit. The second focus of each case was the learning activities within which formative assessment episodes occurred and the relationship

between the social context of those learning activities and the attributes of the formative assessment episodes. Formative assessment practices occur during instructional units within the social context of the classroom; however, that context often varies as the teacher engages students in different learning activities during the instructional unit. As a result, I defined a learning activity as a subcomponent of a class session identified by a change in one or more components of the social context (object, tools used, roles, participation structures, and norms). For example, students completing a "warm up" task is a learning activity distinct from students grading homework from the day before; these two activities have different objects, use different tools, and involve students in different roles. Both are subcomponents of a single class session and the instructional unit. I located formative assessment episodes within the constituent learning activities of the observed unit of instruction. I was then able to consider how the social context of different learning activities influenced, or was influenced by, their respective formative assessment episodes. DEMFAP researchers and I collected data for all class sessions within a single instructional unit in both of my case study classrooms. These data included teacher interviews (entrance, pre and post unit, and daily post observation), video/audio recordings of daily mathematics class sessions across the entire unit, in person class session observations, and classroom artifacts (digitized). The pre and post unit interviews in each classroom were structured. The daily post observation interviews were semi structured, with questions based on what had occurred during the class

session preceding the interview. A videographer recorded each class session during the unit and the teacher wore a microphone. In addition to the videographer, a separate researcher was in the classroom capturing observation notes during each class session. Observers captured notes about the social context of each learning activity during each class session. I organized the observation notes by the key components of the social context of the classroom, with an additional column to list formative assessment episodes occurring during each learning activity. Artifacts, including all handouts, pages used from instructional resources, and student work done during each class session, were also collected. Each data source provided information related to both of the study research questions. My general approach to analysis within each classroom, resulting in a case report, included the following steps:

1. Characterize the classroom learning context for the unit overall (unit learning context) based on multiple types of data.

2. For each class session, code the video/audio recordings, observation notes, and transcripts of the post observation interviews (using process codes for every step of each formative assessment episode and descriptive codes for elements of the classroom social context).

3. Develop a class session memo (using coded video and audio recordings and remaining data sources) including the following:
a. Characterize the learning context for each type of learning activity that occurred (activity learning context),
b. Identify formative assessment episodes (based on data about student learning being used and associated data collection, and a discernible learning target being identified), and
c. Locate each formative assessment episode within the learning activity during which it occurred.

4. Create a table of formative assessment episodes (one row per episode); populate the table based on data captured in the class session memos supplemented by additional reference to the primary data sources as needed.

5. Analyze and characterize the formative assessment episodes that occurred across all of the class sessions within the unit, responding to the propositions associated with the first research question.

6. Analyze and characterize the social context of each type of learning activity that occurred across all of the class sessions in the unit, describing how each component of the social context related to the formative assessment episodes occurring within the learning activity, and responding to the propositions associated with the second research question.

After both case reports were developed, I compared the unit and activity learning contexts, formative assessment episodes, and how the activity learning contexts related to the formative assessment episodes across the two classrooms to develop a cross case comparison report. Then I provided a discussion of my results, limitations of the study, and implications for future research.

20 Researcher Perspective For the past ten years, I have been the director of a training center located within a school of education at a university. I established the center at the university to build capacity among in service teachers and educational leaders. Two multi year, multi million dollar grants provided start up financial support for the center at the university, which then focused on promoting “data driven instructional practice” within school districts and teacher preparation programs. Early in the genesis of both projects, it became clear to me that much of the educator training that was available in the marketplace focused on data too distant from the daily decisions of teachers to have a meaningful impact on instructional practice. I learned about the work of Paul Black and Dylan Wiliam in the U.K. after several attempts to make use of existing training resources developed in the U.S. The U.K. work fit the projects’ intended impact on instructional practice better than anything else we had come across. Subsequently, we developed two strands of training, the first for educational leaders, focused on data driven decision making at the school level and the second, for teachers, focused on classroom formative assessment practice. We based the second strand on international research available at that time. Project staff saw amazing results in some classrooms. Learners who started more than a year behind caught up. Teachers transformed the experience of their learners, and learners were able to talk about what it meant to take responsibility and own their learning. We saw no change in other classrooms with the same training and coaching. We didn’t know why or what could account for these differences. This was the genesis

of my interest in better understanding how formative assessment practice works in real classrooms, what attributes of formative assessment practice are most important, and under what conditions it can result in meaningful changes for students. I brought to this study a belief in the power of formative assessment to transform learning experiences for students. I also brought an appreciation for the work of Paul Black and Dylan Wiliam and a respect for the role their efforts have played in moving teachers towards using formative assessment practices in schools in my home state. However, I also brought skepticism regarding implementation of formative assessment practice – something about the practice makes it difficult for many teachers to implement. I believe it is difficult for some educators to make the necessary shifts in their instructional practice, and I wanted to better understand why.

Organization

I organized my dissertation into seven chapters. They include the following: Chapter One: Introduction to the problem, theoretical perspective, and methods; Chapter Two: Review of the literature about the problem and the theoretical perspective, and presentation of the theoretical framework; Chapter Three: Methodology of the study, including research questions and associated propositions, data collection, and data analysis; Chapter Four: Case study report for the first classroom; Chapter Five: Case study report for the second classroom; Chapter Six: Cross case analysis; and Chapter Seven: Results and implications for practice and future research.

22 Chapter II Review of the Literature and Theoretical Framework Formative assessment is learning activity that involves students interacting with one another, the teacher, and learning resources situated within the context of collective classroom activity. Formative assessment shapes social development, distributed learning, and individual cognitive development within the K 12 classroom context. Thus, I build upon socio cultural theories of learning (Brookhart, 2004; Gipps, 1999; Lave & Wenger, 1991; Gee, 2008; Moss, 2008; Torrance & Pryor, 1998, 2001; Vygotsky, 1978; Wells, 1999) to contextualize formative assessment practice within a model of learning to develop the theoretical framework for this study. From a socio cultural perspective, “the present state can be understood only by studying the stages of development that preceded it” (Wells, 1999, p. 5). To understand better the mix and context of formative assessment practices present in K 12 classrooms within the US today, I consider the genesis of formative assessment practice at the cultural level within the US over the past century, define formative assessment, and develop the concept of a formative assessment episode. Then I review empirical research related to the attributes of formative assessment practice associated with improvements in student learning to deepen my description of what effective formative assessment practice includes. Finally, I apply a socio cultural perspective to describe the role of formative assessment in teaching and learning and to further develop my framework and to explain how the social context of a K 12 classroom influences, and is influenced by, formative assessment practice.

Table 2.1

Historical Development of Formative Assessment in the U.S. (entries trace parallel developments in assessment, classroom use of assessment, and learning theory)

1818: Public education becomes widely available in the US.
1850-1900: Formal progress evaluations of student work introduced as measures of the impact of public education.
Early 1900s: Rapid growth of high schools perpetuates a shift to percentage grading.
1905: Binet publishes intelligence tests used to sort students into learning experiences based on ability.
1908: Thorndike publishes achievement tests, using the same formats and scoring as IQ tests, to compare schools.
1912: Reliability of percentage scores as grades questioned.
1918: Most high schools move to a grading scale with fewer categories (A, B, C, D, F).
1920's: Use of standardized, objective achievement tests expands based on the army's success in using them to place recruits during WWI. Behaviorist theories of learning gain momentum in the US; behaviorist learning theories enforce beliefs among teachers that assessment needs to be separate from instruction, uniformly administered, and "objective" to be fair. Vygotsky conducts research in cognitive development, forming the basis of socio cultural learning theories, but it is not available outside of Russia.
1930's: Political rhetoric suggests that assessments should be used to improve learning; tests still developed and used outside the classroom. Grading on a curve is introduced to reflect the same normal distribution as intelligence.
1931: Thorndike claims evaluative feedback (rewards and punishment) facilitates learning.
Early 1960's: Cognitive theories of learning gain prominence with a focus on Piagetian constructivist theories.
1962: Vygotsky's socio cultural learning theoretical research first published in the west.
1967: Scriven introduces formative evaluation as using student achievement data to evaluate school programs and curriculum during rather than after the program.
1971: Benjamin Bloom asserts classroom assessments should be used "to provide students feedback on their learning progress and to guide correction of learning errors."
1977: Bandura proposes Social Learning Theory: 1) people learn through observation; 2) internal mental states are essential to learning; and 3) learning does not always result in behavior change.
1983: Ramaprasad defines feedback as "information about the gap between the actual level and the reference level of a system parameter which is used to alter the gap."
1989: Shift away from pass fail post tests of student mastery to richer assessments of students' understanding; assessments still standardized and developed outside of the classroom. Constructivist influence on instruction gets ahead of assessment; curriculum developers propose alternatives to standardized tests for use in classrooms. The UK Assessment Reform Group is formed to consider the relationship between assessment and learning. Sadler proposes, "Formative assessment is concerned with how judgments about the quality of student responses can be used to shape and improve the student's competence by short circuiting the randomness and inefficiency of trial and error learning."
1998: Black & Wiliam review reports that practices that use assessment formatively produce large learning gains; companion article aimed at practitioners is published.
2000: National Research Council report, How People Learn: Brain, Mind, Experience and School, a summary of the last 40 years of research on learning, emphasizes constructivist and socio cultural learning theories (Bransford, Brown, & Cocking, 2000).
Early 2000's: US vendors of interim assessments make inappropriate claims to evidence that formative assessment improves learning.
2001: National Research Council publishes Knowing What Students Know: The Science and Design of Educational Assessment, mapping new directions for assessment in the US including a greater role for classroom assessment (Pellegrino, Chudowsky, & Glaser, 2001).
2006: Council of Chief State School Officers (2006) develops a common definition of formative assessment.
2009: The Third International Conference on Assessment for Learning proposes another revised definition of formative assessment.

An Historical Derivation of the Practice of Formative Assessment

Historically deriving the current form of the practice of formative assessment in the US includes consideration of the antecedents and practices that are constituents of formative assessment as it occurs within the context of K 12 classrooms in the US. Thus, I consider the development of assessment practice more broadly, evolving classroom uses of assessment (including grading), and related shifts in dominant theories of learning from the beginning of public education in the U.S. Table 2.1 is an abbreviated timeline of developments between the mid 1800's and the present day. Most trace the beginning of public education in the U.S. to the efforts of Horace Mann and Henry Barnard, who advocated for schools paid for by taxes and free to all citizens by the mid 1800's (Public Broadcasting Service, 2001). Associated mandatory attendance laws (passed in all states for the elementary level by 1818) brought greater attention to questions about what students were getting out of going to school. The result was an early classroom use of assessment, to develop formal student progress

27 evaluations, introduced in the form of narrative descriptions of the skills that students had gained. By the early 1900’s, rapid growth of high schools created the need for more efficient mechanisms for providing information about what students were getting out of going to school, resulting in another use of assessment results, to calculate percentages to certify accomplishment in different subject areas. In 1912, the reliability of percentage scores was questioned when a study of English and geometry percentage grades in high school found dramatic variation in teachers’ evaluation of the same student work. By 1918, most high schools moved to a grading scale with fewer categories (A, B, C, D, or F) to reduce the variability in grading if not to address grading subjectivity (Guskey & Bailey, 2001). By the 1930’s the assumption that intelligence was normally distributed in the population began to influence grading. The practice of grading on a curve became more common so that grades would reflect the same distribution as intelligence. It wasn’t until the 1940’s that the assumption that intelligence was normally distributed began to be challenged (Gipps, 1999). This scale (A, B, C, D, F) still dominates grading practices in K 12 contexts in the US today (Marzano, 2006). Use of assessment outside of classrooms, followed a similar progression. In 1905, a French psychologist, Binet, published the first intelligence test designed to and largely used to sort students into learning experiences based on their abilities. “Items of an educational nature were chosen for their effectiveness in distinguishing between children who were judged by their teachers to be bright or dull," (Gipps, 1999). Then in 1908, Thorndike published achievement tests aimed at documenting the need for and

28 setting direction for failing schools. These tests focused on measuring the information gained by those being assessed (rather than their reasoning), and because they were developed in the same time period as early IQ tests, shared the same formats and scoring approach based on discerning individual differences. Their standardized format allowed for comparisons across schools and was seen as a remedy for the unreliability of individual teacher tests (Shepard, 2006). The use of standardized objective achievement tests expanded further after World War I because of the army’s success in using objective tests to place recruits into different roles (Gipps, 1999). While the rhetoric at the time suggested assessment should improve learning this was to be accomplished by district leadership using the results of tests that were developed outside the classroom to shape programmatic improvements rather than by teachers making decisions about changes in their own practice (Shepard, 2006). This use of standardized achievement tests in the U.S. continues through the present day. The expansion of standardized objective testing corresponded with the development of behaviorist learning theories in the U.S. which “conceived of learning as the accumulation of stimulus response associations,” and advocated the use of testing to “ensure mastery before proceeding to the next objective,” (Shepard, 2000, p. 5). Not surprisingly, Thorndike was a leading figure in both objective testing and behaviorist approaches to learning, (Shepard, 2000). Within a behaviorist paradigm, it is important to determine if connections have become habitual. Thus, the role of assessment is to measure typical performance, or the performance a student would be likely to sustain

29 over time (Gipps, 1999). Assessment items that require no judgment on the part of the scorer, such as multiple choice items, are preferable to those in which the scorer plays a role, such as extended response or essay questions. Educators can then use assessment results to provide the appropriate response to learners. The behaviorist learning paradigm helped establish beliefs among teachers that assessment needs to be separate from instruction, uniformly administered, and “objective” to be fair (Shepard, 2000). Within the behaviorist theoretical framework, feedback plays an important role in the learning process. According to Shepard (2006), “one of the oldest findings in psychological research is that feedback facilitates learning,” (p. 631). However, from a behaviorist perspective, the feedback that teachers provide to students should be evaluative; it should be in the form of rewards or encouragement for mastering content, and punishments or discouragement for failing to master content (Tunstall & Gipps, 1996). The notion that assessment or feedback about assessment could be formative came much later. Russian psychologist Lev Vygotsky conducted extensive research on developmental child psychology and cognitive development, forming the basis of socio cultural learning theories, during the same period that behaviorist theories dominated in the U.S. However, his work was not available outside of Russia until the 1960’s. A number of authors give Scriven credit for coining the term formative evaluation in 1967 in a monograph created for the American Educational Research Association (Cizek, 2010; Wiliam, 2010). Scriven introduced this term in the context of

30 using student achievement data as part of evaluating the effectiveness of school programs during rather than just at the end of the program (Cizek, 2010). The idea that assessment results could be used in formative evaluation gained greater recognition when Benjamin Bloom and his associates introduced a related idea in the Handbook of Formative and Summative Evaluation of Student learning published in 1971, (Cizek, 2010). In it, Bloom proposed a solution to the problem of teachers failing to vary instruction even while students came to classrooms with vastly different backgrounds and skills. He suggested that teachers could use classroom assessments as “learning tools, both to provide students feedback on their learning progress and to guide correction of learning errors,” (Guskey, 2010, p. 108). Then Bloom went on to describe how teachers could use assessments formatively as a part of regular classroom routines (Guskey, 2010). This initial description of formative uses of assessment occurred about the same time that cognitive theories of learning became more prominent in the U.S. Constructivism grew out of cognitive psychological research and offers that learning is an individual process of constructing meaning from experience. As Phillips (2000) put it “knowledge is made, not acquired” (p. 7). Constructivist theories of learning, such as those espoused by the Swiss scientist Jean Piaget, began gaining momentum in the 1960’s (Crain, 1992). The influence of constructivist theories on assessment lagged behind the influence on instruction (Shepard, 2000). It was not until the third edition of Educational Measurement published in 1989 that the authors first suggested a shift “away from

31 pass fail post tests of student mastery to richer assessments of students’ understandings and proficiency in a domain” (Shepard, 2006, p. 625). Even then, the authors assumed the assessments should influence classroom decisions, but experts should develop them outside of the classroom and provide them in standardized formats (Shepard, 2006). While the U.S. measurement community continued to focus on large scale standardized assessment, curriculum developers and subject matter experts began to develop alternatives to standardized tests for use in classrooms. These alternatives included informal strategies for collecting information about learning such as checking on reading while reading was happening, observing students in action, teacher questioning, and student journaling (Shepard, 2001). The measurement community outside of the U.S. responded more quickly to changes in learning theory. In 1989, an Australian researcher, Royce Sadler, proposed a widely accepted model of formative assessment (Shepard, 2006; Wiliam, 2010). Sadler(1989) described formative assessment as being “concerned with how judgments about the quality of student responses (performances, pieces, or works) can be used to shape and improve the student's competence by short circuiting the randomness and inefficiency of trial and error learning,” (p. 120). He further asserted that for improvement to occur, the learner has to (a) possess a concept of the standard (or goal, or reference level) being aimed for, (b) compare the actual (or current) level of performance with the standard, and (c) engage in appropriate action which leads to some closure of the gap. During the 1990’s and beyond, researchers continued to build upon Sadler’s model of formative assessment.

32 Also in 1989, in the U.K., the Assessment Reform Group was formed to focus on the relationship between classroom assessment, and teaching and learning, (Shepard, 2006). In 1998, two members of the U.K. Assessment Reform Group, Paul Black and Dylan Wiliam, conducted a review of research on the relationship between formative assessment practice and learning, including studies conducted in the years following Sadler’s 1989 presentation of his model of formative assessment. Black and Wiliam found that innovations that included strengthening practices which used assessment formatively produced learning gains with effect sizes ranging between 0.4 and 0.7. While some have raised methodological questions about the Black and Wiliam (1998) review (see for example Bennett, 2011), interest in formative assessment in the U.S. has grown substantially since the publication of the review and a companion article aimed at practitioners in the Phi Delta Kappan (CCSSO, 2008). This resulted in a considerable increase in studies on the impact of the formative use of assessment on student learning (e.g. Andrade, Du, & Wang, 2008; Chin, 2006; Hattie & Timperley, 2007; Lee & Gavine, 2003; McDonald & Boud, 2003; Rodriguez, 2004; Sebba et al., 2008; Torrance & Pryor, 1998, 2001). However, the definition of formative assessment used in these studies has been far from consistent. With the popularization of the term following the Black and Wiliam 1998 review, in the U.S., vendors of interim or benchmark assessments appropriated the term “formative assessment” as a description of their products, making inappropriate claims to this evidence as supporting the use of their products (Shepard, 2005a).

Concurrent with these developments in classroom use of assessment, and the evolving definition of formative assessment, socio cultural and social learning theories continued to gain influence in the U.S. Lev Vygotsky's socio cultural learning theoretical research was first published outside of Russia in 1962. Then in 1977, Albert Bandura proposed "social learning theory," which asserted that: 1) people can learn through observation; 2) internal mental states are an essential part of this process; and 3) the fact that an individual learns something does not mean his/her behavior will change. In 2000, the U.S. National Research Council published How People Learn: Brain, Mind, Experience and School, a summary of the last 40 years of research on learning (Bransford, Brown, & Cocking). This work emphasized constructivist and socio cultural perspectives over behaviorist perspectives. One year later, the National Research Council published Knowing What Students Know: The Science and Design of Educational Assessment (Pellegrino, Chudowsky, & Glaser, 2001). This publication mapped new directions for assessment in the U.S., based on constructivist and socio cultural learning theories. It identified a larger and more specific role for classroom assessment, representing a greater alignment between the assessment research community in the U.S. and those outside the U.S. This publication also provided a common definition of assessment as "a process by which educators use students' responses to specially created or naturally occurring stimuli to draw inferences about the students' knowledge and skills" (p. 8). In it, the authors specified that assessment included three components: the aspect(s) of student learning that are to be assessed (cognition), the tasks used to collect evidence about students' achievement (observation), and the

34 approach used to analyze and interpret the evidence resulting from the tasks (interpretation). These three components apply to assessment used formatively. Defining formative assessment. In an attempt to remedy confusion related to the definition of formative assessment in the K 12 classroom context, in 2006, the Council of Chief State School Officers (CCSSO, 2008) brought together researchers and state officials to develop a common definition of formative assessment. According to their joint and unanimous effort, “formative assessment is a process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning to improve students’ achievement of intended instructional outcomes” (p. 3). The CCSSO definition is similar to definitions posited later (Black & Wiliam 2009; Nichols, Meyers, & Burling, 2009; Popham, 2008; Shepard, 2009; Third International Conference on Assessment for Learning, 2009) in making a clear distinction between practices that formatively use assessment results and the assessment instruments through which student learning data are collected. Practices are distinct from the more general human activity because a practice is activity that adds meaning and often involves the use of instruments or tools (Wenger, 1998). The practice of formative assessment adds meaning when educators use information collected about student learning to adjust their instruction and/or students use it to adjust their learning tactics (Popham, 2008). The assessment instruments used to generate information about students’ learning are critical tools that support formative assessment practices. The implication is that, “it is the use of an instrument rather than the instrument that must be shown, with evidence, to warrant

35 the claim of formative assessment,” (Shepard, 2009). In other words, determining if formative assessment has occurred requires considering the practices in which students and teachers engage while using assessment results in the formation of learning rather than the instruments used in the process of collecting data (Brookhart, 2009; Nichols, Meyers, & Burling, 2009; Shepard, 2009). A second common thread across the definitions of formative assessment is that assessment results become feedback to teaching and learning. “Information from assessment is fed back within the system and actually used to improve the performance of the system in some way,” (Wiliam & Thompson, 2008). Finally, each definition affirms that formative assessment practices actively involve students. “One of the critical aspects of formative assessment is that both students and teachers participate in generating and using the assessment information,” (Brookhart, 2009, p. 1). According to Black and Wiliam (1998), the two concepts, formative assessment and feedback, overlap significantly. This is evident in the definition of formative assessment, which describes it as a process that provides feedback (CCSSO, 2008). According to Ramaprasad (1983), feedback is “information about the gap between the actual level and the reference level of a system parameter which is used to alter the gap in some way" (p. 4). Ramaprasad asserts that feedback depends on information about the reference level (how good is good enough) and the actual level of the “system parameter” both being present, and comparable. This roughly corresponds with the first component of assessment identified in the National Research Council publication, Knowing What Students Know: The Science and Design of Educational Assessment


36 (Pellegrino, Chudowsky, & Glaser, 2001). According to that publication, the first component of assessment is the aspect of student learning to be measured. For formative assessment to be a process that provides feedback, both the reference and actual level of a system parameter must be present. Also according to Ramaprasad, in the context of education, the “system parameter” could include disposition (affect), effort, self regulation, meta cognitive process, or the actual response (performance). In the case of formative assessment, the system parameter is student learning or performance. Ramaprasad also established a condition for feedback to be formative; he asserted that if information about the gap between the reference and actual values of the system parameter is only stored or reported and not used to alter the gap then it is not feedback. This reinforces the idea that assessment practice is only formative if the information gained from assessment is used to improve teaching and learning. Black and Wiliam (1998) built on Ramaprasad’s definition in describing formative feedback as including: 1) information about the actual level of some measurable attribute; 2) information about the reference level of that attribute, 3) a mechanism for comparing the two levels and generating information about the gap, and 4) a mechanism for making use of the information to alter the gap. Applied to the definition of formative assessment, this suggests that formative assessment depends on information about the actual and reference level of student performance, a mechanism for comparing the two levels to generate information about any gaps, and a mechanism for using that information to alter the gap.


Sadler (1989) extended Ramaprasad's definition of feedback by emphasizing the importance of the student understanding the reference parameter: "The indispensable conditions for improvement are that the student comes to hold a concept of quality roughly similar to that held by the teacher . . ." (p. 121). He asserted that students should understand the target or goal for their learning. My definition of formative assessment incorporates currently used definitions of the term and efforts by several authors to clarify what counts as formative feedback.

Defining formative assessment episodes. Operationalizing the components of the definition of formative assessment within an actual classroom context requires distinguishing between occurrences of formative assessment and occurrences of other instructional practices also found in classrooms. Thus, in collaboration with my DEMFAP colleagues, I built upon others (Airasian, 1991; Bell & Cowie, 2001; Mavrommatis, 1997) who have described and used the concept of an "assessment episode" in their analysis of classroom practice to derive a definition of a formative assessment episode. Mavrommatis (1997) identified four "phases" of an assessment episode: evidence collection (using a variety of methods from informal to formal), evidence interpretation (comparison of evidence to a desired standard), teacher response, and impact on students. These phases roughly parallel the components of formative assessment as described by Bell and Cowie (2001): (1) gathering information about learning, (2) analyzing/interpreting the gathered information about learning, and (3) acting on/using this information to improve student learning. Neither Mavrommatis nor Bell and Cowie explicitly include a "reference


parameter" in their definition of the components of an assessment episode or their explication of how analysis and interpretation occurs.

I built on several sources to define a formative assessment episode as including the following elements: 1) Identifying (explicitly or implicitly) learning target(s) and clarifying them with students, 2) Collecting data about the actual level of student learning in reference to the learning target (through formal or informal methods), 3) Analyzing student learning data (in reference to the learning target), 4) Interpreting student learning data (in reference to teacher practice and/or student learning needs), and 5) the teacher and/or student(s) taking action, based on the interpretation, to improve student learning and/or instructional practice. To be formative, assessment episodes must include the last step – the teacher and/or student(s) using the information to improve learning and teaching. Examples of instructional practices that include some of these elements but are not formative assessment episodes include: the teacher administering an end of unit test, scoring, grading, and returning the test to students and using the score in calculating an end of term grade; or the teacher collecting data about student learning (e.g., a homework assignment) but not relating the data to a learning target. Figure 2.1 depicts the elements of a formative assessment episode as a continuous cycle of actions that can be considered from both the teacher and student perspective. The learning target or reference parameter is in the middle of the cycle because it establishes the context for the other steps in the cycle. This historically derived depiction of a formative assessment episode forms the first piece of the


theoretical framework for this study. I identify formative assessment practices in situ as episodes that include the actions depicted in Figure 2.1.

[Figure 2.1. The actions included in a Formative Assessment Episode. The figure depicts a continuous cycle of Identifying Learning Target(s), Collecting Data, Analyzing Data, Interpreting Data, and Using Data.]

Empirical Research on the Attributes of Formative Assessment Practice

Following publication of the Black and Wiliam (1998) review and the subsequent alignment of definitions of formative assessment between the U.S. and international assessment research community, research on formative assessment continued to expand, adding to the literature related to the attributes of formative assessment practice most likely to result in improvements in student learning. In their 1998 review, Black and Wiliam identified six different formative uses of student learning data for which they found evidence of improvements in student learning outcomes. These


included: 1) the teacher sharing the criteria for evaluating learning with their students, 2) the teacher providing descriptive (as opposed to evaluative) feedback to students, 3) students engaging in self assessment, 4) student to student peer assessment, 5) the teacher using oral questioning to learn about and form learning, and 6) the teacher using student learning data to adjust instructional practice.

A number of researchers (Brookhart, 2008; Cho & MacArthur, 2010; Hattie & Timperley, 2007; Li, Yin, Ruiz Primo & Morozov, 2011; McDonald & Boud, 2003; Orsmond, Merry, & Callaghan, 2004; Pointe et al., 2009; Ross et al., 1999, 2002; Sebba et al., 2008; Zundert et al., 2010) focused more narrowly on one or more formative uses of student learning data. This literature base contributes to my definition of formative assessment by providing additional evidence regarding the attributes of different formative uses of student learning data that may be important. The use of student learning data with the most substantial research base is the teacher providing formative feedback to students, with much of the research predating the Black and Wiliam (1998a) review. I provide a more extensive review of the literature related to critical attributes of formative feedback below. Then I provide abbreviated summaries of literature related to the critical attributes of student self assessment, student to student peer assessment, and formative questioning. Finally, I summarize some of the more recent literature on teachers using assessment formatively to adjust their instructional practice. In my model, I characterize the teacher sharing the criteria for evaluating learning with their students as a step in the process of formative assessment, rather than a use of student learning data. Several of the studies reviewed below


consider sharing learning criteria with their students, or the student and the teacher coming to a joint understanding of the learning criteria, as part of different uses.

Formative feedback. Given that feedback has been described as one of the most powerful influences enhancing achievement (Hattie, 2009), it is surprising how few recent meta analyses and empirical studies have confirmed a positive relationship between feedback and student learning (Li, Yin, Ruiz Primo & Morozov, 2011). Numerous studies have demonstrated a negative or no relationship between feedback and student learning (Bangert Drowns, Kulik, Kulik & Morgan, 1991; Kluger & DeNisi, 1996; Li, Yin, Ruiz Primo & Morozov, 2011; Morozov, Yin, Li, & Ruiz Primo, 2010; Wadell, 2004), pointing to the importance of specifically considering the attributes of feedback, the context in which the feedback was provided, and, some argue, the characteristics of learners, any of which could mediate the relationship between feedback and learning. More recent (since 1991) meta analyses, empirical studies, and research syntheses focused on the relationship between feedback and student learning provide some clues regarding the attributes of feedback most likely to impact student learning.

Meta analyses. Kluger and DeNisi conducted a seminal review and meta analysis of the effectiveness of feedback in improving learning in 1996 (Black & Wiliam, 1998). Kluger and DeNisi's meta analysis considered "action(s) taken by (an) external agent(s) to provide information regarding some aspect(s) of task performance" (p. 255). The authors estimated 607 effect sizes from studies of feedback meeting this definition, with an average effect size of 0.41. Probably the most remarkable finding of this meta


analysis was that over 38% of the effect sizes they calculated were negative. They proposed a feedback intervention model to identify mediators of the effect of feedback on student performance, with three key findings emerging from their meta analysis: 1) feedback that provides information about changes from prior performance and/or that supplies the correct solution improves effects on student performance; 2) feedback that directs attention to the self (including both criticism and praise) and/or that references others reduces effects on student performance; and 3) whether the feedback is negative or positive is not significant.

Bangert Drowns, Kulik, Kulik and Morgan (1991) focused on information provided to students through written text or a computer that was in response to a formal event (like a test) and that addressed achievement. This review suggested that feedback in more typical classroom settings can be more effective than that provided in computer based settings, and that feedback which tells learners whether or not their answers were correct is the most effective.

Li et al. (2011) defined feedback as "information provided by teachers (or peers) to students in order to reduce the difference between their current understanding and what is expected in their performance . . . the essence of [which] lies in its formative role in shaping subsequent learning actions" (Morozov et al., 2011, p. 1). In their meta analysis of the relationship between feedback and learning in mathematics, Li et al. (2011) applied strict inclusion criteria to select empirical studies of feedback between 1988 and 2010. This left them with only 18 papers that included a total of 33 studies from which they calculated effect sizes. They found statistically


significant positive impacts on student learning when the feedback was about the content of what students were learning (three studies); and even when the only feedback provided was how well the students had performed, learning outcomes increased in comparison to no feedback at all.

Taken together, these meta analyses provide some clues about the attributes of formative feedback that can mediate the effects of the feedback on student learning. Within the context of typical classroom instruction, feedback is effective in improving performance if it provides knowledge of results (Bangert Drowns et al., 1991; Li et al., 2011), or if it provides information about changes from prior performance or supplies the correct solution (Kluger & DeNisi, 1996; Li et al., 2011). Feedback is less helpful if it directs attention to the student him/herself (such as criticism or praise) or references the performance of other students (Bangert Drowns et al., 1991; Kluger & DeNisi, 1996).

Empirical studies. Additional empirical studies further clarify the attributes of feedback that mediate its impact on learning. Three studies by Butler, one including a co author (Butler, 1987, 1988; Butler & Nisan, 1986), and two more recent studies (Lipnevich & Smith, 2008; Pointe et al., 2009) considered the relationship between normative feedback (or grades) and student learning in comparison to feedback that was not normative. These studies were consistent with the meta analyses (Bangert Drowns et al., 1991; Kluger & DeNisi, 1996) in finding that normative feedback was not associated with improvements in performance. Butler (1987, 1988) described negative effects of normative comments or normative grades. In general, grades and grades plus comments had similar and generally undermining effects on performance. However,


44 high achievers who received grades maintained high interest and performed well on additional tasks when they anticipated further grades. Pointe et al. (2009) found that the students of teachers who reported that they provided more detailed feedback (not grades, but rather explanations about where students made mistakes and how to address them) had higher than expected performance on the Biology AP exam (when PSAT scores were used to predict performance). Lipnevich and Smith (2008) found that students who received descriptive feedback performed significantly better than those who did not receive feedback; students who received a grade and praise, scored higher subsequently than students who received a grade but no praise; and in the absence of detailed feedback, students who received a grade did somewhat better than those who did not. Other empirical studies provide insight into the relationship between praise and student performance. Dweck (2000) found that students who received what might seem like the most ego boosting forms of praise were at a clear disadvantage when it came to later coping with additional challenges, and students whose positive feedback focused on their effort or their strategy were better prepared to cope with later obstacles. Follow up studies with older children extended these findings with the negative effects of person oriented feedback showing up even before subsequent failure. Elwar and Corno (1995) found that students who received “specific comments on errors and faulty strategy tempered by suggestions on how to improve, plus at least one positive remark on work done well” (p. 164) had statistically significantly higher scores on the post tests than students who received no feedback and that these positive effects on student


learning were present across student ability levels. Both the Dweck (2000) and Elwar and Corno (1995) studies suggest that praise results in higher student performance only as long as it is directed towards effort and not ability.

What emerges from the empirical literature (meta analyses and individual empirical studies) regarding the relationship between feedback and student learning is a messy picture of which feedback attributes actually make a difference. However, some findings are consistent. As long as the feedback is not normative (e.g., it does not compare student performance to that of other students) and doesn't reference fixed characteristics of the student, some feedback increases performance over no feedback, even if the feedback is just "knowledge of results." Feedback directed at the individual or fixed attributes of the individual has a negative effect on performance. This includes praise. Finally, feedback that includes information about the process or strategy in which students are engaging improves both performance and motivation.

Research syntheses. Research syntheses further explicate the attributes of feedback that improve student learning (Black & Wiliam, 1998; Brookhart, 2008; Butler & Winne, 1995; Hattie & Timperley, 2007, 2002; Nicol & Macfarlane Dick, 2006; Shute, 2008). Several provide models of the relationship between feedback and learning and, in some cases, build upon one another. Black and Wiliam (1998) took the feedback intervention theory proposed by Kluger and DeNisi (1996) as a starting point for their explanation of the relationship between feedback and student learning. They expanded upon the studies that assert "feedback interventions that cue individuals to direct attention to the self rather than


the task appear to be likely to have negative effects on performance" (p. 49), summarizing with the idea that teacher feedback should encourage students to believe that "success is due to internal, unstable, specific factors such as effort, rather than stable general factors such as ability (internal) or whether one is positively regarded by the teacher (external)" (p. 51).

Butler and Winne (1995) developed a theoretical model for the role of feedback in self regulation and then re examined earlier feedback studies to explain their findings about feedback's effects on learning. They asserted that researchers must consider "how feedback mediates performance through a series of recursively linked self regulatory cognitive engagements" (p. 255). The concept of self regulation helps explain what happens while students are engaged in a task; thus, monitoring mid task is key to self regulation. Butler and Winne (1995) proposed functions for feedback in addressing both student difficulties in implementing tactics needed to complete tasks and difficulties in monitoring their engagement in the task. They asserted that external feedback is effective if it improves students' knowledge or their processes. Table 2.2 includes their description of the relationship between specific student difficulties and feedback's potential roles in addressing each difficulty, simplified from the original (Butler & Winne, 1995, p. 267). This model provides specific suggestions regarding how teachers should aim their feedback, while assuming feedback occurs during the learning process.


Table 2.2
The Role of Feedback in Student Self Regulation (simplified from the original)

Student difficulty related to implementing strategies | Student difficulty related to monitoring | Feedback's potential role in developing knowledge | Feedback's potential role in improving self regulation
Fails to recognize task conditions that cue strategy use | | Add to or correct knowledge about task | Analyze tasks and set goals
Misperceives task conditions and selects the wrong strategies | Sets inappropriate criteria for judging performance | Reshape knowledge about task and improve knowledge about strategies | Analyze tasks, set goals, select strategies, monitor
| Doesn't recognize the relationship between task conditions and performance | Reshape knowledge about task and improve knowledge about strategies | Analyze tasks, set goals, monitor
Experiences problems executing selected strategies | Is overly challenged by cognitive demands during monitoring | Tune knowledge about tasks (combining or chunking); tune and automate knowledge about strategies | Select and implement strategies, monitor
Provides too little effort to deploy strategies | Lacks motivation to monitor actions and results | Address action control strategies, motivational beliefs, or epistemological beliefs | Select and implement action control strategies, monitor

Nicol and Macfarlane Dick (2006) built upon the work of Butler and Winne (1995), positioning the research on feedback within a model of self regulated learning, although their analysis and resulting recommendations were much more general. They conceptualized their model partially within a socio cultural perspective, identifying seven principles of feedback practice that facilitate learner self regulation. According to


Nicol and Macfarlane Dick, effective feedback: "helps clarify what good performance is (providing goals, criteria, expected standards), facilitates the development of self assessment (reflection) in learning, delivers high quality information to students about their learning, encourages teacher and peer dialogue around learning, encourages positive motivational beliefs and self esteem, provides opportunities to close the gap between current and desired performance, and provides information to teachers that can be used to help shape teaching" (p. 205).

Hattie and Timperley (2007) also applied a model of feedback that builds upon those proposed by Kluger and DeNisi (1996) and Butler and Winne (1995) to help explain the "circumstances under which feedback has the greatest impact" (p. 81). They proposed that the effectiveness of feedback is determined by the level at which the feedback helps to answer three questions: "Where am I going? (What are the goals?) How am I going? (What progress is being made toward the goal?) and Where to next? (What activities need to be undertaken to make better progress?)" (p. 86). By level they meant the following: task performance, process of understanding how to do a task, regulatory or meta cognitive process level, and/or the self or personal level.

According to Hattie and Timperley (2007), feedback focused at the task level is most effective if it helps students reject erroneous hypotheses, provides cues as to directions for searching and strategizing, and is not too complex. At this level, teachers can provide feedback to individuals or groups of students. Task level feedback is more effective if provided immediately (timing) after completion of the task, and can be positive or negative as long as the negative feedback includes information about how to


correct mistakes. Feedback focused at the process level should relate to student construction of meaning and strategies for error detection, should cue student searching for information and use of effective strategies, and should assist students in rejecting erroneous hypotheses. Process level feedback is more effective than task level feedback. Students' capability to create internal feedback for self assessment and their willingness to invest effort into seeking and responding to feedback mediate feedback focused at the self regulation level. The student's degree of confidence in the correctness of his/her response, how he/she attributes success or failure, and his/her level of proficiency at seeking help all influence this level of feedback. Finally, feedback focused at the self or personal level is not effective. Hattie and Timperley (2007) also confirm several findings presented earlier, including that rewards should not be considered feedback at all, praise (unconnected to the task) is not helpful in improving performance or understanding (even though students like it), and feedback that attributes performance to effort may increase performance.

Two more recent reviews, by Brookhart (2008) and Shute (2008), are best characterized as lists of characteristics or attributes of effective feedback. Brookhart's (2008) review, published as a guide for practitioners on how to give effective feedback, brings together much of the same research already cited to provide recommendations to practitioners about how to provide effective feedback based on the attributes of the feedback message content and the delivery strategy. Her dimensions of feedback content include the following: focus (the four categories identified by Hattie & Timperley, 2007), basis for comparison (criteria, other students, past performance),


function (descriptive or evaluative), valence (positive or negative), clarity to student (vocabulary, amount), specificity, and tone (implications and word choice). She made recommendations for each dimension about what effective feedback messages include, consistent with the literature cited already. Brookhart also made recommendations regarding the delivery strategy for the feedback, including the following: determine the timing of feedback based on the type of learning goal but never delay feedback beyond when it would make a difference to the learning, prioritize the points upon which to provide feedback by choosing those related to major learning goals and considering learners' developmental levels, select the best mode for the message (oral, written, or demonstration) based on the content of the feedback, use interactive (oral) feedback when possible, provide feedback to individual students to indicate that that student's learning is valued, and provide group/class feedback if most of the class missed the same concept.

Shute's (2008) review focuses more narrowly on task level feedback. Like Brookhart (2008), Shute summarizes recommendations regarding what makes feedback effective. Her recommendations included: "Focus feedback on the task, not the learner; Provide elaborated feedback to enhance learning; Present elaborated feedback in manageable units; Be specific and clear with feedback message; Keep feedback as simple as possible, but no simpler (based on learning needs and instructional constraints); Reduce uncertainty between performance and goals; Give unbiased, objective feedback, written or via computer; Promote a learning goal orientation via feedback; and Provide feedback after learners have attempted a solution" (pp. 177-178).

Student self assessment. The literature on student self assessment and the attributes of the practice that make it most effective is much less substantial than that related to feedback. I report


here on empirical studies that point to possibly important attributes of self assessment, and a summary review by one of the authors.

In two different studies, Ross et al. considered the impact on student learning of training students how to self assess in writing (1999) and in mathematics (2002). They found that training increased student accuracy in self assessment and improved student learning outcomes, although the improvements were greater in math than in writing. They concluded that subject matter makes a difference with regard to the effectiveness of self assessment. In their 1999 study, they also identified joint development and use of criteria between the teacher and students as critical to the success of self assessment, and asserted that student self assessment could take the place of other assessment data. Their 2002 study on training on self assessment in math classrooms, building on the 1999 study, included the following processes: (a) involve students in defining evaluation criteria, (b) teach students how to apply the criteria, (c) give students feedback on their self evaluations, and (d) help students use evaluation data to develop action plans. These four elements are potentially critical attributes of student self assessment.

McDonald and Boud (2003) provided 12 training modules to Caribbean teachers in how to introduce self assessment practices to their students and investigated the impact on their students' learning. They found that students who received self assessment training outperformed their peers in every content area. While the authors didn't provide detailed information about the attributes of self assessment practice supported by their training, their general description of the content of their training


52 sessions provides clues regarding attributes of the practice on which they focused. They reported their training included the following content: “teachers constructing, validating, applying and evaluating criteria to apply to students’ work and . students making reasoned choices, to assess responses to questions by applying given criteria, to write a variety of question types, to allocate marks to such questions, to evaluate their work and to make and use self assessment activities of their own,” (p. 213). Again, the characteristics of the training that resulted in significant student learning improvements point to possible important attributes of self assessment practice. In a 2008 review of research on self assessment and peer assessment in secondary schools, Sebba et al. reported evidence of positive effects on student attainment (9 out of 15 studies), self esteem (7 out of 9 studies) and engagement with learning (17 out of 20 studies). Across these studies, they identified the following conditions that affect the impact of self or peer assessment: teacher commitment to learners having control over the process, the teacher discussing learning to develop effective student feedback, and a move from a dependent to an interdependent relationship between teacher and students that enables teachers to adjust their teaching in response to student feedback. Although they found no relationship between students owning the process and positive outcomes, they recommended that teachers involve students in ‘co designing’ the criteria for evaluation. In addition, seven of the studies they reviewed (including those reported on above) identified student training in self assessment as important to its success.


53 Andrade, Du, and Wang (2008) investigated the impact of self assessment on 3rd and 4th grade students’ writing. They used a matched comparison design to identify the impact of providing students model papers to generate criteria and engaging students in using rubrics as part of their self assessment practice. They found a statistically significant, positive correlation between both strategies and students’ essay scores controlling for previous achievement. In a review of the conditions leading to success with self assessment, Andrade and Valtcheva (2009) defined self assessment as “a process of formative assessment during which students reflect on the quality of their work, judge the degree to which it reflects explicitly stated goals or criteria, and revise accordingly,” (p. 13). Building on Ross et al. (2002), they provided a summary of attributes of self assessment associated with positive gains in student learning that summarize many of the studies presented here. These attributes include the following: “define the criteria by which students assess their work, teach students how to apply the criteria, give students feedback on their self assessments, give students help in using self assessment data to improve performance, provide sufficient time for revision after self assessment, and do not turn self assessment into self evaluation by counting it toward a grade” (p. 17). Student peer assessment. The literature focused on the most effective attributes of peer assessment is less well developed than research on other formative uses of student learning data. Over time, many researchers focused on peer grading and the alignment of peer provided grades with teacher provided grades, a summative rather than a formative use of


student learning data. This definition of peer assessment falls short of the conceptualization of peer assessment as students providing formative feedback to one another as proposed by Black and Wiliam (1998). According to Van Zundert, Sluijsmans, and Merriënboer (2010), research on peer assessment practices "has hardly evolved beyond a summative and quantitative view of peer assessment with a strong reliance on scoring and grading" (p. 266).

In their review of research on peer assessment in higher education, Gielen, Dochy, Onghena, Struyven, and Smeets (2011) asserted that determining the quality of peer assessment practice depends upon educators' purpose in engaging learners in the practice. They identified five "goals" for peer assessment, only three of which could be considered formative uses of student learning data. The three included the following: using peer assessment as a learning tool for those receiving the feedback, using peer assessment to develop student self assessment skills, and using peer assessment to actively engage students in their learning. While the authors proposed how to measure quality for each of these three goals, they offered that only the first has been the focus of empirical research. They also asserted that when the goal of peer assessment is to use it as a learning tool, measuring quality depended on determining the effects on student learning within a particular content area and that quality criteria thus vary by content area (e.g., just as learning goals in mathematics vary from those in writing). Thus, studies conducted in one content area on the quality criteria for peer assessment may not apply to another content area.


55 Currently, most empirical studies focus on peer assessment in post secondary educational contexts, rather than K 12, and almost exclusively on peer assessment in the context of writing (Cho & MacArthur, 2010; Orsmond, Merry, & Callaghan, 2004; Topping, 1998, 2010; Zundert et al., 2010). Despite these limitations, I report on a few empirical studies that have provided evidence regarding critical attributes of peer assessment used as a learning tool that impact student performance and may have application within the context of K 12 math instruction. In their examination of the impact of peer assessment on student writing performance in post secondary contexts, Orsmond, Merry, and Callaghan (2004) and Cho and MacArthur (2010) found that feedback from multiple peers improved the quality of subsequent writing drafts because this resulted in students making more complex revisions to their writing (e.g. revisions that clarified meaning at the sentence or paragraph level). Orsmond et al., (2004) compared the content of peer feedback in these instances across four categories: directive, non directive, praise, and criticism. They found that students making complex revisions to their writing subsequent to peer assessment was associated with the content of the peer feedback being non directive (feedback that included nonspecific observations that could apply to any paper). In a special issue of the journal Learning and Instruction that focused on peer feedback Topping (2010) summarized the lessons learned from six empirical studies focused on critical attributes of peer feedback leading to improvements in student learning outcomes. Researchers conducted four of the six in post secondary educational contexts. Topping asserted the following:


Peer feedback content that is non directive and/or includes justification (especially with students who have fewer skills) leads to better student performance outcomes;
Peer assessment leading to more elaborate feedback is only effective if it is non directive;
Peer assessment leading to grades is not effective;
Student training in peer assessment that includes modeling and observation results in more effective peer feedback;
The attributes of effective peer feedback depend upon the degree to which students have previously participated in peer assessment; more experience may lead to increased acceptance and use of peer feedback whether or not it includes justification; and
Finally, if the difference in competence between the student providing and the student receiving peer feedback is large, the student receiving may reject more elaborate peer feedback; if the difference is small, students may consider elaborate feedback as sharing ideas.

One of the studies summarized by Topping (2010) bears additional mention because it focused on a K 12 context. Gielen et al. (2010) used a quasi experimental design to investigate the impact on learning of various characteristics of peer feedback with seventh grade Dutch students in writing. They paired students with similar ability to provide feedback over various writing tasks. The study tested the use of a specific feedback form completed by the student receiving the peer feedback after the peer feedback event, as well as evaluating the impact of different qualities in the feedback


message. They found that "clear formulation, and presence of suggestions, did not have a significant impact on performance improvement . . . and the only characteristic with a significant impact on performance, namely justification, is also among the most difficult to teach" (p. 312). They concluded that it is more important for the student providing the peer feedback to justify their feedback than to be accurate in their comments, and that training students to explain their feedback may have as great an impact on performance as ensuring student comments are accurate.

The current research base on peer assessment is somewhat limited with regard to what it contributes to this study. Researchers have focused on higher education settings and writing instruction, while my study focused on a K 12 mathematics context. In spite of these limitations, I consider the following as potentially important attributes of peer assessment that emerge from this research: how students are paired for peer assessment matters, more experience with peer assessment may make students more likely to use the peer assessment, providing students with training in how to conduct peer assessment can improve the practice, students should not just provide suggestions about the other students' work but should explain their suggestions, and feedback that is non directive (could be applied to many different similar tasks) may be more effective.

Formative questioning. Within the context of formative assessment, questioning is both a data collection method and a type of data use. Here, however, my focus is on research related to the attributes of questioning as a formative data use. According to Ruiz Primo (2011), while


there are a number of studies focused on classroom discourse and questioning, "few focus on how the quality of these interactions influences students' learning" (p. 23). She also argues that the nature of the practice and the cost of researching it may limit generalizability of empirical studies. Research studies that examine the quality of interactions necessarily involve video recording individual teacher to student and student to student interactions within classrooms. As a result, many have relatively small sample sizes – involving only a few teachers/classrooms. Several authors (Chin, 2006, 2007; Mortimer & Scott, 2003; Ruiz Primo & Furtak, 2004, 2006; Ruiz Primo, 2011) have considered formative use as part of a questioning cycle with small numbers of teachers. Although the results of these studies may not be broadly generalizable, they help clarify potentially important attributes of formative questioning.

Chin (2006, 2007) investigated question based discourse in the context of science instruction. In her 2007 study, she made a distinction between traditional uses of questioning used to evaluate what students know and "constructivist based" questioning used "to elicit what students think (such as their explanations and predictions, especially if these are different from what scientists think), encourage them to elaborate on their previous answers and ideas, and to help students construct conceptual knowledge" (2007, p. 818). According to Chin (2007), more traditional questioning follows a sequence of actions described by others as IRE: the teacher initiates by asking a question (I), the student responds (R), and the teacher evaluates (E) the correctness of the student response. A variation of traditional questioning is an IRF sequence, in which the follow up by the teacher (F) is not explicitly evaluative. This


traditional approach to questioning doesn't include the teacher making adjustments along the way. The constructivist or inquiry type of questioning follows a different sequence that Chin, following others, described as IRFRF. In this sequence, student responses are followed by teacher feedback or action that elicits an additional student response, then more feedback, and so on. This approach does include the teacher making adjustments. Chin's constructivist or inquiry type of questioning more closely describes questioning as a formative use of data. Chin investigated teacher questioning patterns in six seventh grade science classrooms for six lessons each to develop a typology of teacher questioning strategies that "stimulate productive thinking" (p. 823). The attributes she identified of this type of questioning suggest potentially critical attributes of formative questioning. They include the following: ". . . teachers elicited responses from different students that progressively added more information to existing ones contributing to a growing framework of ideas; asked questions in a progressive way that enabled students to gradually ascend to higher levels of knowledge and understanding; reiterated students' responses following their questions not only to affirming them, but also to making the ideas available to the full class; used students' responses as a platform for further inquiry to lead students towards the targeted learning goals; and bridged the cognitive gap between the questions they asked and the knowledge base of their students" (p. 823).

Ruiz Primo and Furtak (2004, 2007) studied assessment conversations occurring in three science classrooms. They described assessment conversations as an example of informal formative assessment that involved a sequence similar to that described by Chin (2006, 2007). Their sequence (ESRU) included: the teacher eliciting information by asking a question (E), the student responding (S), the teacher recognizing the student response (R), and the teacher using the information to improve student learning (U).


60 They clarified how their ESRU sequence was different from the traditional IRF questioning sequence, in part based on defining the “Use” as different from the “Feedback” in the IRF sequence. They defined “use” as including the teacher engaging in one or more of the following: “provide students with specific information on actions they may take to reach learning goals, ask another question that challenges or redirects the students’ thinking, model communication, promote the exploration and contrast of students’ ideas, make connections between new ideas and familiar ones, recognize a student’s contribution with respect to the topic under discussion, or increase the difficulty of the task at hand,” (p.61). Ruiz Primo and Furtak developed a coding structure to characterize different strategies teachers used in each of the steps in this cycle and identified an approach to defining when the ESRU cycle was complete (i.e. the teachers used the informally collected student learning data). They found that their coding approach could differentiate the quality of the assessment conversations occurring in the three science classrooms, that they could identify incomplete ESRU cycles, and that the teacher with more completed ESRU cycles had students with higher performance. Their coding approach and their findings related to impact on student learning outcomes suggest some potentially important characteristics of formative questioning. First, their efforts confirm the importance of completing the cycle, that teacher questioning includes somehow using the information gained. Second, they identified some key attributes related to the different steps in this cycle including the following: completed cycles involved consecutive interactions with the same students, the types of questions asked to elicit information from students varied and pertained


61 specifically to the science content, teachers used repeating and re voicing most frequently to recognize student responses, teachers used a strategy of comparing and contrasting student responses less frequently but the most frequent user of this strategy was the teacher with the best student outcomes, and two of the teachers provided helpful feedback again with the most frequent user of this strategy the teacher with the best outcomes. Some of the strategies other studies have pointed to as likely to improve student outcomes were not present in the classrooms studied by Ruiz Primo and Furtak. Black and Wiliam joined with three additional researchers to conduct a follow up to their 1998 review in six middle level schools (students ages 11 15) in math and science classrooms, with results published as a book for practitioners in 2003. Teachers’ formative use of questioning was one of the formative practices upon which this work focused. Across all of the formative practices included in this study, the researchers found evidence of an impact on student learning outcomes, although the specific impact of formative questioning was not isolated. Black et al. (1998) introduced teachers to the following features of questioning used formatively: provide students “wait time” or a gap between asking a question and prompting for a response, use open ended questions (those without one correct response), plan in advance questions that have the potential to promote thinking and discourse. The authors worked with teachers to plan what questions they would use and anticipate student responses as a means of identifying meaningful questions. The authors reported a number of changes in the questioning practices within the classrooms in which they worked including teachers


doing the following: using incorrect answers (in homework or class work) as discussion points; asking questions that encourage students to explore answers together, such as those that challenge misconceptions, explore ambiguity, or create conflict; expecting all students to be willing to answer any question even if the answer is that they don't know; establishing a climate where students are comfortable giving incorrect answers because they know incorrect answers are useful; and using information gained from questioning to plan the next steps in learning.

Across the studies reviewed here, several potentially important attributes of formative questioning emerge. First, questioning becomes formative when it includes a step beyond the teacher eliciting or initiating by asking a question, the student responding, and the teacher evaluating the correctness of the response. What I am calling formative questioning, and other authors have called constructivist questioning or assessment conversations, includes additional interaction(s) that involve the teacher and/or student(s) somehow using the information. When this additional step is included, empirical evidence from studies with small sample sizes indicates a positive impact on student learning outcomes (Chin, 2006, 2007; Ruiz Primo & Furtak, 2004, 2008). Second, investigation of the details of the interactions between teachers and students across these studies suggests several potentially important attributes of formative questioning, including: elicit responses from different students to add more information; consider coming back to the same student for multiple interactions; ask progressive questions that build conceptual understanding; reiterate (repeat or re voice) student responses not only to affirm them, but to make them available to other students; compare and


contrast multiple student responses; use incorrect responses as discussion points; expect all students to be ready to answer any question even if the answer is "I don't know"; provide instructional scaffolding, as part of questioning, to bridge gaps in student understanding; provide specific actionable feedback regarding student responses; establish a climate where students believe that incorrect answers are useful; and use information gained to plan next steps.

Adjusting instruction. In recent years, several research teams have studied interventions intended to facilitate teacher formative assessment practice that included some focus on teacher instructional adjustments based on student learning data (Black et al., 2005; Frohbieter et al., 2011; Herman et al., 2011; Shavelson et al., 2008), with mixed results. In the U.S., Shavelson et al. (2008) developed assessment tasks that teachers could use formatively as part of a commonly used middle level science program. They embedded the tasks at critical points in the unit "where an important sub goal should have been reached before students go on to the next lesson" (p. 301). They provided teachers with training and guiding documents regarding how to use the tasks to "check student understanding at key points during instruction and reflect on the next steps needed to move students forward in their learning" (Furtak et al., 2008, p. 363). Then they evaluated the impact on teaching and learning in a small randomized trial with 12 teacher participants (6 experimental and 6 controls) identified as experts in the program. They found that most of the teachers in the experimental group did not implement a number of the suggested instructional responses to the data generated by


the embedded assessments (Furtak et al., 2008) and that being in the experimental group did not have a significant influence on students' achievement compared to the control group that used the program without the embedded assessments (Yi et al., 2008). The authors concluded that embedding assessment tasks designed for formative use, even with some training and guidance, was not enough for teachers to consistently use these data to adjust their instructional practice. They also raised questions regarding teacher skill and ability to use assessment resources available to them, suggesting that additional teacher capacity building may be necessary to realize the promise of formative assessment.

Herman, Osmundson, and Silver (2010) and Herman et al. (2011) focused on the relationship between teacher content pedagogical knowledge, their formative assessment practice, and student learning outcomes as part of a larger study of the effects of implementing curriculum based assessments in a hands on science program in 39 upper elementary classrooms. To measure teacher content pedagogical knowledge, these researchers asked teachers to self report and separately administered performance tasks to the teachers that required them to analyze and interpret student responses to tasks from the science program. They used teachers' weekly practice logs to measure their use of data to adjust instruction. Herman et al. (2011) found a significant positive relationship between teachers' formative assessment use and students' performance, and a marginally positive relationship between teachers' content pedagogical knowledge and their assessment use. They concluded that while it seems evident that content pedagogical knowledge is necessary for teachers to make


65 effective use of assessment to form learning, this study did little to clarify the relationship. Frohbieter et al. (2011) examined “the kinds of information teachers gleaned from assessment and how they used that information in their teaching.” (p. 2) when using three different assessment systems developed for teachers to use formatively in the context of middle level math instruction. The researchers used two part interviews with teachers to determine what type of information they got from assessment and how they used it. Then they developed a scheme for categorizing both. Frohbieter et al. (2011) first characterized the level of “nuance” of the information teachers took away from assessing with a formative purpose. This included the following: least nuanced, a judgment of whether or not students got it; moderately nuanced, “varying degrees of mastery, relative mastery of topics, types of mastery (e.g., procedural vs. conceptual), and common errors” (p. 17); and highly nuanced, “detailed insights into the thinking behind their students mathematical performance . that can help them assist their students in moving forward in their understanding of mathematics,” (p. 22). The type of information teachers gleaned did not seem to relate to the type of information that the instruments they used could provide. In other words, many teachers chose to take away from their results a less nuanced interpretation than was possible. They also characterized teacher use of assessment results by level of responsiveness, including the following: 1) least, which they characterized as the teacher deciding to move on regardless of assessment results; 2) moderate, which


66 included reviewing content again, simplifying examples and/or slowing down the pacing of the lessons, simplifying the original lesson and re teaching it to the entire class, or engaging learners in additional practice; and 3) high, which included targeting particular areas of student difficulty that were common, grouping students for targeted support, and using of self and/or peer assessments. Frohbieter et al. (2011) established the ideal as teachers gleaning highly nuanced information from assessment and using it in highly responsive ways. According to these authors, “it was rare to encounter instances in which practices embodied all the ideal characteristics of formative assessment (i.e., assessment with instructional improvement as its purpose), which occurred frequently and were related to content currently being taught, as well as were integrated thoughtfully with instruction.” (p. 27). They looked for the integration of assessment and instruction and reported that, “Teachers for whom assessment and instruction were closely integrated reported selecting or designing assessment in order to learn very specific information about their students understanding that would be useful for planning instruction.” They also characterized the “cycle of use” as when, in relationship to collecting data, teachers made instructional adjustments. They identified adjustments made immediately or in planning for the next day or within the week as “true” formative assessment. The characterization of instructional response described by Frohbieter et al. (2011) contributes to my characterization of different types of teacher instructional adjustments based on student learning data, and the scale of those adjustments.


In the U.K., Black, Wiliam, and three other colleagues took a different approach. Their King's Medway Oxfordshire Formative Assessment Project (KMOFAP) was an 18 month study with 24 middle level science and math teachers focused on developing teacher formative use of assessment. These researchers introduced teachers to research about formative assessment practices, engaged them in action planning related to trying out the formative assessment practices about which they had learned, provided feedback related to classroom observations, and facilitated teacher learning community dialogue about their practice change efforts (Harris, Irving & Peterson, 2008). Teachers chose their own focus for their practice change, including one or more of the following: questioning, involving students in self and peer assessment, and marking student work (which included changes to grading and providing feedback). These authors reported changes in teacher practices, changes in the environments of teachers' classrooms, and changes in teachers' perception of their role. Although the evidence was inconsistent across participating teachers, they also found evidence of improvements in student learning (Black et al., 2003).

What do these different approaches to considering instructional adjustments as part of formative assessment practice say about the most critical characteristics of this type of data use? They provide less specific guidance, but suggest the following: additional formative consideration of student assessment results, regardless of the specific instructional response, does seem to improve student learning outcomes; teacher use of high quality assessment resources designed for formative use does not necessarily influence instruction; teachers may choose to glean less information than is


available regardless of the quality of the instrument used in data collection; while content pedagogical knowledge should matter, it is unclear how; and the level of responsiveness of teacher actions in relationship to student learning data is discernible and seems to relate to the nuance of the information they gleaned from student assessment results. Next, I consider socio cultural theories of learning to provide more guidance regarding what counts as formative assessment and what might be most critical for formative assessment practice to be effective.

A Socio Cultural Interpretation of the Role of Formative Assessment in Learning

Socio cultural theories of learning evolved from the ground breaking work of Lev Vygotsky (1978), who made a number of contributions which clarify the role of formative assessment in learning. His contributions include: 1) the conceptualization of learning as situated within the social plane, 2) the central role of tools and symbols as cultural mediators of learning, and 3) the concept of a zone of proximal development (Oxenford O'Brian, Nocon, & Sands, 2010). Drawing upon the many authors who have expanded Vygotsky's work to apply a socio cultural conceptualization of learning to practice that occurs within school based contexts (Cole, 1990, 1999; Cole & Engeström, 2006; Gallimore & Tharp, 1990; Giest & Lompscher, 2003; Gipps, 1999, 2002; Lave & Wenger, 1991; Moss, 2008; Shepard, 2000; Torrance & Pryor, 1998; Wells, 1999, 2002; Wertsch, 1990), I describe these themes, their relationship to learning within school based contexts, and their application to explaining the role of formative assessment practice in teaching and learning and how the social context of classrooms influences formative assessment practice.

PAGE 79

Learning as social interaction. Vygotsky (1978) described learning and development as fundamentally social: "Learning awakens a variety of internal developmental processes that are able to operate only (emphasis added) when the child is interacting with people in his environment and in cooperation with his peers" (p. 90). This places socio cultural theories of learning somewhere between behaviorist learning theories (with learning stimulated by sources external to the learner and a focus on observable behavior change) and constructivist theories (with learners individually constructing meaning from experience and a focus on underlying mental processes). Socio cultural theories build upon constructivist notions of the learner as the active agent in his or her own development, but also consider how a child develops through culturally and socially mediated participation in meaningful, practical activity (Rueda & Moll, 1994).

Within a socio cultural theoretical framework, development occurs through learning, and learning occurs first as social action and then as internal cognitive development. Or as Vygotsky (1978) described it, "Any function in the child's cultural development appears twice . . . first it appears between people as an interpsychological category, and then within the child as an intrapsychological category" (p. 57). Vygotsky called this internal reconstruction of external experience "internalization." One critical implication of a conceptualization of learning as internalization is that instruction should lead development, rather than respond to it (as is prescribed within developmental constructivist paradigms). Another implication is that the unit of analysis for investigating learning and development must go beyond the individual. It must focus on
the teacher to student and student to student interpsychological functioning within the contexts where learning is occurring (Wertsch, 1990), a concept which various authors have extended and described as social practice (Lave & Wenger, 1991) and activity (Cole, 1999; Cole & Engeström, 2006; Wells, 1999, 2002). I describe this as the social context of the classroom, and focus analysis on learning activity as it occurs within this social context.

Lave and Wenger (1991) developed the concept of legitimate peripheral participation as "a descriptor of engagement in social practice that entails learning as an integral constituent" (Lave & Wenger, 1991, p. 35). Legitimate participation refers to being a part of a practice, yet not at full participation, with "peripherality" suggesting "an opening, a way of gaining access to sources for understanding through growing involvement" (p. 37). Thus, according to Lave and Wenger, it is legitimate peripheral participation in social practice that facilitates learning. Lave and Wenger (1991) intentionally did not apply legitimate peripheral participation to the context of school based learning. However, the analytical perspective of legitimate peripheral participation helped to shape my theoretical framework, particularly with regard to considering social practices that include assessment, students' roles in those social practices, and defining the role of formative assessment in learning. Within school based contexts, this perspective makes it important to ask to what degree students participate in social processes that entail learning as a critical constituent, and whether they are legitimate peripheral participants in those social processes. In other words, within classrooms, are students participating substantively and moving towards more full participation as
learners, or not? Legitimate peripheral participation occurs only when students engage (peripherally) in learning activity, not when the teacher is doing something to a passive student audience. Thus, my framework includes the roles that students play in learning activity as part of the social context of the classroom.

Two related concepts, initially proposed by Vygotsky, further explicate how the social practice of formative assessment within classrooms could increase student learning. The first is how tools (signs and symbols) used during social practice culturally mediate internalization. The second is the role of more knowledgeable others in the internalization of social practice, explained through the zone of proximal development.

Cultural mediation. Vygotsky (1978) asserted that the relationship between individual learners and higher mental processes (learning) is always "mediated" by culturally and historically developed artifacts or tools: ". . . each human being's capacities . . . are crucially dependent on the practices and artifacts, developed over time within particular cultures, that are appropriated in the course of goal oriented joint activity" (Wells, 1999, p. 135). Mediating artifacts can be physical tools or symbolic tools, such as sign systems in language or mathematical symbols (Vygotsky, 1978). The tools that mediate any practice are products of the historical context and the culture in which they are created. "Human beings live in an environment transformed by the artifacts of prior generations . . . the basic function of which is to coordinate human beings with the physical world and each other . . . in that they mediate interaction with the world,
cultural artifacts can be considered tools" (Cole, 1990, p. 83). The cultural and historical context mediates learning through the use of tools during social practice (Wells, 1999). Cultural tools are more than aids to learning (Vygotsky, 1978). As artifacts of the culture that created them, the tools used actually shape the learning. According to Wertsch (1990), "human activity (on both the interpsychological and intrapsychological plane) can be understood only if we take into consideration the 'technical tools' and 'psychological tools' or 'signs' that mediate this activity" (p. 114). Or as Cole (1990) describes it, "cultural mediation fundamentally changes the structure of human psychological functions" (p. 83).

Formative assessment practice is also a culturally mediating tool. Torrance and Pryor (1998) describe formative assessment as the teacher and student jointly appropriating information about the student's learning: "The teacher must appropriate the child's response in order to realize the social construction of the lesson and in order to scaffold the social construction of cognition. The children must appropriate the cultural tools being presented to them into their developing understanding of the subject matter at hand and the social processes of schooling" (p. 20). Defining formative assessment as social process means that cultural tools could also mediate formative assessment. The actions that are part of any formative assessment episode make use of a variety of cultural tools. For example, teachers draw learning targets from state or district standards or curriculum documents, curricular resources, and/or the prior experience of teachers. Data collection can use a task, an entire assignment, a test, or an oral question, all of which are culturally mediating tools.
Even the approach used by a teacher to analyze and/or summarize student learning data is a culturally mediating tool (a scoring method). Understanding formative assessment as social practice, and how the social context of the classroom shapes formative assessment practice, includes interrogating the tools used as part of the practice.

Feedback provided by the teacher to students, based on her/his analysis and interpretation of the students' learning data, also mediates learning. In his research, Vygotsky (1978) focused on how speech, as it occurs in the context of schooling, informs the development of higher mental functions (learning). "He was concerned with how the forms of discourse encountered in the social institution of formal schooling provide the underlying framework within which concept development occurs" (Wertsch, 1990, p. 116). Speech does more than mediate social interaction; it also mediates individual remembering, thinking and reasoning when it becomes inner speech (Wells, 1999). Wells (1999) built upon Vygotsky's investigation of the role of speech in schools, arguing that learning in schools can be thought of as a semiotic apprenticeship, that is, an apprenticeship in the ways in which people make meaning through communication using signs and symbols. For Wells, the semiotic aspect of the apprenticeship involves learners making meaning through speech both as a mediator of action and as a mediator of understanding or reflection within disciplinary learning. It is within this context that feedback provided by the teacher to individual or groups of students as a response to student learning data is a psychological tool, a semiotic mediator of learning. Whether the teacher provides feedback orally, visually, in writing, or in some
combination of these formats, feedback can mediate learner reflection. That is, depending on the content of the feedback, it can provide external versions of the learners' "inner speech" as they reflect about their own understanding and the progress of their learning. Depending on the content and context within which teachers provide feedback, it may also mediate learner action. For example, feedback provided to a small group of learners as they engage in a group activity may cause them to change their approach. Within this study, I consider how teachers use feedback about student learning within their instructional processes.

Zone of proximal development. According to Wells (1999), "It is the zone of proximal development that has been Vygotsky's most important legacy to education" (p. 313). Vygotsky (1978) defined the zone of proximal development as "the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers" (p. 86). Put simply, a learner's zone of proximal development is the knowledge, skills and understanding that she/he could not demonstrate on her/his own, but could with help. This concept has important implications for instructional practice. In contrast to constructivists like Piaget, who suggested learning experiences must be appropriate to the child's current level of development, Vygotsky asserted that the most beneficial learning experiences are those that are just beyond a child's current developmental level, those that are in their zone of proximal development.
Furthermore, Vygotsky conceptualized the zone of proximal development as shaping both assessment and instructional practices (Wells, 1999). As described above, a formative assessment episode includes the following actions: identifying a learning target(s), collecting information about student learning, analyzing and interpreting that information, and taking action based on the interpretation. Among the many implications of the concept of a zone of proximal development for assessment practice, perhaps the most obvious is that assessment should involve collecting information regarding the knowledge/skills/understanding that a student cannot yet demonstrate on their own but could demonstrate with assistance (Shepard, 2000; Gipps, 1999, 2002). This is not consistent with constructivist theories of learning, which would suggest assessment should only include data collected about the learner's current level of development. A second important implication of this learning theory is that, to identify the upper bound of learners' zone of proximal development, information about learning should be collected during learning activity while learners have access to assistance from the teacher and/or more knowledgeable peers, and that information collection should be embedded within the social context in which learning activity takes place (Shepard, 2000; Gipps, 2002). This necessitates the use of informal methods for collecting this information, such as observation or oral questioning.

Moss (2008) asks what social practices within classrooms "count as assessment" (p. 222). She suggests that what we have called assessment in the past, with a focus on assessment instruments like tests and the evidence of learning they provide, is too
narrow. Rather, the conceptualization of assessment should "incorporate all of the evidence based evaluations and judgments that occur in interaction in classroom learning environments" (p. 223). In other words, the social practice of assessment in classrooms must include specific consideration of less formal interactions regarding evidence of student learning. Jordan and Putz (2004) define a continuum of assessment practice from formal to informal that includes three distinct types of assessment: documentary, discursive, and inherent.

The most formal type of assessment identified by Jordan and Putz (2004), documentary assessment, is what we have traditionally called assessment within K-12 contexts. It includes end of unit tests, weekly quizzes, and also those tests administered outside of the classroom, including annual state assessments or district interim assessments. These assessments are "documentary" because they result in documentation that is available outside of the classroom. At the other end of the continuum, Jordan and Putz (2004) describe inherent assessment as "a natural part of all socially situated activities" (p. 348). Within the context of conversation, they offer this example of inherent assessment: "a listener looks puzzled – the speaker rephrases what she just said" (p. 348). Inherent assessment happens within a classroom context when a teacher listens in to a group of students engaged in a learning activity, hears that their talk is on track and about the learning focus, and moves on to the next group without comment. Inherent assessment can include the daily activity and social interaction within classrooms between and among teachers and learners. In the middle of their continuum, Jordan and Putz (2004) describe discursive assessment as activity that
"makes the unspoken, inherent assessment explicit" (p. 350). Discursive assessment is the "reflective talk generated by a group of people engaged in a particular activity, about that activity" (p. 350). Within the context of a K-12 classroom, oral feedback shared with a group of students is an example of discursive assessment. Because the group shares discursive assessment, unlike inherent assessment, it becomes something that the group can refer back to later, or what Jordan and Putz call a "social object." Jordan and Putz assert that to be an effective and accepted member of any group, "the ability to talk about the ongoing activity in an evaluative way – to produce a discursive assessment – is crucial" (p. 350). In other words, they define discursive assessment as legitimate peripheral participation. Discursive assessment (or informal assessment) is a key element of the theoretical framework for this study. This means I will consider the degree to which teachers and students engage in discursively assessing learning, the roles students play in these processes, and how students talk about their own learning or the learning of their peers in evaluative ways. This also means I will investigate when and how student work, and/or oral feedback about student work provided to groups of students, becomes a social object.

It is from Vygotsky's concept of the zone of proximal development that Gallimore and Tharp (1990) derive a definition of instruction as "assisting (emphasis added) performance through the Zone of Proximal Development" (p. 177). This is consistent with Wells's (1999) description of the teacher role as "engaging with learners in activities to which they are committed, observing what they can already do unaided; then providing assistance (emphasis added) and guidance that helps them to . . . bring
the activity to a satisfactory completion" (p. 159). The assistance provided by teachers is dependent on the teacher identifying, through assessment (conceptualized as evidence based evaluations), the zone of proximal development of the learner. The zone of proximal development identifies the window for instruction, or at what level the assistance should be targeted, without regard for the form of that assistance (Wells, 1999). Once targeted at their zone of proximal development, assistance provided by teachers can take many forms, including the following: modeling, identification of appropriate tasks or learning opportunities, practical scaffolding through feedback, guidance, or cognitive structuring, providing explicit explanations of principles and procedures, or providing encouragement (Gallimore & Tharp, 1990; Wells, 1999; Wells & Claxton, 2002; Shepard, 2006). Framed as one component of formative assessment practice, these instructional moves all become descriptions of the potential uses that teachers can make of student learning data.

Moss (2008) asserts that understanding formative assessment as social practice involves considering the scale of teachers' use of assessment in determining what to do next. The scale of use could range from what to do next in an interaction with an individual student, in an interaction with a group of students, in adjusting the current activity of the entire class, or in rearranging plans for the entire unit or school year. This idea, that the scale of uses teachers make of evidence about student learning matters, is also included in my theoretical framework.

Identifying learners' zone of proximal development through assessment (frequently informal) and then using the zone of proximal development as the target for
instruction (defined as assistance provided by the teacher) aligns directly with the model of formative assessment as articulated by Sadler (1989) and summarized in the following formative assessment questions: 1) Where do you want to go? 2) Where are you now? and 3) How will you get there? (Heritage, 2010; Shepard, 2006). According to Shepard (2006), "This formative assessment model . . . directly corresponds to the zone of proximal development" (p. 628). The first two questions define the boundaries of the learners' zone of proximal development. That is, the learners' zone of proximal development either lies between "where the learner is now" and "where the learner wants to go," or extends beyond "where the learner wants to go." The third question, how will you get there, identifies the target for the instructional assistance that is provided within the zone of proximal development. Sadler (1989) additionally explains that only assistance that helps students make determinations about "how they will get there" facilitates learning. Thus, instructional uses of information about learning should direct learners towards where they want to go (their learning goal) but should help them take steps within their zone of proximal development. This also provides an explanation for why learners might fail to respond to instruction: the suggested next steps may be beyond their zone of proximal development.

Applying socio cultural concepts to the theoretical framework. What is the role of formative assessment practice in learning? The socio cultural answer to this question is that formative assessment culturally and socially mediates learning. Figure 2.2 is a modified version of the triangle originally proposed by Vygotsky (1978) to illustrate how cultural tools mediate learning. In this figure, formative
assessment replaces Vygotsky's generic tool as a mediator of student learning to illustrate the role of formative assessment practice in learning.

Figure 2.2. An illustration of formative assessment as a mediator of learning. [The figure adapts Vygotsky's mediational triangle, with formative assessment mediating between the student and student learning.]

Participation in formative assessment as a social practice within classrooms offers a perspective on how learning takes place. When formative assessment mediates student mental representations, changes will be evident in their use of disciplinary language, regulation of their learning activities, and how they approach disciplinary problems. When formative assessment mediates student participation in current and/or future learning activity, this will be evident in their discussion with peers, how they participate in learning activity, and the products they create during learning activity. The feedback learners receive about the progress of their learning also can serve as a semiotic mediator of learning, whether it involves interaction with other learners, the teacher, or other cultural tools (such as written text). It provides the external speech (the language, words, and symbols) that becomes the "inner" speech of students' higher
mental processes, including understanding of and reflection on disciplinary concepts. Effective feedback can change student mental representations, amplify and improve their self regulation, and may transform their participation in learning activity. Examples include when feedback on a piece of written work results in learners changing their approach to similar tasks in the future, or when oral feedback about the process in which students are engaged during a group activity changes the next steps they take, or how they approach the next activity.

Formative assessment practices mediate teacher instructional practice, transforming both teacher thinking about learning activity and how s/he shapes the learning activity within the classroom. As Moss (2008) describes it, formative assessment becomes "a way of looking at the evidence available for the ongoing interactions in the learning environment" (p. 224). This should be evident in teacher talk about their practice: how the teacher plans learning activity, and if and why s/he deviates from the plan. This should also be evident in the learning activity that occurs in the classroom and the changes the teacher makes to different scales of instructional practice based on student learning data, from adjustments to interactions with an individual student or small groups of students during learning activity, to adjustments to the current or a subsequent learning activity for the entire class, to changes in how students are grouped for subsequent learning activity, to rearranging plans for the entire unit or school year.

A socio cultural perspective of learning helps to explain not only what it means for formative assessment, as a social practice, to mediate student learning, but also
what analysis should focus on: learning activity as it occurs within the context of the classroom. This perspective also identifies key characteristics of the social context of the classroom to consider as shapers of the social practice of formative assessment. This perspective emphasizes the importance of interrogating the tools used as part of formative assessment practices. The tools include the data collected as part of formative assessment practices. For formative assessment practice to mediate learning, data should be collected during the learning process (as well as after) and should reflect students' efforts when being assisted by more knowledgeable others or other culturally mediating tools. This highlights the importance of teachers using informal collection methods such as observation or oral questioning. Information about learning should be interpreted in reference to the desired learning target, making it possible to identify the learners' zone of proximal development and its relationship to the learning goal (i.e., whether the learners' zone of proximal development contains the learning goal or not).

A socio cultural orientation to learning suggests the need to consider not only the specific formative assessment practices evident in classrooms and the tools used, but also who participates in the classroom, the nature of that participation (student and teacher roles), and the rules and norms that guide participation (Moss, 2008). The types of roles learners should play, generally in the classroom and as participants in the social practice of formative assessment, are roles characterized as legitimate peripheral participation. More knowledgeable others, whether the teacher or other students, play roles of assisting learning within the students' zone of proximal development. Educators' roles should include assisting students in constructing their next steps to
either internalize understanding or transform their participation in learning activity within their zone of proximal development. This assistance could be in the form of discursive assessment which, when provided orally to a group of students or the full class, becomes a social object that both learners and the teacher can refer back to later. Assistance could be grouping students with more knowledgeable peers. In fact, it may be important for students to get support from each other, as this is one way that students can move towards more full participation. It can be the teacher using questioning to scaffold learners' next steps.

I follow Moss (2008) and Oxenford O'Brian, Nocon and Sands (2010) in using the components of an activity system proposed by Engeström (1987, 1993, 2001) as an analytical tool to draw attention to the context of the K-12 classroom within which formative assessment practices occur.

Figure 2.3. The components of an activity system as identified by Engeström. [The figure arranges the following components around the mediational triangle: Subject, Object, Outcome, Tools, Rules or Norms, Roles, and Participants.]
This framework, depicted in Figure 2.3, represents an expansion of Vygotsky's mediational triangle to include the social context of the classroom. By identifying theoretical categories to describe the social context (object, tools, participants, roles, and rules or norms), I foreground this context and the relationship between the components of the classroom social context and formative assessment practice.

Bringing Together the Components of the Theoretical Framework

My development of the theoretical framework for this study began with an historical derivation of the current practice of formative assessment evident in K-12 classrooms within the U.S. today. I arrived at the concept of a formative assessment episode to distinguish formative assessment practice from other instructional practices evident in K-12 classrooms and to show formative assessment practice as including specific actions. This provided an initial anchor for the theoretical framework for this study.

Next, I reviewed empirical research related to formative uses of student learning data, the final step in a formative assessment episode that distinguishes it from other types of assessment activity. This review provided additional details that will allow me to distinguish between different types of formative data use within classrooms, and guidance regarding what attributes of those types of data use may be important. However, much of this empirical literature did not offer a specific theory of learning as a framework from which to interpret the findings. So, while I found and reported evidence of a relationship between specific attributes of different types of data use and
improvements in student learning, why different attributes were important was not always clear. This resulted in a list of potentially important attributes for formative feedback, self assessment, peer assessment, formative questioning, and instructional adjustments based on student learning data that I will consider as I examine these types of data use in two mathematics classrooms.

Finally, I applied a socio cultural perspective to describe the role of formative assessment in teaching and learning, and to explain the interaction between formative assessment practice and the social context of a K-12 classroom. Through this process, I identified the role of formative assessment as a social practice, or tool, that mediates teaching and learning. I developed a description of how formative assessment mediates instructional planning, instructional content and strategies, and student learning tactics, which became the second anchor for my theoretical framework for this study. I also identified critical attributes of the different actions of a formative assessment episode from a socio cultural perspective. Data collection should occur during, rather than after, the learning process, while students have access to assistance from more knowledgeable others, and may be done informally. Data analysis and interpretation should consider the learners' zone of proximal development. Data use should include feedback that learners can use and internalize as inner speech about their learning. Both teachers and students can participate in providing support to other students in their zone of proximal development, once identified through formative assessment. Understanding teacher use of learning data to determine what to do next instructionally includes considering the scale of that use.
Describing formative assessment as a social practice also clarified that formative assessment could mediate, and be mediated by, other components of the social context of the classroom within which it occurs. Considering formative assessment as social practice foregrounds the relationship between formative assessment and the social context of the classroom as critical to understanding how formative assessment works. This established that my analysis should focus on the learning activity within classrooms of which formative assessment is an integral part, or how formative assessment is something teachers and students do together. It also suggests that my analysis must consider the specific elements of the social context of the classroom overall and how it changes for different learning activities.
Chapter III
Methodology

This chapter describes the research process in which I engaged to deepen understanding of formative assessment practice as it occurred within the context of upper elementary/middle level mathematics instruction. It includes information about the purpose and associated design of the study, including restating and further explicating the research questions that guided the study. It also provides details related to what data were collected about which subjects, where, when, and how. Finally, it describes how I analyzed the data to produce the findings presented in later chapters. This chapter includes the following sections: Purpose, Research Questions (and propositions), Data Collection, and Data Analysis.

Purpose

The purpose of this study was to further develop and deepen understanding of formative assessment practice within the K-12 classroom context, applying a socio cultural theoretical framework. In the most general sense, this research describes the most critical attributes of formative assessment practice as it occurs within actual classrooms, and explains the relationship between formative assessment and the social context of K-12 classrooms. My aim was to illuminate how formative assessment practice works in real classrooms and how the construction of the social context influences formative assessment practice.

To meet these purposes, I employed a cross case analysis, considering the multiple formative assessment episodes occurring in one upper elementary and one
middle level classroom during one mathematics instructional unit in each. The focus of each case was episodes of formative assessment practice that occurred within the context of an instructional unit. Case studies depend on an analytical rather than statistical approach for interpreting findings. I based my approach on theoretical propositions and a defined logic of replication "analogous to that used in multiple experiments" (Yin, 2003, p. 47). That is, the study started with a series of propositions about formative assessment practices in situ. I compared various sources of data against the propositions to establish a chain of reasoning regarding the dimensions and attributes of formative assessment practice, and the critical aspects of the classroom context that facilitated or did not facilitate this practice. Through an iterative process of comparing the data to the propositions, revising the statements, then comparing additional data to the revised propositions, I established this chain of reasoning. I used this iterative process to analyze data from each classroom at the formative assessment episode level within the social learning context of the classroom. Then, I compared the two classrooms further to develop my chain of evidence about each proposition.
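The logic of this iterative cycle can be made explicit with a brief schematic sketch. The Python below is illustrative only and is not software used in the study; the analysis itself was qualitative, and the Proposition structure and the supports and revise functions are assumptions standing in for researcher judgment.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Proposition:
    # One theoretical proposition plus the evidence accumulated for and against it.
    statement: str
    evidence_for: List[str] = field(default_factory=list)
    evidence_against: List[str] = field(default_factory=list)


def evaluate(proposition: Proposition, data_segment: str,
             supports: Callable[[str, str], bool]) -> None:
    # Compare one coded data segment against a proposition and file it as
    # supporting or disconfirming evidence; `supports` stands in for the
    # researcher's judgment about fit.
    if supports(proposition.statement, data_segment):
        proposition.evidence_for.append(data_segment)
    else:
        proposition.evidence_against.append(data_segment)


def analysis_pass(propositions: List[Proposition], segments: List[str],
                  supports: Callable[[str, str], bool],
                  revise: Callable[[Proposition], Proposition]) -> List[Proposition]:
    # One pass of the iterative cycle: compare each data segment to each
    # proposition, then revise any proposition the accumulated evidence no
    # longer fits before comparing additional data in the next pass.
    for segment in segments:
        for proposition in propositions:
            evaluate(proposition, segment, supports)
    return [revise(p) if p.evidence_against else p for p in propositions]

In this sketch, each pass files coded data segments as supporting or disconfirming evidence and revises any proposition that no longer fits, mirroring the replication logic described above.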


This case study approach requires clearly defined propositions, a description of how the propositions relate to the unit of analysis, explication of the logic linking data from different sources to each of the propositions, and detailed description regarding how I analyzed data both to adjust and to make judgments about the propositions (Yin, 2009). I describe these study components in the following sections: 1) Research Questions (research questions and associated study propositions), 2) Data Collection, and 3) Data Analysis.

Research Questions

This study was guided by two research questions: 1) What are the critical attributes of formative assessment practice as it occurs in situ? 2) How does the social context of a K-12 classroom (i.e., object, tools, participation/roles, and rules/norms) influence formative assessment practice in situ?

Propositions related to each research question actualized the theoretical framework for the study. They defined the boundaries around what I considered within the scope of the study, and determined what data we collected to respond to each research question (Yin, 2009). The propositions also established the starting point for data analysis. The initial propositions associated with each research question emerged from my theoretical framework and included the following.

Propositions related to critical attributes of formative assessment episodes.

1. Formative assessment episodes are defined as including the following teacher and/or student actions: a) the teacher identifying (implicitly or explicitly) the learning target, and clarifying the learning target with the students; b) the teacher collecting data about student learning; c) the teacher and/or students analyzing learning data; d) the teacher and/or students interpreting data about student learning with regard to what it means for students' learning and/or instructional practice; and e) the teacher making adjustments to learning activity, and/or student(s) making adjustments to learning tactics.
2. When formative assessment practice occurs, a variety of assessment methods, including informal methods, are used during (in addition to after) learning activity to collect information about student learning.

3. Teacher use of information to determine what to do next occurs at different scales: from "in the moment" adjustments to instruction or learning tactics, to shifts in the next activity, to changes in the unit as a whole or the next time the unit is taught.

Propositions for classroom context.

4. Key features of the classroom learning context are perceptible and can be described within the following categories: the object or focus of activity, the tools that mediate learning (physical and semiotic), who participates, how they participate (roles and responsibilities), and the rules or norms guiding participation.

5. The classroom learning context shapes how data about learning are used by teachers and students.

6. The features of the cultural tools used during formative assessment episodes, and how they are used, illuminate critical features of formative assessment practices.

7. In a classroom where formative assessment practices are evident, information about learning (including mistakes or misunderstandings) becomes a "social object" (a tool) that is valued both in terms of how it is described and how it is used.

Data Collection

My description of the data collection for this study includes the following topics: the unit of analysis (what subjects, where and when); the data sources (what data we
collected and how the data sources were logically linked to the propositions); and data management (how the data were organized).

Unit of analysis. The primary unit of analysis for this study was formative assessment episodes as they occurred within a mathematics instructional unit in each of two different upper elementary/middle level classrooms. We collected data during each class session of a single instructional unit in both classrooms. This unit of analysis allowed me to consider the propositions related to the prevalence and type of formative assessment practices within each classroom, as well as how the social learning context of the classrooms influenced or was influenced by those formative assessment practices. Within each classroom, I considered data at both the formative assessment episode level and the learning activity level.

I defined formative assessment episodes as including the following components: 1) a learning target; 2) data being collected about student learning in relationship to the target; 3) the data being analyzed in relationship to the learning target; 4) the data analysis being interpreted with regard to student learning or teacher practice; and 5) the teacher using student learning data to plan or adjust instruction and/or students using that interpretation to adjust their learning tactics. I defined learning activities as the sub parts of a class session distinguished by a change occurring during the class session in one or more of the following: the object (or purpose), the tools used, who participated and how (student roles and participation structures), and/or the rules or norms. Occasionally, a learning activity extended across two class sessions, such as when an activity was started at the end of one class session
and completed during the next class session. Figure 3.1 illustrates the relationship between the unit learning context, the activity learning context, and formative assessment episodes. For this study, the learning activity was a more appropriate unit of analysis than the class session as the context that framed formative assessment practice, because the social learning context changed for different activities within each class session. Considering both the instructional unit and the learning activity level allowed me to study the social learning context for the unit as a whole, while still making distinctions when the social context changed across learning activities in ways that had the potential to influence formative assessment practice. I considered replication of formative assessment practices across the formative assessment episodes occurring during the entire instructional unit. I also considered replication of formative assessment practices across the formative assessment episodes occurring within a specific type of learning activity. Then, I compared the formative assessment episodes and the social learning context (at the unit and activity level) across the two different classrooms. Considering the differences in the social contexts of the two classrooms, both for the unit and for the learning activities occurring within the units, added power to my explanation of how the social context of a K-12 classroom (e.g., tools, participation, rules/norms, and student and teacher roles) influenced formative assessment practice in situ.
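To summarize this nested structure (formative assessment episodes located within learning activities, which sit within an instructional unit and its learning context), the following Python sketch lays out one hypothetical representation. It is not the DEMFAP coding instrument; all class and field names are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List


@dataclass
class FormativeAssessmentEpisode:
    # The five components used to identify an episode in the data.
    learning_target: str      # 1) the learning target at stake
    data_collection: str      # 2) how evidence of student learning was collected
    analysis: str             # 3) how that evidence was analyzed
    interpretation: str       # 4) what the analysis was taken to mean
    use: str                  # 5) the instructional or learning-tactic adjustment


@dataclass
class LearningActivity:
    # A sub-part of a class session, distinguished by a change in one or more
    # of these features of the social learning context.
    object_or_purpose: str
    tools: List[str] = field(default_factory=list)
    participation: str = ""   # who participated and how (roles, structures)
    rules_or_norms: str = ""
    episodes: List[FormativeAssessmentEpisode] = field(default_factory=list)


@dataclass
class InstructionalUnit:
    # The unit-level learning context that frames all activities and episodes.
    classroom: str
    unit_focus: str
    activities: List[LearningActivity] = field(default_factory=list)

    def all_episodes(self) -> List[FormativeAssessmentEpisode]:
        # Gather every episode in the unit for cross-episode (and later
        # cross-classroom) comparison.
        return [e for activity in self.activities for e in activity.episodes]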


Figure 3.1. The units of analysis for this study: the social context of formative assessment episodes occurring within a mathematics instructional unit. [The figure shows formative assessment episodes nested within a learning activity context, which is in turn nested within the unit learning context.]

Consistent with this replication approach, purposeful sampling (Yin, 2009) was used to select the classrooms included in this study. This study was part of a larger 4-year, grant-funded research project, Developing and Evaluating Measures of Formative Assessment Practices (DEMFAP), described more fully in Chapter 1 (Ruiz-Primo and Sands, PIs). I chose the classrooms for this study from among all of the classrooms selected for the larger project. In selecting classrooms for participation in the DEMFAP project, first, school districts were identified where district leadership formally promoted the use of formative assessment practices. Within these districts, district staff members (directors of curriculum, instruction and assessment, and mathematics coordinators)
recommended teachers who used formative assessment practices and had a record of producing high student performance. DEMFAP staff conducted telephone interviews with the recommended teachers and their principals and observed at least one class session to determine if the teacher's practice was consistent with the representation of her/his practice in the interview. Teachers who exhibited formative assessment practices during the observation were included in the DEMFAP project. I selected the two teachers included in this study from among those participating in the DEMFAP project and collected data in their classrooms for the DEMFAP project.

Although this was not determined systematically, the first teacher appeared to demonstrate the most frequent use of formative assessment practices during the initial observation of her classroom among all DEMFAP participants. Through observation of her classroom on three separate days during the first few weeks of the school year, I was able to confirm that formative assessment episodes occurred multiple times during a given class session. Then, during the initial review of video recordings from her classroom and across the classrooms included in the DEMFAP project, various project staff confirmed that formative assessment episodes occurred more frequently in this classroom than in any others in the study. The second teacher appeared to be in the middle, among teachers participating in the DEMFAP project, with regard to the frequency of her use of formative assessment practices, both in the initial observation and as DEMFAP project staff reviewed video recordings of her classroom. Because the focus of my study was to explain how teachers enact formative assessment practice and the role of classroom context, I selected the first classroom as a context in which I was likely to
find a large number of formative assessment episodes from which to draw for my analysis. However, I also wanted an example where formative assessment practices occurred less frequently, to draw out a potential relationship to differences in the classroom social context. I selected a teacher in the middle with regard to frequency of use of formative assessment practices, as opposed to the bottom, to ensure there would be a number of sessions during which formative assessment practices occurred.

Data were collected from both classrooms during the 2011-2012 school year. The first classroom was a sixth grade classroom in a suburban district, in a middle school (grades 6-8) with a student enrollment of 1,715. The race/ethnicity distribution among the student population included: 1% American Indian or Alaskan Native, 6% Asian, 31% Black or African American, 34% Hispanic or Latino, 22% White, and 5% two or more races indicated. Close to 66% of the students at the school were classified as eligible for free or reduced lunches. The teacher was White and in her fourth year of teaching overall and in that particular school and district. To protect her identity, I used a pseudonym and referred to this teacher as Liza.

The second classroom was a fifth grade classroom in a different suburban district, in an elementary school with a student enrollment of 359. The race/ethnicity distribution among the student population included: 1% American Indian or Alaskan Native, 4% Asian, 2% Black or African American, 21% Hispanic or Latino, and 67% White. Close to 31% of the students at the school were classified as eligible for free or reduced lunches. This teacher was also White and in her eighth year of teaching. To protect her identity, I used a pseudonym and referred to this teacher as Kari.
The individual instructional units selected as the focus of data collection were determined by the researcher and the teachers as being typical (e.g., not the first unit of the school year, and not including content that the teacher was teaching for the first time). The selected instructional unit for the first classroom (6th grade) focused on operations with fractions. The selected instructional unit for the second classroom (5th grade) focused on properties of triangles and quadrilaterals.

Data sources. Multiple sources of data were collected related to both research questions and to establish a chain of evidence for each proposition. The data sources included the following: teacher interviews (entrance, pre and post unit, and daily post observation), video/audio recordings of daily mathematics class sessions across the entire unit, in person class session observations, and classroom artifacts (digitized). The teacher entrance, pre unit, and post unit interviews occurred only one time for each teacher/instructional unit. A videographer recorded each class session. I and/or another DEMFAP staff member observed and took notes during each class session. We also collected all of the artifacts used during each class session. Finally, at the end of the class session, or sometimes immediately preceding the next class session, we conducted a post observation interview with the teacher. Figure 3.2 illustrates the sequence of data collection across these different sources.
Figure 3.2. The data collection sequence. [The figure shows the entrance interview and pre unit interview, followed by the classroom observations, artifacts, video/audio recordings, and post observation interviews collected during each class session for the entire unit, and ending with the post unit interview.]

I was the lead researcher for the DEMFAP project for the first classroom and shared lead responsibility for the second classroom. I worked with two partners from the DEMFAP project in completing the observations and post observation interviews in both classrooms. I conducted the pre and post unit interviews for both classrooms (with a partner for the second classroom). I observed the first classroom once a week during the first three weeks of the school year (3 days), every day during the focus unit (10 days), and all but three days of a subsequent unit (15 days) for the ninety minute class period. I observed the second classroom all but three days during the focus unit (11 days) for the sixty minute math instructional period. To increase consistency in our observation notes and post observation interview questions, my observation partners and I jointly observed classrooms for at least two days (we jointly observed the second classroom for over 5 days), identified independently what post observation questions
we would like to ask, and compared our questions. For the first classroom, my partners observed three days during the second (non focus) unit without me.

Linking data sources to propositions. Table 3.1 summarizes which data sources provided information about each proposition. I describe in greater detail how we collected data for each source and what information each data source provided about the propositions following the table.

Table 3.1
Data Sources for Each Proposition

Data source columns (left to right): Teacher Interviews (Entrance, Pre Unit, Post Unit, Post Observation), Observation, Video/Audio Recording, and Artifacts.

Formative Assessment Episodes
1. Formative assessment episode actions: x x x x x x x
2. Teachers' use of informal assessment methods: x x x x x
3. Information used across scales of instructional activity: x x x x x x x

Classroom Context
4. Key features of classroom learning context: x x x x x x x
5. Classroom context shapes formative assessment practice: x x x x
6. Cultural tools illuminate features: x x x x
7. Information becomes a social object: x x x

Teacher interviews. Teachers were interviewed at several different points as part of this study. As part of the DEMFAP project, each teacher was interviewed when he/she entered that project (entrance interview) at the beginning of the 2011-12 school year. Teachers were also interviewed before (pre unit) and after (post unit) the focus unit of instruction.
Finally, we interviewed teachers daily after each class session (post observation). All of the interview protocols used for this study were developed as part of the DEMFAP project and are included in Appendix A.

I analyzed some of the data collected as part of the DEMFAP entrance interview as part of this study. Data collected from this interview helped with the initial identification of the key features of the classroom social context that could influence formative assessment practice.

We conducted the pre unit teacher interview immediately before the beginning of the focus unit. Data collected through the pre unit interview helped to contextualize formative assessment practices within the larger frame of the critical learning goal(s), or object, for the unit, and illuminated the teacher's planned use of informal assessment methods and how she intended to use learning data across different scales of instructional activity. This interview focused on general information about the instructional unit, including the following:

The big ideas or critical learning targets for the unit;
How the unit fit with other instructional units during the school year;
The general instructional approach being taken (including planned learning activities, assignments, assessments);
What the teacher expected would be the progression of learning across the unit and the corresponding activity (or lesson) level learning targets;
Critical junctures during the unit when the teacher planned to collect data about student learning;
How the teacher planned to find out about students' learning during the course of the unit; and
The teacher's plans for summative uses of assessment results, including mid or end of unit quizzes/tests and grading.

The post unit interview occurred within a few days after the end of the unit and focused on the degree to which the unit went as planned. During this interview, we asked explicit questions about the teacher's perceptions of her formative assessment practice during the unit. Data collected through this interview also provided information about how the classroom context shaped information use, and how the teacher used student learning information across scales of instructional activity.

Post observation interviews of teachers occurred daily, after the class session, and focused on clarifying what happened during the class session. These interviews were semi structured in that we identified suggested questions in advance, but the interview proceeded based on what happened during the class session that day and how teachers answered initial questions during the interview. In general, these interviews addressed the following: how and why teachers chose different methods for collecting information about student learning (both formally and informally), whether data collection was planned or not, how teachers interpreted the information they collected, and how they decided what to do next.
These post observation interviews also provided an important opportunity to clarify aspects of the classroom social context: for example, what tools were used, why the teacher handled information about learning (explicitly including mistakes or misunderstandings) the way that she/he did, the roles and responsibilities that students were assigned or undertook (e.g., whether a role was assigned in an earlier class session or the student self assigned it), the rules or norms guiding student participation in classroom activity, and why the teacher structured activity the way that she did. Post observation interviews were a key source of data about aspects of formative assessment episodes not discernible from classroom video recordings or observation notes, including, for example, how the teacher interpreted learning information and how she planned to use it if the use was not obvious during the class session.

Classroom video and audio recording. Class sessions were video and audio recorded daily during the focus mathematics unit in each classroom. A videographer was in the room each day using a camera to follow the teacher throughout the class session, and the teacher wore an audio microphone. The videographer captured video and still pictures of the walls in the classroom, items posted on boards, and any image projected on classroom walls which was educational in nature. Daily classroom video and audio recordings provided the primary source of data about formative assessment episodes because they captured the details of the interactions between the teacher and individual students. They were also a source of data about certain features of the classroom context, including the tools
used, information used as a social object, and how the classroom context contributed to teacher use of student learning data.

Direct classroom observation. We observed each class session daily during the focus instructional units. Consistent with the socio cultural framework for the study, the observations focused on the following aspects of the class session:

the sequence of learning activity occurring during the class session;
the classroom learning environment, including the tools used (e.g., materials, supplies, what was written on the board) and how students were physically arranged during each activity;
student participation, including how they engaged in learning and the roles and responsibilities they undertook, or were assigned, during each learning activity;
teacher interaction with students as part of formative assessment episodes, including what may not be evident through the video or audio recording, such as when a teacher referred to student work; and
the formative assessment episodes evident during each activity.

My partners and I captured daily observation notes in a table (an example can be found in Appendix B) with the following columns: Instructional Activity (critical events, time of events); Classroom Learning Environment (materials, supplies, what is written on the board, student seating arrangements); Student Engagement (what portion of the class, how they are engaged, and student roles and responsibilities); Teacher Interaction with
Students; and Formative Assessment Practices. I entered a new row in the table for each different learning activity that occurred during the class session. The observation notes were a key data source for providing an initial list of the sequence of activities occurring during each class session, identifying the tools that were used during each activity, capturing student undertakings that were not captured on the video recording (since the camera followed the teacher), and establishing the context needed by the researcher to conduct post observation interviews after each class session. While observation notes frequently indicated that a formative assessment episode may have occurred, these notes did not capture the details of the interaction between and among students and the teacher that constituted many of the formative assessment episodes identified in the notes.

Artifacts. We collected classroom artifacts used during each class session for the entire unit, including all handouts or instructional resources used by the teacher during the class sessions. Artifacts also included completed student assignments, assessments, or notebook pages on which the teacher and students had written information. We collected a sample of these kinds of artifacts from nine students, three each identified by the teacher as representative of low, typical, and high performance among students. The final type of artifact, collected from all students, was summative student assessment results. The DEMFAP project collected two types of summative assessment results: 1) results from assessments developed and/or administered within the class as part of the unit of instruction, and 2) results from district and state administered assessments collected
over the course of the entire school year. I considered only assessments administered within the class for this study. Artifacts were a key source of data regarding the context for what was going on during different learning activities, the tools used as part of formative assessment practice, and one use of student learning information: teacher provided written feedback.

Managing data. DEMFAP staff initially managed all of the data sources for the two classrooms included in this study. Through the DEMFAP project, all data sources were converted to digital format, including scanning classroom artifacts with student and teacher writing on them, and audio recordings of teacher interviews were transcribed. The file names assigned to these data sources included a reference to the teacher, the type of data (video, artifact, etc.), and the date on which the data were collected. For this study, I also had the audio from the daily video recordings of each class session transcribed to make it possible to access, at a more detailed level, the interaction between the teacher and individual students. Also for this study, I imported all of the digital files into NVivo and created folders by teacher name and data source. I renamed all of the files by date collected, and integrated the transcripts with the associated audio and video files. Within NVivo, I also created sets, by date, in order to facilitate the integration of the various data sources in the process of characterizing the learning activity and formative assessment episodes occurring during each class session.
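The file naming and daily grouping convention just described can be illustrated with a short sketch. The exact naming pattern and folder structure used by the DEMFAP project and in NVivo are not reproduced here; the format string, function names, example labels, and dates below are assumptions for illustration only.

from collections import defaultdict
from datetime import date
from typing import Dict, List


def file_name(teacher: str, data_type: str, collected: date, extension: str) -> str:
    # Encode the teacher, the type of data, and the collection date in the name.
    return f"{teacher}_{data_type}_{collected.isoformat()}.{extension}"


def group_by_session(names: List[str]) -> Dict[str, List[str]]:
    # Group files into per-day "sets" by the date field at the end of each name,
    # mirroring the by-date sets created to integrate sources for each session.
    sets: Dict[str, List[str]] = defaultdict(list)
    for name in names:
        collected = name.rsplit(".", 1)[0].split("_")[-1]
        sets[collected].append(name)
    return dict(sets)


# Illustrative file names only; the dates and type labels are invented.
files = [
    file_name("Liza", "video", date(2011, 10, 3), "mp4"),
    file_name("Liza", "transcript", date(2011, 10, 3), "docx"),
    file_name("Liza", "artifact", date(2011, 10, 3), "pdf"),
    file_name("Liza", "postobs_interview", date(2011, 10, 4), "docx"),
]
print(group_by_session(files))

Grouping by collection date mirrors the per-class-session sets that supported integrating the video, transcript, artifact, and interview files for a given day.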

Data Analysis

My data analysis approach reflects the assumption that formative assessment episodes "are accomplished through, and situated within, the everyday routines of teacher pupil interaction in the classroom" (Torrance & Pryor, 1998, p. 131). The norms and rules structuring the activity, the roles students play during the activity, and student access to additional support in the form of tools and/or scaffolding from teachers or more knowledgeable classmates frame formative assessment episodes. Thus, I explicitly considered formative assessment episodes as they occurred within the classroom social or learning context (Cole, 1999; Cole & Engeström, 2007; Lave & Wenger, 1991; Wells, 1999; Wells & Claxton, 2002). My general approach to analysis included the following steps: 1) Characterize the classroom learning context for the unit overall (unit learning context); 2) Within each class session, characterize the learning context for each activity (activity learning context), and locate each formative assessment episode within the learning activity during which it occurred; 3) Analyze and characterize the formative assessment episodes that occurred across the entire unit; and 4) Compare the unit and activity learning contexts and the formative assessment episodes across the two classrooms included in the study (compare classrooms). Figure 3.3 illustrates this approach, including the broad steps and an overview of how I approached each step. Although my analysis steps started with my second research question (how the social context of the classroom influenced formative assessment practice), my presentation of findings for each classroom and in comparing

the two classrooms began with the first research question (the critical attributes of formative assessment practice).

Figure 3.3. An illustration of my analysis approach. The figure depicts the four broad steps of the analysis and the tools and sub steps within each: Unit Learning Context (Engeström activity system components, pattern level analysis, characterizing the social context, producing the Unit Learning Context Overview); Activity Learning Context and Formative Assessment Episodes (Daily Class Session Memo outline; memoing observation notes; coding and memoing video/audio segments, artifacts, and post observation interviews; finalizing memos; pattern analysis; evaluating evidence for the context propositions); Formative Assessment Episodes Across the Unit (the Formative Assessment Episode table, coding types of data use, identifying episodes, iteratively reviewing data sources, evaluating evidence for the formative assessment practice propositions); and Cross Classroom Comparison (comparing unit learning contexts and synthesizing evidence for each proposition across classrooms).

Unit learning context. To characterize the learning context for the unit overall and for the activities occurring during each class session, I followed Moss (2008) and Oxenford O'Brian, Nocon and Sands (2010) in using the components of an activity system, as proposed by Engeström (1987, 1993, and 2001), as an analytical tool. The framework presents an expansion of Vygotsky's (1978) mediational triangle to include the social

learning context of the classroom and identifies theoretical categories to describe that context, including the following: object, tools, community/participants, roles, and rules or norms. I integrated data collected across all class sessions, and during the various teacher interviews, to create an overview of the learning context for the unit as a whole within each classroom. This analysis responded to my first proposition that key features of the classroom learning context are perceptible and can be described within the following categories: the object or focus of the learning activity, the tools (physical and semiotic) that mediate learning, who participates, how they participate (roles and responsibilities), and the rules or norms guiding participation. I used pattern level analysis (LeCompte & Schensul, 1999) with observation notes, artifacts, and pictures of the classrooms to characterize similarities and differences within and across class sessions related to each component of the classroom context to create a first draft of the Unit Learning Context Overview. Later, I updated this draft based on a more in-depth analysis (described below) of the teacher interviews and daily class session recordings. The resulting Unit Learning Context Overview included the following information.

The object or student learning outcomes of the unit.

The tools used during the unit, including the equipment, supplies, resources posted on classroom walls and ceiling, district provided instructional materials, teacher developed instructional materials, and student notebooks.

The rules and norms related to student behavior and how learning activity was structured; this included rules/norms that were explicit (possibly posted somewhere in the classroom), those that were evident across multiple class sessions in how students behaved while in the classroom, and those that were explained by the teacher as part of the teacher interviews (e.g. when homework was assigned).

The roles and participation structures for the specific learning activities. This included "typical" activity patterns that occurred during the unit (activity with the same object, tools, norms, roles and participation patterns) and repeated across multiple class sessions. If the teacher named the "typical" learning activity, these names were included in the Unit Learning Context Overview (e.g. homework circles). If not, I used generic descriptive names (e.g. grading homework). Table 3.2 presents the learning activity names used.

Table 3.2
Learning Activity Names

Classroom One: Warm Up; Number Talk; Homework Circles; Checking Hawk Math; Homework Challenge; Solving Problems as a Class; In Class Practice; Ticket Out; Reviewing Ticket Out Results; Reference Sheet; Proficiency Poster; Unit Pre Test; Summative Quiz/Test; Quiz Circles

Classroom Two: Timed Computational Warm Up; Checking Homework; Working Problems as a Class; Progress Monitoring (homework); In Class Practice; Gap Attack; Unit Test; Reviewing Test Results Part 1, Predicting Performance; Reviewing Test Results Part 2, Interpreting and Using; Correcting Test Results

109 Activity learning context and formative assessment episodes. After I developed an initial description of the unit level learning context, I then analyzed multiple data sources (observation notes, video and audio recording, artifacts, and daily post observation teacher interviews) at the class session and learning activity level, and located formative assessment episodes within this context. This analysis addressed the following study propositions: 1) The classroom learning context (the object or focus of activity, the tools that mediate learning, who participates, how they participate, and the rules or norms guiding participation) shapes how information about learning is used by teachers and students; 2) Cultural tools (e.g. tasks, analysis strategies) used during formative assessment episodes illuminate critical features of the social practice of formative assessment; and 3) In a classroom where formative assessment practices are evident, information about learning (including mistakes and misunderstandings) becomes a “social object” (or tool) that is valued both in terms of how it is described and how it is used. My approach to characterizing the social learning context for activity during each class session involved creating a Daily Class Session Memo, using the following outline: 1. Tools used during the class session. 2. Learning Activities (activity name and associated object, roles, norms, and formative assessment episodes). 3. Overview of formative assessment practice episodes (across the class session). 4. Discussion of what was discerned from the different data sources.

To develop each class session memo, I reviewed data sources in the following order: 1) observation notes, 2) video/audio recordings (two passes), 3) artifacts (used during the class session), and 4) post observation interviews. I used an iterative process to develop these memos, in that sometimes I went back to a data source I had previously reviewed to clarify what I found in a later data source. I completed the review of all of the data sources for a class session before I moved on to the next class session. Below, I describe what I added to the Daily Class Session Memos from each data source.

Observation notes. For each class session (day), the first data source I reviewed was the observation notes. I organized observation notes into a table with the following column headings: 1) Instructional Activity, 2) Classroom Learning Environment, 3) Student Engagement, 4) Teacher Interaction with Students, and 5) Formative Assessment Practices. First, I added a list of the sequence of learning activity for the class session, taken from the first column of the observation notes table, to the Daily Class Session Memo outline. Figure 3.4 provides an example of a Daily Class Session Memo outline with activity names.

Figure 3.4. An example of an expanded daily class session memo outline.

Daily Class Session Memo [teacher identifier] [Date]
I. Tools
II. Learning Activities
A. Warm Up/Number Talk
B. Homework Circles
C. Homework Challenge
D. Practice (in class) – story problems
E. Ticket Out
III. Formative Assessment Practice Episodes
IV. What Can Be Discerned from Different Data Sources

111 Next, I noted any information that was included in the observation notes about what tools the teacher or students used as part of each activity under the sub heading for that activity; and separately listed each tool in the Tools section of the Daily Class Session Memo. Then, under each activity sub heading, I added information about the participation structure for the activity which involved describing what the teacher did, what students did (their roles), and the norms or rules structuring the activity. This data came from the third and fourth column of the observation notes. In the Formative Assessment Episodes section of the Daily Class Session Memo, I listed all of the formative assessment practices that I identified in the fifth column of the observation notes for that day. Finally, I made a note in the final section of the Daily Classroom Session Memo about what I was able to discern from the observation notes for that day. At this point, I had expanded the basic outline of the daily class session memo for that date to include, under the “Learning Activities” heading, sub headings for each learning activity which occurred during that class session with some preliminary notes about the participation structure and tools used during each activity. I also had added a preliminary list of tools used, and formative assessment episodes occurring during the class session, to the respective sections of the memo. Video/audio recordings. Next, I reviewed the video/audio recordings and associated transcripts twice. During the first pass, I applied process and descriptive codes (Saldana, 2009) to segments of the recordings/transcripts using the coding function of the Nvivo software.

During the second pass, I captured notes in the Daily Class Session Memo about the coded video/audio segments. During the first review of the video/audio recordings, I used process codes to label sections of the recordings during which the different processes of formative assessment episodes were evident. Those process codes included the following:

1. Clarifying learning – instances of the teacher clarifying the learning objective. The learning objective could focus at the unit or activity level.

2. Collecting data – classroom activity during which students generated, and the teacher collected, data about student learning. This included a data source such as students completing a math task (or multiple tasks) or talking about a task(s) they had already completed. This also included the teacher somehow collecting information from the data source, such as when she observed students completing or talking about math tasks, or when she collected written student work.

3. Analyzing data – instances of the teacher or student(s) analyzing student learning data. Frequently this involved the teacher talking about how she/he had analyzed student work on a math task.

4. Interpreting data – instances of the teacher or student(s) interpreting student learning data. Again, this frequently involved the teacher talking about her/his interpretation or sharing a tool she/he used.

5. Using data – instances of the teacher or her students using student learning data to improve learning. To identify an instance of "using data", the type of data the teacher or her students used had to be evident; the use had to respond

113 directly to the learning of an individual student, a group of students, or the entire class; and the use had to inform student learning or teacher instruction. In other words, the use had to be more than repeating the directions for the task. For example, “please complete questions number 1 through 6” was not coded as an instance of data being used, even though the teacher observed that the student had not completed all of the problems before making that comment. During this first review of the video/audio recording and transcript, I also applied two descriptive codes to sections of the recording using Nvivo software. I applied the first descriptive code, participation structure, to sections of the recording/transcript during which explanations of teacher and/or student roles and/or the rules or norms for different learning activities were evident. The teacher provided these explanations verbally, and/or posted them on a board in the classroom. I applied a second descriptive code, tools to video segments, pictures, or audio descriptions of the teacher or students using a tool to mediate learning. After this first round of coding, I reviewed the video/audio recording and transcript a second time in segments based on the learning activity sequence previously captured in the Daily Class Session Memo. During this second pass, I added information to the section of the memo corresponding to each learning activity based on the coded descriptions of student and teacher roles, and the participation structure for the activity (tagged as participation structure in Nvivo). I evaluated the codes captured in each section of the video corresponding to the learning activity to determine the degree to which the codes confirmed, deepened, or disconfirmed the existing description of the

114 participation structure for that activity initially developed from the observation notes. For example, in one instance the observation notes indicated that students worked in pairs for an activity. In the related video segment that was coded as “participation structure,” the teacher explained how she purposefully selected the student pairs to include one student who did well on a prior task and one student who did not do well. In this case, the video description deepened the existing description of the participation structure for that activity. I then made additional notes in the memo to reflect this evaluation and to update the participation structure description for each activity. Next, I identified the formative assessment episodes occurring during each learning activity. Originally, I focused on instances of the teacher collecting data about student learning as an initial indicator that a formative assessment episode had occurred. After I used this coding approach for two class sessions for each teacher, it became evident that I was coding large sections of a class session as collecting data without locating the multiple uses of data. Next, I tested a strategy of focusing on instances of teachers or students using student learning data for improvement as an indicator that a formative assessment episode could have occurred. With this approach, the individual formative assessment episodes became more evident. Thus, I switched my approach to using instances of the teacher or students using data as a trigger that a formative assessment episode could have occurred. For each instance of using data that occurred during an activity, I worked backwards in the video to identify a corresponding instance of data being collected in reference to a discernible learning target. If all three elements were evident, this indicated that a formative assessment episode had

115 occurred. Then I continued to work backwards in the video to identify other elements of the formative assessment episodes. Finally, I added notes about the components of the formative assessment episode to the appropriate section of the Daily Class Session Memo. This included copying sections in the transcribed audio into the memo of interactions between the teacher and the student(s). Frequently, multiple formative assessment episodes occurred during each learning activity. For example, during one learning activity, an investigation (students worked in groups to solve math tasks) occurred while the teacher moved around the room observing five different groups and provided oral feedback to each group about what she/he saw in their written work or heard in their conversations about the tasks. In the Daily Class Session Memo, I captured these as five different formative assessment episodes, although the data source and data collection strategy were the same for each episode. I continued this process until I had reviewed the sections of the video/audio recording, corresponding to each learning activity that occurred during the particular class session, and captured notes in the Daily Class Session Memo. Then, I made notes in the final section of the Daily Class Session Memo about what I discerned from the class session video/audio recordings. Artifacts. During the second pass through the video recording, if artifacts were referenced or observable, I concurrently reviewed those artifacts and capture notes about the artifacts in the section of the Daily Class Session Memo corresponding to that particular

activity. I also captured additional notes in the tools section of the memo about the artifact.

Post observation interview. Next, I reviewed the audio recording and associated transcript of the daily teacher post observation interview. I coded the recordings using the same process and descriptive codes that I applied to the video/audio recordings and linked transcripts in Nvivo. Frequently, only one or two of the activities that occurred during a given class session were referenced in the post observation interview. However, the post observation interview often provided information about the teacher's analysis and interpretation of data about student learning that was not evident from other sources. Sometimes, this interview identified additional formative assessment episodes when the teacher used data in ways that were not evident from viewing the video/audio recording of the class session. One frequent example of this was when the teacher altered her plans for the current class session based on her analysis and interpretation of data collected in a prior class session. Once the coding of the post observation interview was complete, I added notes to the Daily Class Session Memo for any learning activity discussed during the interview. Finally, I made a note about what information I had gleaned from the post observation interview for that day.

Finalizing the daily class session memo. After I reviewed all of the data sources and captured initial notes, I took two additional steps to finalize the Daily Class Session Memo for that date. First, I finalized the notes summarizing all of the formative assessment episodes that occurred during

117 that class session, including if the episodes were with individual students, a small group of students, or the whole class. Then, I characterized the data sources the teacher used as part of the formative assessment episodes and how the teacher collected, analyzed and interpreted the data. Next, I characterized the uses of data that were part of the formative assessment episodes, and the frequency of occurrences of the same uses. For example, during this class session, 10 of the uses of data as part of formative assessment episodes involved the teacher providing feedback to individual students. Finally, I reviewed and finalized the notes at the end of the Daily Class Session Memo about what was discernible from each of the data sources that I had analyzed for that class session. An example of a completed Daily Class Session Memo is included in Appendix C. Using the Daily Class Session Memos as a data source. Once all of the Daily Class Session Memos were completed, I reviewed each memo to gain greater understanding of how the unit progressed, the types of learning activities that made up the unit, and how the learning context shaped the formative assessment practice evident in the classroom each day. For example, in one of the classrooms, most days included grading homework. The learning context for grading homework was the same each day this activity occurred and that context influenced the formative assessment episodes evident across the unit. Based on this review of all of the Daily Class Session Memos for the unit, I updated or deepened my description of the unit learning context in the Unit Learning Context Overview for that classroom I then treated each memo as a data source for evaluating the propositions. I compared the data from each activity within each class session to the propositions and

118 explicated evidence that provided support for and against the proposition, establishing a chain of reasoning for each proposition. Results of these analyses comprise a major section of the case report for each classroom. How I established a chain of reasoning related to the proposition that in a classroom where formative assessment practices are evident, information about learning becomes a social object that is valued both in terms of how it is described and how it is used, provides an example of this analysis process. First, I identified all of the Daily Class Session Memos in which I had indicated information about learning had become a social object. Then, I considered how frequently this occurred across the entire unit and with each class session. I compared this to the frequency of formative assessment episodes more generally across the unit and within the same class sessions. Finally, I considered the learning activities during which this occurred and the degree to which there were occurrences of the same learning activity when student learning information had not become a social object. Thus, I established a chain of evidence that was generally supportive of the proposition, but with additional caveats that: 1) many formative assessment practice episodes occurred without student learning information becoming a social object, and 2) the context of the learning activity was an important determiner of how student learning information became a social object. Formative assessment episodes. My analysis and characterization of formative assessment episodes in response to the first research question, what are the critical attributes of formative assessment practice as it occurs in situ?, built upon my characterization of the learning context

within which formative assessment episodes occurred. The first proposition under this research question related to how I identified formative assessment episodes and the critical components of formative assessment episodes as they occurred in situ. The remaining propositions related to the attributes of one or more of those components (e.g. teachers frequently use informal methods to collect data). Most class sessions included multiple formative assessment episodes, and occasionally formative assessment episodes continued across two or more class sessions. Thus, to analyze and characterize the formative assessment episodes that occurred across the entire unit, I first brought together information from various data sources to describe each formative assessment episode, and then aggregated information across episodes to characterize formative assessment practice more generally across an entire unit. To support this analysis, I developed a spread sheet (separate from Nvivo) with one row per formative assessment episode. I used the major components of a formative assessment episode (learning target, collecting information, analyzing information, interpreting information, and using information) as category headings in the spread sheet. I also included a category heading in the table for the attributes I used to uniquely identify each episode (Episode ID). Within each formative assessment episode component, I identified attributes of the component that I was interested in better understanding; these became my column headings. Table 3.3 includes the category headings (formative assessment episode components and episode ID); associated column headings (attributes); and, for each column, the code values and a description of

each code. For some attributes, I identified a priori codes, or a list of likely code values, based on my theoretical framework and the propositions for the study. For others, I provided descriptive text (indicated in Table 3.3 as "open"). The process I used to construct this spread sheet is described below the table.

Table 3.3
Formative Assessment Episode Codes
(listed by component, as Attribute | Values | Description)

Episode ID
ID | 1 to n | Unique number assigned to each formative assessment episode
Date | Year/month/day | Date when the episode occurred
Activity | Open | Name of the activity during which the formative assessment episode occurred
Text | Open | Text from the transcripts that was coded as "using data"

Learning Target
Explicit | Yes, No | Whether or not the learning target was explicitly stated
Clarified | Yes, No | Whether or not the learning target was clarified with students
Tool | Open | Tool(s) used to clarify the learning target
Description | Open | What occurred to clarify the learning target with students

Collecting Information (data source)
Formality | Informal, Formal | Whether or not the teacher's data collection was formal, i.e. planned in advance and resulting in a paper based or digital record of the student's learning
Strategy (method) | Open | The method used by the teacher to collect data; three a priori codes were used: observation, questioning, and product (collecting written work)
Initiated by whom | Teacher, Student | Who initiated the interaction during which data were collected
About whose learning | Individual, Group, Class | About whose learning data were being collected: an individual student, a group of students, or all of the students in the class
Tools | Open | The data source, or tools (assignments, tasks), used to elicit information from students about their learning

Analyzing Information
Strategy | Open | Description of the strategy used to analyze information; "on the fly" was used as an a priori code
By whom | Teacher, Individual, Group | Who engaged in analyzing the data: the teacher, an individual student, or multiple students working together
Tools | Open | Description of tool(s) used during the analysis (e.g. scoring guide or rubric)

Interpreting Information
Strategy | Open | Description of the strategy used to interpret the information; "on the fly" was used as an a priori code
By whom | Teacher, Individual, Group | Who engaged in interpreting the data about student learning: the teacher, an individual student, or students working as a group
In reference to | Student, Teacher | Whether the interpretation was in reference to student learning or teacher practice
Tools | Open | Tool(s) used in interpreting the data

Using Information
Who | Teacher, Student, Both | Who used the information: the teacher, the student(s), or both the teacher and student(s)
Initiated by | Teacher, Student | Who initiated the interaction that resulted in data being used
Type of use | Activating students as resources for one another; Adjusting instruction; Grouping students; Oral feedback (individual or small group); Oral feedback (full class); Questioning (individual/small group); Questioning (full class); Students' self assessing; Students' adjusting learning tactics; Written formative feedback; Planning instruction | How the data were used (more detail about the definitions of each of these codes is provided below; the scale of the use is also coded separately)
Social Object | Yes, No | Indication of whether the formative assessment episode included student data being used as a social object
Scale of use | In the moment, Next activity, Next class session, Unit/Year, Next time teaching unit | The time scale for taking action based on student learning data; some types of uses could occur at different scales, such as adjusting instruction, which could occur at any scale
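As a concrete illustration of the spread sheet structure summarized in Table 3.3, the sketch below represents one row of the formative assessment episode table as a small data structure in Python. This is my own illustrative reconstruction, not software or data from the study; the field names, default values, and the example episode are assumptions made only for the purpose of the illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class FormativeAssessmentEpisode:
    # Episode ID component
    episode_id: int                           # unique number, 1 to n
    date: str                                 # year/month/day when the episode occurred
    activity: str                             # activity during which the episode occurred
    text: str                                 # transcript text coded as "using data"
    # Learning Target component
    target_explicit: bool = False             # was the learning target explicitly stated
    target_clarified: bool = False            # was the target clarified with students
    target_tool: Optional[str] = None         # tool(s) used to clarify the target
    # Collecting Information component
    formality: str = "informal"               # "informal" or "formal"
    collection_method: str = "observation"    # observation, questioning, or product
    collection_initiated_by: str = "teacher"  # teacher or student
    about_whose_learning: str = "individual"  # individual, group, or class
    # Analyzing and Interpreting components (open, descriptive entries)
    analysis_strategy: Optional[str] = None
    interpretation_strategy: Optional[str] = None
    # Using Information component
    used_by: str = "teacher"                  # teacher, student, or both
    type_of_use: str = ""                     # one of the types of use listed in Table 3.3
    social_object: bool = False               # was student work used as a social object
    scale_of_use: str = "in the moment"       # in the moment, next activity, next class session, unit/year, next time teaching unit

# A hypothetical episode record, for illustration only
example = FormativeAssessmentEpisode(
    episode_id=1,
    date="2011/03/14",
    activity="Warm Up",
    text="Check your estimate with your partner before you move on.",
    target_explicit=True,
    type_of_use="Oral feedback (individual or small group)",
)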

122 To develop this Formative Assessment Episode Spread Sheet, and analyze the data captured in the spread sheet, I followed the steps described below. This included a third review of several of the data sources. 1) Identify a possible formative assessment episode based on the trigger of student learning data being used, coded in the class session audio/video recording or the post observation interview as “using data” (using Nvivo). Apply an additional code, in Nvivo, to indicate the type of use indicated. 2) Identify an associated instance of data being collected about a discernible learning target. 3) Fully identify an episode if data use and data collection about a discernible learning target were identified. For each occurrence of “using data”, create a new row in the formative assessment episode spread sheet. Enter the date and activity during which “using data” occurred, and copy relevant text from the transcription of the class session video/audio recording or the teacher post observation interview to uniquely identify the episode. 4) Enter the type of use and who used the data in the appropriate columns. Characterize the scale of the use. 5) Iteratively review additional data sources to identify the attributes of the remaining components of the formative assessment episode, including the following: the learning target, data collection, data analysis, and data interpretation. I began this iterative review with the Daily Class Session Memo. If the memo did not provide enough information to code the attributes of each formative assessment episode

component, I considered additional data sources (segments of the class session video/audio recordings, artifacts, etc.).

6) Evaluate evidence for formative assessment practice propositions. This included considering how often all components of a formative assessment episode were present and explicit.

To begin the process of bringing data together from various sources, I reviewed the video/audio recordings of each class session and audio recordings of each post observation interview a third time. In these data sources, I used the process code using data to identify possible formative assessment episodes. I based this coding on the trigger of the teacher using data about student learning to adjust instructional practice, or the student(s) using data to adjust their learning tactics. For each occurrence of using data, I added a second process code, within Nvivo, to indicate the "type of use". These types of use codes emerged from the data and included the following:

1. Activating students as resources for one another (peer assessment) – students provided feedback to each other about their work.

2. Adjusting instruction – the teacher made adjustments to the planned activities based on her interpretation of student learning data or the teacher changed the learning activity for an individual or small group of students based on her interpretation of student learning data.

3. Grouping students – the teacher adjusted how she grouped students for learning activity based on learning data.

124 4. Oral feedback (individual or small group) – the teacher provided oral feedback or scaffolding to an individual student or small group of students. 5. Oral feedback to full class – the teacher provided oral feedback or scaffolding to the class as a whole. 6. Questioning (individual student or small group) – the teacher engaged an individual student or a small group of students in formative questioning. 7. Questioning (full class) – the teacher engaged the class as a group in formative questioning. 8. Student(s) self assessing – the student(s) provided information about their own analysis and interpretation of their learning. 9. Student(s) adjusting learning tactics— the student(s) identified how they planned to change their learning tactics, or changed their learning tactics based on their own learning data. 10. Written formative feedback – the teacher provided written feedback on a student product. 11. Planning instruction – the teacher indicated she used student data to plan instruction and learning activity for the next day or for subsequent days. Next, I added a row to the spread sheet for each occurrence of “using data”, and identified the date and the activity within which they occurred. Formative assessment episodes that were illuminated during a teacher post observation interview but occurred between class sessions (e.g. as part of teacher planning for the next session) were identified as occurring on the date when it was described by the teacher with the

type of use consisting of planning instruction. Frequently, the student learning data used as part of the formative assessment episode were collected on an earlier date, but I identified the date of the episode as the date when the data was actually used. In some cases, information about a single formative assessment episode was available in both the video/audio recording and in the teacher interviews, so I scrutinized each row to ensure it represented an independent formative assessment episode. I coded instances of the teacher using student work as a social object as part of using student learning data in a separate column. These instances occurred in conjunction with another formative use of data, indicating that the teacher used student work to mediate the learning of students in addition to the one who created it. The teacher used student work as a social object when she described student work independently from the student who produced it, and used the work as a tool to mediate the other students' learning. For example, a student writes his/her work on the board, the teacher refers to the work independently from the name of the student who provided it, and then asks the rest of the students to interact with that work (e.g. indicate if they agree or disagree with the approach to solving the problem).

Next, I used the Daily Class Session Memo, from the date of the formative assessment episode, as a source of data about the attributes of the remaining components of each episode. If the information included in the Daily Class Session Memo was not sufficient to fully characterize the formative assessment episode, I reviewed the relevant segment of the video/audio recording from the appropriate date. I backtracked in time, from the point within the video/audio recording when using data

126 was coded, to identify segments of the video related to: the learning target, how and what data were collected, and how the data were analyzed and interpreted. If needed, I reviewed the post observation interviews as well. These interviews helped to illuminate teacher analysis and interpretation of data that was not evident in the video/audio recordings. Additional uses of data occasionally became evident during this additional review of the post observation interviews, resulting in the addition of a row to the table of formative assessment episodes. After I recorded each formative assessment episode in a row of the spread sheet, and characterized the attributes of the episode, I used this table to develop a chain of reasoning related to the first formative assessment episode proposition, my description of the actions included in a formative assessment episode. Considering all of the formative assessment episodes identified in the classroom, I described each of the elements of the formative assessment episodes that had occurred. First, I described how each teacher formatively used student learning data, the frequency of the different types of uses and how each type looked and sounded in that classroom. Second, I described the learning targets that were the focus of each formative assessment episode, how the teacher defined them and the degree to which she clarified each with her students. Next, I described how data were collected across all of the formative assessment episodes occurring in the classroom. This included describing the frequency of different data collection methods, the relationship between data collection methods and types of data uses, and alignment between the collected data and the learning targets. Then, I described how the teacher or her students

127 analyzed and interpreted data across the formative assessment episodes. This included describing the relationship between the types of data the collected, and how they were analyzed and interpreted. The description also included differences in how data were analyzed and interpreted based on who, the teacher or her students, did it. Finally, I considered how often I was able to code the attributes of each component of a formative assessment episode, which of these components were not explicitly identifiable, and the degree to which the chain of evidence supported the proposition that these components defined a formative assessment episode. The second proposition related to formative assessment episodes focused on how teachers obtain information about student learning. It states, a variety of assessment methods, including informal methods, are used during (in addition to after) learning activity to collect information about student learning This proposition foregrounds the collection of information about learning with a specific focus on the frequency of informal data collection during different learning activities, something specifically captured in the table of formative assessment episodes. In some cases, informal assessment of student learning occurred but did so without an immediate response or direct feedback provided by the teacher. The teacher could have provided information about her interpretation and later use of this data during the post observation interview. I coded both the formality and the specific methods used during data collection in the spreadsheet for each formative assessment episode. This allowed me to consider the frequency of informal assessment during different activities and to describe the variety of methods used across all of the episodes in each classroom.
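The frequency summaries described in this section (how often each type of use occurred, in what share of all episodes, and in what share of class sessions) amount to simple counts and ratios computed over the episode table. The sketch below, written with made-up records, is intended only to make that arithmetic explicit; it is not the study's data or analysis tooling, and in the actual analysis the denominator for the session percentages would be all class sessions in the observed unit.

from collections import defaultdict

# Hypothetical (class session, type of use) records, one per formative assessment episode
episodes = [
    ("day 1", "Oral feedback (individual or small group)"),
    ("day 1", "Questioning (individual or small group)"),
    ("day 2", "Oral feedback (individual or small group)"),
    ("day 2", "Adjusting instruction"),
    ("day 3", "Oral feedback (individual or small group)"),
]
all_sessions = {session for session, _ in episodes}  # in the study, all sessions in the unit

sessions_by_use = defaultdict(list)
for session, use in episodes:
    sessions_by_use[use].append(session)

for use, sessions in sessions_by_use.items():
    pct_of_episodes = 100 * len(sessions) / len(episodes)           # share of all episodes
    pct_of_sessions = 100 * len(set(sessions)) / len(all_sessions)  # share of class sessions
    avg_when_present = len(sessions) / len(set(sessions))           # average per session, on days it occurred
    print(f"{use}: {pct_of_episodes:.1f}% of episodes, "
          f"{pct_of_sessions:.0f}% of sessions, {avg_when_present:.1f} per session")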

The final proposition related to the critical attributes of formative assessment practice addressed the scale of uses of student learning data. The proposition states that: Teacher use of information to determine what to do next occurs at different scales, from "in the moment" adjustments to instruction or learning tactics, to shifts in the next activity, to changes in the unit as a whole or the next time the unit is taught. My analysis and interpretation of data related to scale of uses involved considering both the range of uses and patterns of uses in the related columns of the formative assessment episode table in relationship to when data were collected.

Cross case analysis. After I developed a chain of reasoning related to each of the formative assessment episode propositions within each classroom, I was then able to compare my analysis across the two classrooms. Because it provided points of contrast, this cross classroom comparison deepened my understanding of what counts as formative assessment, and how critical the social or learning context is in influencing formative assessment practice. My cross case analysis followed a similar structure to my within case analysis; however, my analytical focus was on the similarities and differences across the two classrooms. First, I compared the evidence related to the first research question (critical components of formative assessment practice) across the two classrooms, which involved identifying similarities and differences in how each teacher used student learning data and the frequency of different types of uses. Because using the same category for types of data use didn't ensure these types of data use looked or sounded

129 the same across the two classrooms, I also described similarities and differences within the details of how each teacher used student learning data (e.g. how activating students as resources compared across the two classrooms). Second, I compared how each teacher identified and clarified learning targets with students as part of formative assessment episodes. Third, I compared how each teacher collected student learning data as part of the formative assessment episodes identified in her classroom. The comparison included the frequency with which each teacher used different methods to collect student learning data, the relationship between the method of data collection and types of data used, and the alignment between the data that were collected and the learning target associated with each formative assessment episode. Finally, I compared how the teachers analyzed and interpreted data. Representation of findings. The findings of this embedded, multi case study include two individual results chapters (one for each classroom) and a cross case analysis chapter. All three results chapters address the research questions guiding this study with the chapters further organized by the case study propositions. Each chapter illustrating classroom level results included: the unit learning context overview (including naming the “typical” learning activities occurring during the unit and describing the participation structure for each); presentation of the chain of evidence in support or not in support of each proposition related to the critical attributes of formative assessment episodes, and presentation of the chain of evidence in/not in support of the classroom context propositions.

130 The cross case analysis chapter included a comparison across the two classrooms of the learning context at the unit and activity level, and the chain of evidence related to each of the propositions for every research question. It also included some comparison regarding missing aspects of the propositions.

131 Chapter IV Case Report One One of the first things I noticed in Liza’s classroom was that the class did not start with the teacher talking to the whole class from the front of the room. Liza greeted students at the door as they entered the classroom. Then students would pick up a notebook from a bin by the door and make their way to their assigned table (where they sat in groups of 3 or 4). They would turn to a page in their notebook and start working on “warm up” problems projected on the board in the front of the room. Some days Liza didn’t speak to the class as a whole until 5 or 10 minutes into the class session. At the end of the first 90 minute class session, I had nine pages of notes. Formative assessment episodes occurred continuously throughout the class session. Learning activity was constant; no time was wasted in this classroom. The transitions between the different learning activities were so quick that I struggled to add a new row to my observation table for each new activity before it was well under way. It took me over a week to notice the more subtle ways that Liza used student learning data to direct her actions. Some days, at the beginning of class, Liza would have a sticky note in her hand with student names written on it. When I asked her about it, she told me that this was a list of the students she planned to check on during the warm up based on written work from the day before. I later realized that she kept an entire pack of sticky notes in the pocket of an apron she donned at the beginning of the day. She used these to provide just in time written feedback to students about their work throughout each class session.

132 The volume of formative assessment interactions Liza had with students every day were not fully evident to me until I started to watch and listen to the video and audio recordings of her class sessions. For example, in my observation notes, I might note that Liza was moving around the room, observing table groups as they worked and asking questions. In the video/audio recording, I would see and hear 7 or 8 separate formative assessment episodes with individual students or groups of students in less than 10 minutes.. This case report brings together evidence from a variety of sources to capture both the obvious and the more subtle ways that Liza engaged in formative assessment practice during a single mathematics instructional unit that included nine class sessions. The report is focused by the two research questions for the study: 1) What are the critical attributes of formative assessment practice as it occurs in situ? 2) How does the social context of a K 12 classroom (e.g. tools, participation, rules/norms, student and teacher roles) influence formative assessment practice in situ? Critical Attributes of Formative Assessment Practice My first proposition for my first research question defines the components of a formative assessment episode. To determine the usefulness of this definition, I started by determining what I would count as evidence that a formative assessment episode had occurred. As described in Chapter 3, my approach involved first identifying a formative use of student learning data, and then working back in time to locate an associated instance of the teacher collecting data about a valued learning target. Below I follow this order, starting with description regarding how Liza used student learning

information. Second, I describe how Liza identified associated learning targets and clarified the targets with her students. Third, I describe how Liza collected data about student learning, including her data collection methods, the relationship between data collection methods and types of formative data use, and the alignment between the data collected and the learning targets. My analysis of the evidence related to the second proposition, focused on teacher use of informal data collection methods, is in this section. Next, I describe how Liza analyzed and interpreted data and the relationship between data interpretation and data collection methods. Finally, I provide evidence regarding the degree to which each component of a formative assessment episode was evident in each of the formative assessment episodes I identified (part of the first proposition) and the "levels" of formative uses of student learning information (the third proposition regarding the scale of teacher use of student learning data). I summarize my evidence related to the first research question after all of these sections.

Using student learning data. Liza and her students constantly used student learning data as part of every learning activity. She rarely engaged in instructional strategies that did not include formative data use. I identified 248 formative assessment episodes during the 9 class sessions of the observed unit. On average, Liza formatively used student learning data nearly 28 times per class session; this average included class sessions during which 1/3 or more of the time was spent completing a quiz or test. Formative assessment episodes occurred close to once every three minutes in Liza's classroom.

134 Liza used student learning data in a variety of ways which I was able to categorize. The literature on formative assessment practices informed my categorization of LizaÂ’s uses of student learning data but it also emerged from the episodes occurring in LizaÂ’s classroom. My categories of data use in LizaÂ’s classroom included the following: activating students as resources for one another, adjusting instruction, oral feedback to an individual student or a small group of students, oral feedback to the full class, formative questioning and scaffolding individual students or small groups of students next steps in their learning, students adjusting their learning tactics, students self assessing, and the teacher providing written formative feedback. A number of these categories overlapped, such as when LizaÂ’s intervention for an individual student based on data collected during the prior class session involved her observing the student working on a similar task during the current session and providing oral feedback about that work. Several of these uses also occurred concurrently, such as when Liza engaged small groups of students in questioning and also provided oral feedback to the group during the same interaction. Table 4.1 includes the relative frequency of each type of use of student learning data as a percentage of all formative assessment episodes identified in LizaÂ’s classroom, the percent of all of the class sessions in the unit that included Liza using data that way, and the average number of times Liza employed each type of use of student learning data on the days during which that type of use occurred. Below, I describe what these uses of student learning data looked like and sounded like in LizaÂ’s classroom, starting with the most and working towards the least frequently occurring.

Table 4.1
Frequency of Types of Uses of Student Learning Data

Types of Student Learning Data Uses | Percent of FA Episodes | Percent of Class Sessions | Average per Class Session
Activating Students as Resources for One Another | 4.4 | 56 | 2.2
Adjusting Instruction, Selecting Students for Intervention | 5.6 | 89 | 1.3
Grouping Students | 2.0 | 33 | 1.3
Oral Feedback to Individuals or Small Groups (not with questioning) | 48.0 | 100 | 13.2
Oral Feedback and Questioning | 13.7 | 89 | 4.3
Oral Feedback to Full Class | 12.5 | 89 | 3.9
Questioning (individuals or small groups) | 9.7 | 78 | 3.4
Students Adjusting Their Learning Tactics | 0.8 | 11 | 2.0
Student Self Assessment | 2.8 | 44 | 1.8
Written Formative Feedback | 0.8 | 11 | 1.0

Oral feedback to individuals or small groups. Liza provided oral feedback to an individual student or a small group of students about mathematics task(s) they were currently engaged in completing during every class session and on average 17.5 times per class session (sometimes combining oral feedback and questions to scaffold student learning). Because oral feedback frequently occurred while students were working in small groups (of 2 or 3), it was sometimes not possible to discern the difference between feedback to an individual student and feedback to the entire group, so I combined these two categories. Liza sometimes chose the students or groups of students that she would observe, and to whom she would provide oral feedback, based on student learning data collected during the previous class session. One day at the end of the first week of the unit, I noticed that she had a sticky note in her hand as students began working on the warm up problems. During the post observation interview, she explained that the sticky note

136 had the names of the students who had not done well on the written work from the day before. She used sticky notes to remind herself who she needed to check on during the warm up to see if they were able to do problems that they weren’t able to do the prior day. I don’t know how often this occurred. Because she structured so much of her class around her observing individuals or groups of students as they engaged in mathematics tasks, I frequently failed to ask if she had a reason for checking with certain individuals or groups. Questioning. Liza’s second most frequent use of student learning information was formative questioning with individuals or small groups to scaffold the next steps in their learning which occurred over three times per class session. Frequently, a single interaction with an individual student or small group of students would include both oral feedback and formative questioning. This use of student learning information followed a similar pattern to her oral feedback to individual students or groups of students. Students would be engaged in a mathematical task, she would observe their work or their dialogue with one another or she would ask the student(s) to explain their thinking. Then, instead of, or in addition to, providing feedback about the work or the dialogue, she would ask another question, guiding them in the next steps of their learning. Often her formative questioning included Liza facilitating dialogue among the students in the small group.The following interaction between Liza and a group of three students illustrates this type of formative questioning with a group of students:Liza to student 1, “Yes, ma'am?”

Student 1, "Um, Student 2 was thinking like change this and times it times this."

Liza to student 1, "Oh, to do a strategy. Oh, okay. What do you think about that strategy?"

Student 1, "It could work. It's too big. But what I was thinking is if this is already 5 and she only has a 1/2 left, just change this."

Liza, to student 1, "Oh!" Then to the third student in the group, "So, Student 3, our concern now is whether to do the actual or the estimate. Okay? And they're kind of in an argument. Question number A1 says estimate, so I would go with that first. But, then, I love your strategy. Bring that forward later. Just so you know their conversation."

Student 3, "Uh, I rounded 3....and 2/5."

Liza, to student 3, "So you did the estimating? Oh, okay. So check in with them, see what they think. Student 2, get that on your paper."

Oral feedback to the full class. Providing oral feedback to the class as a whole was Liza's next most frequent use of student learning information. She did this almost every class session and, on average, four times during those class sessions. About half of the time, her oral feedback to the class followed her use of student work as a social object (described below). Other examples of when Liza provided oral feedback to the whole class included the following: characterizing how the class did and where students made mistakes on a ticket out task, summarizing her observations of 3 or 4 groups struggling with the same issue, and explaining why she created some additional practice problems and on what they focused.

Activating students as resources for one another and grouping students. I followed Wiliam and Thompson (2007) in using the descriptor "activating students as resources for one another" rather than the more familiar descriptor "peer feedback" because it better captured the practice evident in Liza's classroom. Liza's use of student learning information to activate students as resources for one another often overlapped with her use of student learning information to group students. Although individual occurrences of Liza activating students as resources for one another were less frequent than some other uses of student learning information (only a little over 4% of all formative assessment episodes), the relative frequency of this use of student learning information may understate its impact. How Liza seated students for the entire unit established a context for activating students as resources for one another. She based student table group assignments on the results of the unit pre assessment. Each table had one student who did well on the pre assessment, one who did poorly, and one or two students in the middle. She established and frequently reinforced the expectation that students would serve as resources for one another at their table groups. She would indicate that someone in the group might "take the lead" or play a "teacher" role in helping the rest of the group with the problem, etc. Although Liza sometimes grouped students in other ways, for most group activities students worked with their table group. There were several other ways that Liza used student learning data to group students and activate them as resources for one another. She paired students based on their ticket out results (collected at the end of one class session) to review their ticket out

results during the next class session. She gave each student a different role – assigning the student who completed the ticket out task correctly to review and provide feedback about the mistake(s) he or she saw in the other student’s work. About once a day, Liza activated students as resources for one another in ways that did not involve grouping students based on data. As she was moving around the room observing the work and discussions of small groups of students, she would determine that one group had correctly completed the task(s) while others had not. At that point, she would ask the students who had completed the task correctly to work with students from a different group. Sometimes, while interacting with an individual group, Liza would determine that one group member had completed the task correctly or had a better understanding of how to complete the task than the rest of the group, and would specifically ask that student to work with other group members on some aspect of the task. In both cases, she checked back with the “helper” student(s) about how the individual or group he/she had been working with had done. Adjusting instruction. Liza adjusted her instruction based on student learning information during almost every class session by doing the following: changing the current student learning activity, changing another learning activity during the same class session, changing the plans for the learning activities on subsequent days, and changing the learning activity for a group of students rather than the whole class. On several occasions, she made adjustments on the fly to the next learning activity planned during the current class session. Often she would wait to complete her plans for learning activities for the

140 following day based on student learning information she collected during the current class session including for example, changing the learning activities in which some or all students engaged, adding practice tasks or postponing a quiz. Student self assessment. A little less than 3% of the formative assessment episodes in Liza’s class included engaging students in self assessment and this occurred during slightly less than half of the class sessions. Most of the instances when Liza engaged students in self assessing were formal, in that students created a written record of their evaluation of their learning and/or effort towards meeting specific learning objectives. Formal student self assessment was part of the unit pre assessment, a “proficiency poster” (completed once at the beginning and once at the end of the unit) and as part of each ticket out students completed during the unit. As part of completing the unit pre assessment, students wrote notes next to each item that addressed how easy or difficult the item was for them to complete, if they had ever done a similar problem before, or if they were guessing or estimating in their response. Liza gave students specific time to write these notes during the unit pre assessment. As part of completing the proficiency poster question (at the beginning and end of the unit), students assigned a performance level (unsatisfactory, partially proficient, proficient or advanced) to their response. Before they completed each ticket out task, students rated both their level of effort and their current learning of the objective. Rubrics for assigning these ratings were on the back of the ticket out form.

141 On a few occasions, only once or twice a week, Liza asked students to provide informal, on the fly, information about their learning. In these instances, she asked students to indicate with their thumbs up, down or sideways their confidence in completing a task or meeting a learning objective. In applying my definition of a formative assessment episode to these instances, I asked “What data were collected and what data were used?” From the teacher’s perspective, she collected data about students’ perceptions of their learning. Liza used that data in a couple of ways. On a few occasions, she made an adjustment to the current learning activity, such as providing additional information to the class, or solving an additional mathematics problem. Sometimes she had students provide information about their learning as part of using student work as a social object, asking students to indicate if they agreed with how another student had approached a problem. In both cases, I categorized the data use as something other than self assessment, but coded the data collection as self assessment. Written formative feedback. Most of Liza’s written feedback on student products included a grade or performance rating and comments about a student’s response to individual tasks. While she expected students might somehow use the comments, she did not provide specific opportunities for students to read or use the comments. This practice, providing written comments with grades, raised a question of what counts as formative written feedback? If students don’t have the opportunity to read or use the written feedback, what is the likelihood that it forms learning? I chose not to “count” these instances of Liza providing written comments on student work as formative written feedback.

142 On a few occasions, Liza provided specific written feedback on student products and gave students in class time to use the written feedback. This included when she noticed that a large percentage of the class missed one step in multiplying fractions on a ticket out task. She went back through all of the student responses to the task and indicated on each student paper if they had failed to complete that step. During the next class session, Liza structured part of the Ticket out Review around students using that feedback. She also added some in class practice for students to have more chances to do that step. Liza provided written feedback and gave students time to use it when she created structured class time for them to review their summative quiz results, an activity she called Quiz Circles. She provided not only an over all performance rating but also an indication of which specific tasks students had completed incorrectly on quizzes. During Quiz Circles, students identified if the mistake(s) they made for each task was simple or not, and if getting the answer correct would require them to get additional help. As described above, Liza frequently provided “on the fly” written feedback on sticky notes or on individual student products as she was also providing oral feedback. Unfortunately, because of how these class sessions were video recorded, I did not identify all of the occurrences of this “on the fly” written feedback. Students adjusting learning tactics. Liza explicitly engaged students in making decisions about their own learning tactics a couple of times during the observed unit. On the last day of the unit, she gave

students an opportunity to work with one another to review their results from the two unit summative quizzes. Based on this review, students then had several choices they could make about their next steps: accept their grade and turn the quiz in without further corrections, turn in their quizzes with the corrections they had time to make during class, or take their quizzes home and/or bring them to an after school tutorial session to complete additional corrections. Different students chose each of these options. Identifying and clarifying learning targets. I was able to discern a learning target for each formative use of learning data identified in Liza’s classroom. Liza shared the learning targets during the class session for the vast majority of formative episodes. She always posted the overall unit learning targets (the major unit learning goals or essential questions) and the learning targets for each day (daily learning objectives) on a board in the room. However, she did not always clarify the learning target(s) associated with each particular instance of data collection with the students. She occasionally engaged students in interacting with the learning targets, in addition to posting them in the classroom. Liza posted major unit learning goals on the bulletin board in the front of the room, at the beginning of the unit, as the “essential questions.” For the observed unit, the major unit learning goals were: 1) Determine the product of two fractions, a. using benchmarks, and b. using models; and 2) Determine how you can add and subtract fractions, a. using benchmarks, and b. using models. She engaged learners in dialogue about these major unit learning goals, and introduced key vocabulary terms associated

144 with them during the first few class sessions of the unit. They became a structure within which Liza identified daily learning objectives. The daily learning objectives were always a subcomponent of, or a step towards, one of the major unit learning goals. Examples of daily objectives posted on the board included: students will add and subtract fractions using benchmarks, students will develop strategies for calculating sums and differences of fractions, students will represent the product of fractions visually, and students will master the multiplication of fractions. Some days the daily objective and the major unit learning goal were close to the same (e.g. students will master the multiplication of fractions), such as on days when students were taking a quiz. Two learning activities, investigations and tickets out, that included formative assessment episodes also involved students interacting with the associated daily learning objective. While introducing an investigation, Liza asked students to write the objective of the investigation in their notes. As they read and “coded” instructions for the investigation, Liza referenced the purpose of each investigation. Sometimes this included explaining which aspects of the investigation did not align with the learning objective, and on which aspects of the investigation students should not focus. For example, the “brownie pan” investigation prompted students to calculate the cost of the brownies purchased. Liza explained that this was not part of the daily objective. “Does anything in our objective talk about [how] students will find [out] how much money something costs? Is there anything in our objective about finding [out] how much money? (Divergent answers from students) I don't see anything about finding [out] how much money something costs. So, it (the investigation in the text) talks about [how] the pan costs $12 and how much money someone is gonna pay. Yes, that's interesting, but that's not the goal. The goal is the picture. So don't get caught up in that.”
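To make concrete what objectives such as “students will add and subtract fractions using benchmarks” and “students will represent the product of fractions visually” ask of students, the following worked example sketches both approaches. The specific numbers are illustrative only and are not drawn from the observed tasks.

\[
\frac{3}{5} + \frac{7}{8} \approx \frac{1}{2} + 1 = 1\tfrac{1}{2}
\qquad \text{(benchmark estimate: } \tfrac{3}{5} \text{ is close to } \tfrac{1}{2} \text{ and } \tfrac{7}{8} \text{ is close to } 1\text{)}
\]
\[
\frac{2}{3} \times \frac{3}{4} = \frac{6}{12} = \frac{1}{2}
\qquad \text{(area model: shade } \tfrac{3}{4} \text{ of a rectangle, then take } \tfrac{2}{3} \text{ of the shaded part)}
\]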

145 As described above, students rated their current learning and effort to date towards learning the objective of each ticket out before completing the task. Two reoccurring examples of formative assessment episodes that had learning targets different from the daily objective were the weekly Hawk math assignments and the homework challenge (three times per week). For both, Liza explained that the learning targets were ones that cut across the entire unit. Finally, Liza sometimes included warm up tasks that were not part of the daily learning objective. She always provided warm up tasks within specified categories (e.g. number sense, or computation), which indicated to the students a general topic for the task. Liza explained that the warm up tasks sometimes addressed learning targets from a prior unit tangentially related to the current unit content. Collecting data about student learning. From the starting point of identifying each instance of student learning information being used formatively, I worked backwards to determine when and how an associated instance of data collection had occurred. In this section, I describe how Liza collected data as part of formative assessment episodes, provide evidence related to the second proposition, describe the relationship between the data collection methods and the types of data use, and consider the alignment of the data collected with associated learning targets. As I open coded data collection as part of formative assessment episodes, it became evident that data collection was falling into three basic categories: (1) the teacher observing the student(s) as he/she engaged in mathematics tasks, and/or

talked about mathematics tasks; (2) the teacher asking students questions; and (3) the teacher collecting a student work product. This classification is consistent with categorizations of assessment methods used by a number of authors (see, for example, Russell & Airasian, 2012). Liza employed each of these strategies, but the most frequently used was observation. I also categorized the student work products Liza collected by the types of tasks involved, including: selected response, short constructed response, extended constructed response, and performance/demonstration. Figure 4.1 illustrates the data collection methods evident in Liza’s classroom.

Figure 4.1. An illustration of data collection methods: observation, questioning, and student products (selected response, short constructed response, extended constructed response, performance or demonstration).

My review of all of the tasks students completed as homework, as part of class work, or as part of a formal assessment (quizzes, ticket out) revealed that the vast majority of student products in Liza’s classroom were short constructed response. Typically, students were prompted to complete a mathematical task and show their work and/or explain how they got their answer. A few assignments included selected response items. She occasionally assigned a task that required students to write about their conceptual understanding of the mathematics problems they were solving. One

example was when she asked students to answer the question, “How can you prove the product of two fractions?” The directions for this task included these things to remember: (1) Complete sentences, (2) Examples, and (3) Re read your answers to see if it makes sense. During the observed class sessions, Liza did not ask students to complete any performance or demonstration tasks. Multiple formative assessment episodes occurred related to students’ completing a single mathematics task or assignment (with multiple tasks) in Liza’s classroom. Homework assignments illustrate how this worked. Students received homework assignments associated with unit content on Mondays and Wednesdays to bring to class completed the next day. These homework assignments included multiple tasks. While Liza collected homework assignments during class on Tuesdays and Thursdays and reviewed and graded them that evening, most of the formative assessment episodes based on this data source occurred before students turned in their homework assignments. This data collection involved Liza observing students interacting about their homework assignments before the products were collected. On days when homework was due, 8 or 9 formative assessment episodes occurred as Liza observed students talking with their table group about their homework assignments. Table 4.2, below, summarizes the frequency of data collection as part of formative assessment episodes by the method used. A little over 78% of the formative assessment episodes identified during this focus instructional unit included data collection through observation. For about 9% of the formative assessment episodes identified, Liza collected data through student questioning, and 12.5% through student

products. For approximately 3% of the formative assessment episodes, data collection involved students’ self assessment. This included two instances of having students self assess by thumb voting, and four instances of students indicating verbally that they agreed or disagreed with other students’ work shared on the board. The vast majority of formative assessment episodes, 88.5%, involved informal data collection. No written record (other than notes Liza may have taken) was involved in the data collection. These findings support the second proposition that a variety of assessment methods are used and that informal methods are frequently used during learning activity to collect data about student learning. The relationship between data collection methods and types of data use. Table 4.2 also illustrates the relationship between the data collection and type of data use. This table includes the percent of all formative assessment episodes for which the data were collected through observation, student products, or questioning overall, and by each type of data use. Several patterns emerged from this cross tabulation of data collection methods and types of data uses. First, Liza always grouped students based on formal data collected through student products. She always selected individuals or small groups of students for an intervention based on formal data collected through student work products. Most of the time, when she made other adjustments to instruction, those decisions were based on data collected through student work products. However, when she made adjustments to the current instructional activity or to the next activity during the same class session, she based those adjustments on data collected through

observation. Most of the time when Liza had students self assess, they used data collected through formal student work products. While it would seem obvious that written feedback used formal student work products, this was not always the case. As described above, Liza frequently provided written feedback (on a sticky note) after observing students in the process of working on math tasks or talking about math tasks they had already completed. Unfortunately, I was unable to capture the frequency of occurrence of this “on the fly” written feedback.

Table 4.2
Percent Data Collection Method by Types of Data Use
(Types of Data Use; Data Collection Methods: Percent Observation, Percent Product, Percent Questioning)
Activating Students as Resources for One Another: 3.2, 0.8, 0.4
Adjusting Instruction: 1.6, 2.4
Grouping Students: 1.6
Individual/Small Group Intervention: 1.6
Oral Feedback: 47.6, 0.4
Oral Feedback and Questioning: 10.5, 3.2
Oral Feedback to Class: 10.5, 2.0
Questioning/Scaffolding: 6.0, 3.6
Student Self Assessment: 2.4, 0.4
Student Adjusting Learning Tactics: 0.8
Written Feedback: –
Total (all types of data use): 78.2, 9.3, 12.5

Since most of the formative assessment episodes included data collection through observation, it is not surprising that when Liza provided oral feedback, she based it on data collected through observation. The vast majority of the time when she activated students as resources for one another she used data collected through

observation. When she provided oral feedback and used questioning to scaffold the next step in students’ learning, Liza collected data through observation or through questioning. Alignment of data collected and learning targets. One critical element of a formative assessment episode is not just that data are collected, but that the data collected provide information about valued student learning targets. Understanding the role of data collection in a formative assessment episode involves interrogating the degree of alignment between the collected data and the learning target. The tasks used in the data collection process were the primary source of evidence from which I evaluated the alignment to the student learning targets. I assumed the daily learning objective as the default learning target for any task in which students engaged that day. In most cases, this was appropriate. However, as Liza explained, sometimes data collection was at a different “level,” used in instructional decisions that went beyond the instructional unit, such as with the tasks included in the Hawk Math assignments, the Homework Challenge assignments, and some of the warm up tasks. For these, I identified a different learning target. I described the learning target and the types of tasks that were included for each assignment or assessment (task source). Then I used two strategies to determine the degree to which the task(s) was/were aligned with the learning target for the formative assessment episode. I considered coverage, or the degree to which the task covered the content identified in the target. Then, I compared the cognitive complexity of the learning target with the cognitive complexity of the task(s) using the Webb (1997) Depth

of Knowledge (DOK) framework. This framework includes four levels of cognitive complexity: DOK 1 = Recall and Reproduction, DOK 2 = Basic Application of Skills and Concepts, DOK 3 = Strategic Thinking and Reasoning, and DOK 4 = Extended Thinking. Table 4.3 depicts my analysis of the alignment between data collected and the learning targets for each formative assessment episode. It includes the following: the learning targets for formative assessment episodes occurring during the observed unit, associated assignments or assessments used to collect data (task source), a description of the tasks that were part of those assignments/assessments, and the alignment between the learning target and the tasks used in data collection.

152 Table 4.3 Learning Targets and Mathematical Tasks Learning Target(s) Task Source Task(s) or Task Examples Alignment Evidence Relevant Unit Learning Goal: Determine how you can add and subtract fractions, a. using benchmarks, and b. using models. Posted Daily Learning Objective: Students will demonstrate 5th grade mastery of fractional concepts. Bits and Pieces II, Part II Pre Assessment (12/6) Show your work and write your answers on the line. (Four total similar problems including two subtraction). At D.J.’s Drink Stand, Erika ordered chocolate milk made with chocolate syrup, milk, and whipped cream. The drink was, chocolate syrup, milk, and the rest whip cream. What fraction of the drink was whipped cream? (two total similar problems) Liza also asked students to make notes next to each task regarding the difficulty of the problem and the degree to which they could complete it. The tasks covered part of the relevant unit learning goal – adding and subtracting fractions using models. No explicit tasks were included that required students to use benchmarks, although they could have chosen to do so. Since this was a pre assessment, students engaged in tasks that went beyond the daily learning objective of demonstrating mastery of 5th grade fractional concepts. However, Liza only expected students to attempt each task and then write about that attempt. Thus, students’ self assessment of their level of mastery was additional data being collected. From that perspective, the learning objective and the tasks aligned. Relevant Unit Learning Goal: Determine the product of two fractions, a. using benchmarks, and b. using models. Posted Daily Learning Objective: Students will demonstrate 5th grade mastery of fractional concepts. Proficiency poster problem (12/6) How can you prove the product of two fractions? (Make sure your answer includes complete sentences and examples.) This task covered only student conceptual understanding of how to determine the product of two fractions. It did not require them to use benchmarks and models. This task has a DOK level 3. The learning target has a DOK level 2. The task is more cognitively complex than the learning target. Add and subtract fractions using benchmarks. Warm Up Computation: Number Sense: Are each of these fractions close to zero, half or whole? Explain or The first warm up task addressed a target from a prior unit. The second warm up task and the learning target

153 Learning Target(s) Task Source Task(s) or Task Examples Alignment Evidence show your answer. and both required students to use benchmarks. However, the task did not require students to add fractions. The task only covered one part of the daily learning target. The DOK for both are the same. Add and subtract fractions using benchmarks. Homework 13: Estimating Sums of Fractions (12/7) Estimate Sums. (eight similar problems) Using benchmarks, estimate each sum. (for each the answer is greater than 1 or less than 1) (eighteen similar problems) All of the homework tasks were adding and subtracting fractions using benchmarks, or estimates. The objective and the tasks covered the same content. The DOK of the target and tasks were also the same. Students will develop strategies for calculating sums and differences of fractions. Investigation 2.2 (12/7) Two problems with three parts each including using benchmarks to estimate sums of fractions, and determining if an approach to adding fractions was appropriate. Tasks cover the target. The DOK of the task and target were the same. Students will demonstrate mastery of finding sums and differences of fractions. Students will write estimation strategies on fraction + and – problems. Number Talk (12/8) Individually find the answer with no paper. 100 – 31.8 The task is aligned with a learning target from a prior unit. However, it included one aspect of finding differences of fractions. The DOK of the task and target are the same. Students will demonstrate mastery of finding sums and differences of fractions. Homework 14: Adding and Subtracting mixed numbers (12/8) Write your answer as a mixed number in simplest form. Show all work to receive credit. (twelve similar problems) The tasks cover the entire learning target. The DOK of the learning target and the tasks are the same. Students will demonstrate mastery of finding sums and differences of fractions. Partner Switch Problems (in class practice) (12/8) and The tasks cover the entire learning target. The DOK of the learning target and the tasks are the same. Students will demonstrate mastery of finding sums and differences of fractions. Ticket out Problem (12/8) The tasks cover the entire learning target. The DOK of the learning target and the tasks are the same.

154 Learning Target(s) Task Source Task(s) or Task Examples Alignment Evidence Students will demonstrate mastery of finding sums and differences of fractions. Adding and Subtracting Story Problems (in class practice) (12/8) Solve each problem and circle your answer! Amanda has 3 pizzas and Vinney has 5 pizzas. How many more pizzas does Vinney have than Amanda? Amanda as 3 pizzas and Vinney has 5 pizzas. How many more pizzas does Vinney have than Amanda? (Four total pairs of similar tasks – eight tasks total). The tasks go beyond the learning target– requiring students to translate a description into a mathematical equation and then find the sum/difference between two fractions. The DOK of the learning target and the tasks are the same. Students will demonstrate mastery of finding sums and differences of fractions. Warm Up Computation: 1.2 2.1. Number Sense: Are each of these fractions closer to zero, half or whole? The first task is aligned with a learning target from a prior unit. The second task requires students to identify benchmarks, but doesn’t require them to add or subtract fractions. The DOK for the task and the target are the same. Students will demonstrate mastery of finding sums and differences of fractions. Hawk Math Term 2 Sheet 5 (12/9) Adding, subtracting, multiplying and dividing whole numbers (5 each). Adding with carrying. Subtracting with borrowing. Multiplying three digits. Dividing four digits. Adding fractions. Changing to an improper fraction. Write a decimal in words. Solve exponents with addition and multiplication. Identify the next in a sequence. Add decimals with carrying. Subtract decimals with borrowing. Find the missing angle in a triangle. Measure a line using a ruler. Find prime factorization. Find a perimeter. Measure an angle. Find a mean of a set of whole numbers. Solve simple story problems with whole numbers (3). As described above, the tasks included in the Hawk Math assignments address a variety of learning targets from across the entire school year.

155 Learning Target(s) Task Source Task(s) or Task Examples Alignment Evidence Students will demonstrate mastery of finding sums and differences of fractions. Ticket out Review (additional practice problem used as part of this activity) The task and learning objective cover the same content. The DOK is the same. Unit Learning Goal: Determine how you can add and subtract fractions, a. using benchmarks, and b. using models. Daily Learning Objective: Students will demonstrate mastery of finding sums and differences of fractions. Bits and Pieces II Part I Quiz (20 points) (12/9) Show your work, and write your answers on the line. Make sure to simplify all answers and change any final answers from improper fractions to mixed numbers if necessary. (5 similar addition and subtraction problems, including two with mixed numbers). Roberto is making chocolate chip cookies. He needs 2 cups of flour for the recipe. He has 2/3 cup of flour in one bag and 1 3/5 cups in another bag. a) Estimate the total amount of flour Roberto has. Use estimation or benchmarks to prove your answer. Show your work. b) Solve to find out exactly how much flour Roberto has. (three similar story problems) The tasks covered all aspects of the unit learning goal and daily objective. Some tasks also required students to translate a word problem into an equation. This goes beyond the learning objectives as stated. The DOK of the learning objectives and tasks are the same. Students will represent the product of 2 fractions visually. Homework 15: Review Completed when a substitute teacher was in the class (12/13) Show all work to receive credit. Problems included: 2 problems subtraction of decimals with borrowing, 2 problems adding and subtracting fractions, One problem multiplying decimals, What is wrong with this problem, ? What is the greatest common factor of 36 and 54? What is the least Review problems did not address the learning goals for the unit or the learning objective for the day.

156 Learning Target(s) Task Source Task(s) or Task Examples Alignment Evidence common multiple of 2 and 51? Four problems expanding exponents. Two problems asking students to express fractions as a decimal. Students will represent the product of fractions visually. Investigation 3.1 (12/13) Aunt Serena buys of a brownie pan that is full. 1. Draw a picture to show how the brownie pan might look before Aunt Serena buys her brownies. 2. Use a different colored pencil to show the part of the brownies that Aunt Serena buys. 3. What fraction of the whole pan does Aunt Serena buy? (three sets of similar problems) The investigation tasks covered all aspects of the learning target. Some tasks went beyond, although the teacher gave oral directions to students not to focus on these aspects. The tasks also include students translating word problems into equations. The DOK is the same. Determine the product of two fractions using models. Homework 14: Multiplying Fractions (12/14) Draw a model to find each product. (six similar problems) Find each product (or multiply). Simplify. (seven similar) (two similar) (eleven similar) (six similar) Solve and change to improper fractions A certain granola cereal has 240 calories in each 1 cup serving. How many calories are in a serving of 1 1/3 cups of the cereal? (two similar problems) The tasks covered the learning target. Some tasks went beyond the learning objective, also including students translating word problems into equations. The DOK is the same.

157 Learning Target(s) Task Source Task(s) or Task Examples Alignment Evidence Students will demonstrate mastery of finding products of two fractions. Ticket out Problem (12/14) The task and learning target cover the same content and are at the same DOK level. Students will demonstrate mastery of finding products of two fractions. Warm Up Computation: 300 – 99.28 Number sense: The first task is from a prior unit. The task and learning target cover the same content and are at the same DOK level. Students will demonstrate mastery of finding products of two fractions. Homework 16 (12/15) Show all work to receive credit. Write each mixed number as an improper fraction (10 similar problems) Multiply (six similar problems) Multiply (matching, four possible answers for each set of three problems) (12 similar problems) The tasks cover the content of the learning target. They also include specific focus on one step of multiplying fractions, writing all numbers as a fraction before multiplying. The DOK is the same. Reduce or simplify improper fractions (step 3 of multiplying two fractions). Extra Practice: Multiplying Fractions, Step 3 (in class practice) (12/15) Reduce or simplify the improper fractions to lowest terms. Show all work. (six similar problems) The tasks cover the same content as the target and the DOK is the same. Demonstrate mastery of finding the product of two fractions. Warm Up Computation: 100 3.51 Number sense: The first task is from a prior unit. The number sense tasks cover the same content as the learning target and the DOK is the same. Demonstrate mastery of finding the product of two fractions. Hawk Math, Term 2, Sheet 6 (12/16) (same as Hawk Math, Term 2, Sheet 5 described above) As described above, the tasks included in the Hawk Math assignments address a variety of learning targets from across the entire school year.

158 Learning Target(s) Task Source Task(s) or Task Examples Alignment Evidence Demonstrate mastery of finding the product of two fractions. Bits and Pieces II, Part II Quiz (12/16) Show your work and write your answers on the line. Make sure to simplify all answers and change any final answers from improper fractions to mixed numbers if necessary. (three similar problems) On a particular map of Denmark, 1 inch equals 12 miles. How many miles will 3 inches equal? The tasks covered the learning target. Some tasks went beyond the learning target, also including students translating word problems into equations. The DOK is the same. In general, the tasks in which students engaged covered the same content as the associated learning target. The tasks were also consistent with the cognitive complexity of the learning target or required greater cognitive complexity from the students than the learning target. Some tasks were not intended to be aligned with the daily objective, but rather addressed learning objectives from prior units or from across the entire school year. This included the weekly Hawk Math assignments, Homework Challenge tasks, a review assignment given when Liza was sick and had a substitute teacher, and some of the warm up tasks. In the majority of cases, the tasks students completed focused on the daily learning objective, creating the potential for uses of student leaning data during the current class session or the next class session. Some tasks (such as those included in the pre unit assessment and the proficiency poster) went beyond the daily unit learning objective focusing one of the unit learning goals. Those tasks created the potential for uses of student learning that were at a level beyond the current class session, or even the next class session, but within the current unit.
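As an illustration of the two-part alignment judgment described above (content coverage plus a comparison of Webb DOK levels), the short sketch below shows one way such codes could be recorded and compared. It is a minimal sketch only; the function, field names, and sample values are hypothetical and are not records from the study.

# Minimal sketch of the two-part alignment check: content coverage plus a
# comparison of Webb (1997) Depth of Knowledge levels. All values shown are
# hypothetical illustrations, not data from the observed unit.

DOK_LABELS = {
    1: "Recall and Reproduction",
    2: "Basic Application of Skills and Concepts",
    3: "Strategic Thinking and Reasoning",
    4: "Extended Thinking",
}

def alignment_summary(target_dok, task_dok, coverage):
    """Summarize alignment; coverage is 'full', 'partial', or 'beyond'."""
    if task_dok == target_dok:
        dok_note = "task and target at the same DOK level"
    elif task_dok > target_dok:
        dok_note = "task more cognitively complex than the target"
    else:
        dok_note = "task less cognitively complex than the target"
    return "coverage: %s; %s (%s vs. %s)" % (
        coverage, dok_note, DOK_LABELS[task_dok], DOK_LABELS[target_dok])

# Hypothetical example: a DOK 3 writing task paired with a DOK 2 target.
print(alignment_summary(target_dok=2, task_dok=3, coverage="partial"))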

159 Some tasks focused on learning objectives that went beyond the current unit, creating the potential for uses of student learning data for future units. These included tasks completed as part of the weekly Hawk Math assignments (6 formative assessment episodes), the Homework Challenge (7 formative assessment episodes), and some of the warm up tasks (one task on each of three days were subtraction of multi digit decimals). Analyzing and interpreting data about student learning. For the vast majority of formative assessment episodes occurring in Liza’s classroom, the person analyzing, interpreting and using student learning data was Liza. Liza asked an individual student or students working in a group to analyze their learning information, “on the fly” on about four occasions during the observed unit. On 10 occasions, she asked students individually or working in small groups to interpret data. She had students interpret their learning data using a variety of strategies. She had them give themselves a rating, determine which of their incorrect responses were small and which were big mistakes, and she had them characterize qualitatively the level of difficulty of different tasks. In the vast majority of the formative assessment episodes that involved Liza analyzing and interpreting student learning data, close to 90%, her analysis and interpretation was “on the fly.” In other words, when Liza collected data about student learning through observation or questioning and used it immediately, how she analyzed and interpreted the data was implicit and not directly observable. Because the vast majority of formative assessment episodes involved informal data collection methods –

160 observing student work or student dialogue or asking student(s) questions (~90%) – most of her analysis and interpretation of data was not observable; it was inferred. When Liza collected data formally (through a student product), her analysis and interpretation was observable. Liza analyzed student learning data collected through student work products consistently across all work products. This may have been because the vast majority of student work products she collected included selected response or short constructed response tasks that had a single correct answer. When analyzing student work products, Liza identified if the students’ responses to each task were correct, and/or if some part of the students’ response to the task was correct. The exceptions were when students completed tasks that included showing their work, explaining their response, or writing about a math concept. While these tasks or parts of tasks did not have one correct response, Liza did not use any tools, such as a rating scale or rubric, to score these tasks. Rather, the criteria she used to analyze these kinds of student products were implicit. How Liza interpreted data collected through student work products varied based on what the work product was. Liza’s interpretation of student work products frequently involved considering patterns in the tasks or aspects of tasks with which multiple students struggled, or patterns in how groups of students performed. When she identified tasks with which multiple students struggled, she would either adjust instruction the next class session to address that issue, or if it was on a Hawk Math assignment, use a similar task as part of a warm up activity. When her interpretation was regarding patterns in how groups of students performed, she used the information

to adjust the learning experiences for those students. Sometimes she would identify a group of students within the class that performed significantly less well than the rest of the class. This led to her providing a specific intervention within that group, or putting them on a list of students to check in with during the next class session. Her interpretation also frequently involved putting students into categories – high, middle, low – based on their performance. She used this approach to establish groups or pairs of students to work with one another during the next class session. One example was the class session after a ticket out: while students reviewed their results, Liza assigned student partners based on their performance on the ticket out task – she assigned every low student to work with a high and possibly a middle performing student. When interpreting the unit pre assessment or the two summative quizzes, Liza also did “item analysis.” This involved identifying which students had answered each item incorrectly, and further grouping those students as having made a small or a big mistake. She captured these lists of students in two columns (small and big mistakes) next to each item on a copy of the test. They became her initial lists of students with whom to “check in” after the pre assessment, and formed the basis for how she seated students for the unit. Table 4.4 provides a summary of Liza’s analysis and interpretation approaches by the work products collected.
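The “item analysis” and performance-grouping approach just described can also be expressed procedurally. The sketch below, shown before the summary in Table 4.4, illustrates one way the sorting could work; the records, names, and cut points are hypothetical and are not data from the study.

# Minimal sketch of the "item analysis" described above: for each item, list
# which students answered incorrectly and whether the mistake was small or
# big, then sort students into high/middle/low groups by proportion correct.
# All records, names, and cut points below are hypothetical.

def item_analysis(responses):
    """responses maps student -> {item: 'correct' | 'small' | 'big'}."""
    by_item = {}
    for student, items in responses.items():
        for item, result in items.items():
            if result != "correct":
                by_item.setdefault(item, {"small": [], "big": []})[result].append(student)
    return by_item

def performance_groups(percent_correct, high=0.8, low=0.5):
    """Sort students into high/middle/low groups (hypothetical cut points)."""
    groups = {"high": [], "middle": [], "low": []}
    for student, pct in percent_correct.items():
        level = "high" if pct >= high else ("low" if pct < low else "middle")
        groups[level].append(student)
    return groups

# Hypothetical use: two students, two items.
print(item_analysis({"A": {"1": "correct", "2": "big"}, "B": {"1": "small", "2": "correct"}}))
print(performance_groups({"A": 0.5, "B": 0.9}))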

Table 4.4
Analysis and Interpretation Approaches by Work Product
Pre Assessment: Identify correct and incorrect responses. Provide an overall number correct. Identify which students (by name) made small and large mistakes on each item (she called this “item analysis”). Sort students into groups by performance – high, middle, low.
Proficiency Poster: Provide a performance rating (U, PP, P, A).
Ticket Out: Provide a performance rating (U, PP, P, A). Identify which students made small vs. large mistakes. Sort students into groups by performance – high, middle, low.
Homework Challenge Tracking: Provide a performance rating (U, PP, P, A).
Homework Assignments focused on unit content (twice a week): Provide a performance rating (U, PP, P, A).
Hawk Math Assignments (weekly homework): Provide a performance rating (U, PP, P, A). Identify items with which multiple students struggled to include on a future warm up task.
Unit Test/Quiz: Identify correct and incorrect responses. Provide an overall number correct and performance rating (U, PP, P, A). Identify which students (by name) made small and large mistakes on each item (she called this “item analysis”).
Progress Monitoring Assessment: Identify correct and incorrect responses. Provide an overall number correct and performance rating (U, PP, P, A).

The scale of use of student learning data. I classified the temporal scale of formative data use by describing the relationship between when data were collected and when they were used as different levels or “scales” of formative decisions. Identifying “levels of uses” of student learning information is one way I considered how different components of a formative assessment practice episode occurred together. The categories I used to describe the “levels of uses” of student learning data included the following: in the moment, during

a subsequent learning activity (in the same class session), during the next class session, within the current unit (subsequent to the next class session), and beyond the current unit. These categories refer to when the teacher or students used student learning data in relationship to when they were collected. Table 4.5 summarizes the frequency of occurrences of uses of learning information at different “levels.”

Table 4.5
Levels of Uses of Student Learning Information by Types of Uses
(Types of Use; Levels of Uses: In the Moment, Next Activity, Next Class Session, Next Unit)
Activating Peers as Resources: 3.6%, 0.0%, 0.0%, 0.8%
Adjusting Instruction: 1.2%, 0.8%, 1.6%, 0.4%
Grouping Students: 0.0%, 0.0%, 0.8%, 0.8%
Individual/Small Group Intervention: 0.0%, 0.0%, 1.6%, 0.0%
Oral Feedback: 47.6%, 0.0%, 0.4%, 0.0%
Oral Feedback and Questioning: 13.7%, 0.0%, 0.0%, 0.0%
Oral Feedback to Class: 11.3%, 0.0%, 0.4%, 0.8%
Formative Questioning: 9.7%, 0.0%, 0.0%, 0.0%
Student Self Assessment: 2.4%, 0.0%, 0.0%, 0.4%
Students Adjusting Learning Tactics: 0.8%, 0.0%, 0.0%, 0.0%
Written Formative Feedback: 0.0%, 0.0%, 0.8%, 0.0%
Grand Total: 90.3%, 0.8%, 5.7%, 3.2%

The vast majority (90%) of formative assessment episodes identified in Liza’s classroom involved using student learning information in the moment, or during the current learning activity. Almost all of the oral feedback provided to individuals or groups of students used information in the moment. There were only a couple of examples of Liza providing oral feedback to an individual student about data collected during a prior class session. All instances of Liza using student learning information with

164 questioning to scaffold learnersÂ’ next steps were in the moment. When Liza activated students as resources for one another without planning for grouping students based on data collected during a prior class session, these uses were in the moment. Most of the instances of Liza providing oral feedback to the whole class were in the moment. Uses of student learning information during a subsequent learning activity in the same class session were less frequent, accounting for less than 1% of the formative assessment episodes. Liza occasionally engaged students in self assessing their learning effort/progress and then made an adjustment to the next learning activity. She also indicated, during a post observation interview, that she adjusted an activity later in the class session based on what she was observing in an earlier activity. About six percent of the identified uses of student learning information resulted in changes to the next class session. Liza based most of the adjustments she made to instruction on data collected during a prior class session. This included Liza developing and providing a separate intervention for a group of students. Liza usually provided formal written feedback during the class session after she collected the data. Liza formally collected data from student products at the beginning of the unit using the pre assessment and the proficiency poster response (students answer the question how do you prove the product of two fractions). Several formative assessment episodes that included using data collected through these two sources involved Liza making an adjustment to instruction that cut across the entire unit, influencing more than the just the next class session. Finally, Liza engaged students in self assessment about their performance on the final unit quizzes (one on adding/subtracting fractions

and one on multiplying fractions) at the end of the unit, but with students making decisions for how they would respond as part of the subsequent unit. The description above provides evidence in support of the third proposition that teacher use of information about student learning to determine what to do next occurs at different levels, from “in the moment” adjustments to instruction or learning tactics, to shifts in the next activity, to changes in the unit as a whole, or to changes the next time the unit is taught. It was possible to discern the different levels of Liza’s use of student learning information to determine what to do next. Summary of the evidence related to formative assessment episodes. Liza constantly engaged in formative assessment. Formative assessment episodes occurred, on average, close to every three minutes across all of the class sessions I observed. Her classroom provided substantial opportunities for me to examine formative assessment in context, to determine if my definition and approach to identifying formative assessment episodes were helpful, and to identify and describe the critical attributes of formative assessment practice in the context of an actual K 12 classroom. My definition and approach to identifying formative assessment episodes allowed me to identify 248 episodes across nine class sessions in Liza’s classroom. This definition was useful. However, not all of the components of my definition were evident in every episode I identified. Because of how I identified formative assessment episodes, three components (formative use of student learning data, data collection, and an associated learning target) were evident in each of the observed formative assessment episodes in Liza’s classroom. For the vast majority of formative assessment episodes,

Liza communicated the learning target to students, at least in writing. Exceptions included episodes for which the learning targets came from a prior or future instructional unit. Liza explained this in the post observation interviews; however, she did not clarify these targets with students during the observed unit. In the vast majority of formative assessment episodes (88% of all episodes), the person doing the analysis and interpretation of student learning data was Liza, and both her analysis and interpretation of student learning were done “on the fly.” So while the results of her analysis and interpretation were evident in how she used the student learning data, I inferred that the analysis and interpretation occurred. My analysis of each component of the 248 formative assessment episodes occurring in Liza’s classroom provided additional evidence regarding the critical attributes of formative assessment practice in context. I considered the types and frequencies of different types of data use, the learning targets associated with formative assessment episodes, data collection, and data analysis and interpretation. Next, I describe the critical attributes of formative assessment emerging from my analysis of the components of the formative assessment episodes occurring in Liza’s classroom. I identified ten different types of formative data use occurring in Liza’s classroom. Some types of data use were more common in Liza’s classroom than others. The vast majority of formative assessment episodes (71.4%) occurring in Liza’s classroom involved her interacting with individual students or small groups of students to provide oral feedback (48% of episodes), provide oral feedback in conjunction with engaging students in formative questioning (13.7%), or engaging students in formative questioning without

oral feedback (9.7%). Liza structured how she seated students and organized frequent learning activities in ways that activated students as resources for one another. In Liza’s classroom, formative assessment mediated the learning experiences of individual students or small groups of students multiple times throughout each class session. Liza also made adjustments to instruction, or selected students for intervention, almost twice per class session. Formative assessment mediated both Liza’s planning and her in the moment adjustments to her students’ learning experiences. I could discern a learning target for each identified use of student learning data in Liza’s classroom. Most of the formative assessment episodes involved learning targets associated with the content of the observed unit and mediated learning activity occurring during the same class period as the data collection. Liza posted these learning targets in the room each day. Liza did not share learning targets with students for formative assessment episodes that involved content not associated with the current unit. The characteristics of learning targets that teachers choose to share or not with students may also be a key attribute of formative assessment practice. Liza used a variety of data collection strategies as part of formative assessment episodes including observations, questioning, and student work products. However, the vast majority of formative assessment episodes, over 78%, used data Liza collected through observation. In combination with data collected through oral questioning, over 87% of the formative assessment episodes in Liza’s class used data collected informally. This evidence supported the second proposition that when formative assessment practice occurs, a variety of assessment methods, including informal methods, are used

168 during (in addition to after) learning activity to collect information about student learning. The evidence from Liza’s classroom pointed to the importance of explicitly considering informal data collection also. In Liza’s classroom her use of informal data collection strategies overlapped substantially with her most common types of data use – to adjust the in the moment learning experiences of individual students or small groups of students. In general, the tasks in which students engaged as part of formative assessment episodes covered the same content as the associated learning targets. The tasks were also consistent with the cognitive complexity of the learning target, or required greater cognitive complexity from the students than the learning target. This was true for tasks and learning targets associated with the unit content, as well as tasks and targets from prior units or that cut across the entire school year. This is a condition for formative assessment to use accurate student learning data. While the formative assessment occurring in Liza’s classroom met this condition, all classrooms may not meet this condition and thus it is important to consider. The vast majority of the formative assessment episodes occurring in Liza’s classroom involved Liza, rather than her students analyzing and interpreting student learning data. In most of these instances (~ 90%), Liza’s analysis and interpretation was “on the fly” and not observable. This suggests it is important to consider whether formative assessment practice that includes informal data collection always includes “on the fly” data interpretation.
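A minimal sketch of how summary percentages like those above (for example, the share of episodes whose data were collected through observation, or collected informally) could be tallied from coded episodes is shown below. The episode records and field names are hypothetical and are not data from the study.

# Minimal sketch: tally coded formative assessment episodes by a field
# (e.g., data collection method or formality) and report percentages.
# The records below are hypothetical illustrations.
from collections import Counter

def percent_by(episodes, field):
    counts = Counter(episode[field] for episode in episodes)
    total = len(episodes)
    return {value: round(100.0 * n / total, 1) for value, n in counts.items()}

episodes = [
    {"collection": "observation", "formality": "informal"},
    {"collection": "questioning", "formality": "informal"},
    {"collection": "product", "formality": "formal"},
]
print(percent_by(episodes, "collection"))  # e.g., {'observation': 33.3, ...}
print(percent_by(episodes, "formality"))   # e.g., {'informal': 66.7, 'formal': 33.3}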

Liza’s analysis and interpretation of data collected through student work products was not “on the fly.” She generally used the same approach for most student products, identifying correct and incorrect responses and providing a performance rating – unsatisfactory, partially proficient, proficient, or advanced. She sometimes also interpreted student learning data by grouping students according to whether they answered tasks correctly or incorrectly and then categorizing their mistakes as large or small. This allowed her to group students for additional and more targeted assistance. How different analysis and interpretation approaches support different data uses is another key consideration regarding the attributes of formative assessment practice. I identified four distinct levels or scales of data use in Liza’s classroom – in the moment (as part of the current activity), next activity, next class session, and later in the unit – occurring across multiple formative assessment episodes. This evidence supported my third proposition that teacher use of information to determine what to do next occurs at different scales. The evidence also suggests that the scale of use is another key attribute of formative assessment practice to consider.
How the Social Context Influences Formative Assessment Practice in Situ
The first proposition related to the second research question identifies the components of “social context” of a classroom. As described in Chapters 2 and 3, I used components of an activity system proposed by Engeström (1987, 1993, and 2001) to describe this context. My analyses associated with this proposition included providing a description of what I called the Unit Learning Context for Liza’s classroom organized by

170 the key features identified in the proposition. This Unit Learning Context overview provides evidence related to the first proposition. Formative assessment episodes occurred not just within the social context of the instructional unit, but also within the social context of the learning activities that made up that unit. Within a given class session, the social context of the learning activity changed several times. The social context of the learning activities created the specific conditions within which formative assessment episodes did or did not occur. Thus, I grounded my analysis of the relationship between the classroom social context and formative assessment episodes in the unit learning context, but focused more explicitly on the social context of the individual unit learning activities. After characterizing the instructional unit learning context, I described how the social context of the classroom at the “activity” level created conditions within which formative assessment practices could occur, guided by the remaining propositions. While it is somewhat artificial to talk about the components of the social learning context of the classroom separately since they all work together to influence formative assessment practice, it is also difficult to talk about them all at the same time. Therefore, I organized the remainder of this section of the case report by the components of the social learning context focusing my analysis at the learning activity level, rather than the overall instructional unit level. The activity learning context components are addressed in the following order: object or focus of learning activity, the tools (physical and semiotic) that mediated learning, participation structures (who participated and how they participated), and the rules or norms guiding participation.


For each component, I describe both the relationship of that component to the identified formative assessment episodes and how that component of the social context related to the other components. In the subsection addressing the tools used in Liza's classroom, I provide evidence related to the third and fourth propositions (tools illuminating features of formative assessment practices, and student learning information becoming a social object).

Unit learning context. The unit learning context establishes the frame within which I investigated the relationship between the social context of Liza's classroom and the formative assessment practice occurring within it. In this unit learning context overview, I describe the social learning context of this classroom during the observed instructional unit, including the object(s) of the unit, the tools used, the rules/norms evident, and the participation structures (who participated and how). This context establishes details to which I refer later in my analysis at the learning activity level.

Object. On the first day of the unit, Liza posted the object for the unit on the classroom wall as the "essential questions." Her introduction of the unit also included specific talk about these essential questions. These remained on the wall throughout the observed instructional unit. Each day, Liza posted the math objective(s) and the daily language objective(s) on a dry erase board in the back of the room (the wall students faced as they entered the classroom). These objectives changed daily. She frequently referenced both the essential questions and the daily objectives while introducing learning activities.


Tools. A variety of tools mediated learning in Liza's classroom, including the following: equipment, supplies, resources posted on classroom walls and hanging from the classroom ceiling, instructional materials (provided by the district), teacher developed instructional materials, and student math notebooks. I describe each type of tool below in greater detail. Liza located physical tools strategically throughout the room.

Equipment. The following equipment was used frequently (multiple times a week) during math instruction: a SMART board and attached LCD projector, a hand held board that the teacher could use to bring up images on the SMART board (write on it, etc.), a second LCD projector (mounted from the ceiling and aimed at the left wall), a document camera (that allowed images to be projected on the left wall), and a phone. In addition, the classroom had the following equipment, seldom used during math instruction: a small printer, an Apple laptop computer (used by the teacher before or after class), five desktop computers against the left wall of the room (used once with students), and a television hanging from the ceiling in the corner of the room by the door that was used for all school announcements at the beginning of each day. Several items used during math instruction were not standard for classrooms in this school. Liza wrote grants to acquire her SMART board, document camera, and the hand held board that allowed her to use the SMART board from any location in the room.

Supplies. The supplies used in Liza's classroom included individual student supplies, those shared by the whole class, and those Liza kept on her person. Students were responsible for bringing writing utensils and paper to each class session and a 3


ring binder used as their "math notebook" for the entire year. Liza's students used all of these supplies every day during math learning activities. Shared class supplies were stored in a cabinet in the back right corner of the room. Those used during math instruction included dry erase boards, dry erase markers, erasers, and math manipulatives (e.g., 1" tiles). Finally, Liza wore an apron every day that included three large pockets containing a pencil sharpener, pens and pencils, sticky notes, "hot rods" (slips of paper given out to indicate the student had engaged in exceptional behavior), and conduct cards (slips of paper given out to indicate the student had violated a behavioral rule).

Resources posted on classroom walls and ceiling. Liza used the walls and ceiling of the classroom to locate resources used frequently during math instruction. Unlike many classrooms, Liza's used the walls almost exclusively for learning resources. Card stock sheets with vocabulary terms for the unit, printed on both sides in big black block print, hung from the ceiling with fishing line, one over each table. They were color coded; the color of the cards corresponded to the color of the sheet with the unit title posted above the classroom door and the sheets on which the essential questions were printed. Liza changed these vocabulary terms at the beginning of each unit and called on the tables using the vocabulary term hanging over each table. Liza posted cards with vocabulary terms from prior units around the room in the wall header (the top of the wall just below the ceiling). Just to the right of the door students entered the classroom through, on the front wall, was a bulletin board where Liza posted seating charts (one for each math


period), and a pocket containing blank discipline referral cards. She changed the seating charts for each unit. Students would pick up the blank discipline referral cards if they entered the classroom late. The primary rule for the classroom was posted in the header of the front left wall (above the door): "Don't disrupt others' learning." To the right of this (still at the top of the wall, in the header) Liza posted sheets of construction paper (each a different color) with the name of each unit that had been completed by that point in the year. Below these sheets was another bulletin board with the "Essential Questions." In the middle of the front wall was a SMART board. Liza used the SMART board every day as part of various learning activities. To the right of the SMART board was a bulletin board (Liza called it a proficiency poster) for the first period class. Above the poster, for the duration of the observed unit, the teacher posted an 8.5 x 11 sheet with the following question written on it: "How can I prove the product of two fractions?" Below this sheet, the board was divided into four sections horizontally from top to bottom: A (for advanced, dark pink), P (for proficient, light pink), PP (for partially proficient, light green), and U (for unsatisfactory, dark green), with a 2" wide black strip of paper dividing the P and the PP sections. Liza posted student responses to this question, written on 3 x 5 index cards (with no identifying information on the front), on this "proficiency poster" once on the first day and once on the last day of the unit. In the middle of the right wall was a bulletin board with information about the topics for the unit. For the focus unit (operations with fractions), this board had 2 flow maps (algorithms) posted on it, one with the steps in adding/subtracting fractions and


one with the steps in multiplying fractions. Liza posted each algorithm after the class session when she introduced it. To the right of the storage cabinet containing shared supplies, on the back wall, was a dry erase board organized into the following sections: SSR (sustained silent reading, which was a 30 minute class period right before lunch); homework (for math classes); agenda; language objective; math objective; and a blank section where the teacher wrote the day of the week, the date, and any additional information about the day (e.g., "It is picture day"). Liza changed the writing on this board each day to provide specific information to students that pertained to that day. She referred students to this board at least once during each class session. Lower on the same wall, to the right of the dry erase board, a couple of small pockets were stapled to the wall, one containing conduct cards and one containing blank index cards. Liza accessed these resources if she was in the back of the room and needed them. In the middle of the left wall of the classroom was a large chalkboard divided into sections using colored tape. The board included one column (with two rows) for each class period. Liza labeled the top row HW Challenge (for homework challenge) and the bottom row PAT (for preferred activity time). Liza tracked the class homework completion rate in the HW row (for the three days each week when homework was due and for the week as a whole on Fridays) and the "free" time that the class earned/lost each day (based on behavioral norms) in the PAT row. To the right side of this board was a sheet that indicated the different behaviors for which students could earn time and how much time. This PAT menu included the following: Warm Up (1:00), No Tardies


(1:00), All IDs (2:00), All Pencils (2:00), Group Activities (3:00), All HW (5:00), and Planners (5:00). There was a screen that Liza pulled down in front of this chalkboard to project information using the second LCD projector (hanging from the ceiling) or the document camera. Liza used this screen when students graded their weekly Hawk Math assignments (described below) and to project student work using the document camera (an alternative to showing student work on the SMART board).

District provided instructional materials. The primary instructional resource used for math in this school was the Prentice Hall Connected Mathematics Resources for 6th grade. For the observed unit, Liza used Bits and Pieces II (fraction operations). She distributed a copy of a student workbook to each student at the beginning of the unit. Students did not write in these workbooks, and they returned them at the end of the unit. The "investigations" in these workbooks included directions for hands on activities and problems for students to solve. For Bits and Pieces II, Liza used investigations 2 (addition and subtraction with fractions) and 3 (multiplication with fractions). This school also used Math Mates to supplement the primary instructional resource. Liza used the Math Mates (renamed Hawk Math) as a weekly assignment. These two sided worksheets included 24 problems, with the same problem number associated with the same 6th grade math topic each week (e.g., #1 was always addition, #21 was always area and perimeter). The Hawk Math worksheets included small circles at the bottom of each page with numbers corresponding to the problem numbers, which students filled in to track their progress on certain topics over time.


Table 4.6 Teacher Developed Tools

Warm Up (form). Students used warm up forms daily. The forms included a full page sheet containing five boxes with the following titles: algebra, computation, probability and statistics, number sense, and geometry and measurement. They had space at the top for the students to write their name, date, and class period. Each day Liza posted warm up problems on the SMART board using the same form. Students would copy the problem(s) for that day into the appropriate box on a blank warm up sheet stored in their math notebook and complete the problems.

Reference Sheets (for each unit). Students completed one reference sheet for each unit. These full page sheets had the unit title plus "Reference Sheet" at the top. The front side was a table with the following column headings: Concept, Strategies or Tips, and Example. While each reference sheet had the same format, the content of the columns pertained to the unit. The back of the reference sheet included an empty table with the following column headers: Vocab Word, Definition, and Picture or Example. Students wrote definitions and pictures or examples specific to the unit in this table. The teacher distributed the reference sheets with the first two columns on the front page completed for the unit, one row per concept. As one of the final activities of the unit, students filled in their own examples in the third column on the front of the sheet and completed the definitions and provided examples on the back.

Homework Challenge (tracking sheet). Students used Homework Challenge forms three days a week. These full size pink pages were the same on both sides. Below the title "Homework Challenge" was the statement "moving between fractions, decimals, and percents." Below that was this question: "What percentage of students turned in their homework today?" Below this question was a table with the following column headers: Fraction, Showing your work, Decimal, Showing your work, and Percent. Students used one row in this table each day that homework was assigned to calculate the homework completion rate for the class. Below this table was another statement, "Now let's find our weekly average!", and a table with one row and the following column headings: Add up the percents, Divide by how many there are, and Average %. Students used this table at the bottom of the page to capture their calculation of the weekly average homework completion rate each Friday.

Ticket Out (forms). Students used Ticket Out forms once or twice a week. They were 1/3 page white sheets with printing on both sides. On the first side was a box for the student to write his/her name, date, and class period. Below that, on the left of the sheet, was a larger box with this question: "What was today's Objective?" To the right of that box was a larger box titled "Question and Answer." The other side of the sheet included two side by side tables. The first was titled "Learning Rubric" and had five categories (A, P, PP, US, and O) in the first column, with descriptions next to each regarding learning of the lesson objective. The second table was titled "Effort Rubric" and was similarly structured, but the descriptions next to each category pertained to effort.


Hawk Math (tracking sheet). Students used the Hawk Math Tracking Sheet. This 1 page sheet had a table that listed, in the first column, all of the Math Mates topics by problem number (e.g., order of operations, adding mixed numbers, area of rectangles). Then the table had columns numbered 1 to 4 for the number of weeks for which Math Mates were completed during that term. At the bottom was a row where students indicated whether or not they got their parent's signature on the Hawk Math worksheet that week. Finally, below this table was a row with boxes under each sheet for students to indicate the number correct out of the total number of problems on their Hawk Math assignment each week. Students used these sheets to track their correct responses on Hawk Math weekly assignments by topic once every four weeks.

Unit Assessment Overview. Students used the Unit Assessment Overview form after each summative quiz. These two sided forms included space for students to enter their name, date, and class period. A table on the front of these sheets included the following columns: #, Big Idea, Correct, Incorrect, Simple error, and I need help! The problem numbers and the big idea for each problem were identified. For each problem, students colored in or put an 'X' in either the "correct" or "incorrect" column. For incorrect problems, they indicated whether the incorrect response represented a simple error or something about which they needed more help. A table on the back of this sheet included the following column headers: Problem #, Correct answer and work, and Why did you get this problem incorrect. Students used this table to provide corrections to the problems they missed on their test. They turned it in to the teacher with their tests.

Progress Monitoring Assessment. The Progress Monitoring Assessment instrument included 27 items addressing different 6th grade math topics. The 6th grade math team developed this instrument using items from their primary curricular resource. Teachers administered it 4 or 5 times during the school year to measure student learning of all of the targets for 6th grade math. One administration happened at the close of the observed unit.

Progress Monitoring Chart. Students used a Progress Monitoring Chart in conjunction with the Progress Monitoring Assessment to track their progress after each of the five administrations of the assessment throughout the year. This single sheet included a table with the following column headings: question number, question type, and test date (repeated in 5 columns). The first column had one blank row, and then each row was numbered 1 through 27. The next column also had one blank row, and then each row indicated what that item number measured (e.g., subtracting decimals, multiplying decimals, LCM, dividing fractions, and area of trapezoid). The day after the administration of the Progress Monitoring Assessment, students wrote the date of the administration of the test in the row directly below the header row. Then they colored the cells in each row for the given test date green for correct responses and red for incorrect responses. At the bottom of the chart, they indicated the total number correct/incorrect for that date.

Teacher developed materials. Liza developed a number of additional materials/tools used by students in the classroom, many repeatedly. Liza stored copies


of the materials used across different class sessions in a filing cabinet in the front left corner of the room, and students would access them when they needed them. An example of each of these teacher developed materials is included in Appendix D. Table 4.6 provides a description of each of the teacher developed materials Liza and her students used.

Student math notebooks. Each student maintained her/his own math notebook. These were three ring binders with different sections, where students kept blank versions of the teacher made materials (used repeatedly for the same type of activity across different units, as described above) and their completed work. These notebooks included the following types of materials: Warm Up sheets (blank and completed), unstructured notes, structured "reference sheets" for each unit, completed homework, Homework Challenge tracking sheets, investigations (notes and problems), Hawk Math worksheets, the Hawk Math Tracking Sheet, the Progress Monitoring Chart, Ticket Out forms, and quizzes and tests. Students left their math notebooks in the classroom, storing them between class sessions in a cupboard against the left front wall (near the door to the classroom). They would pick up their math notebooks as they entered the classroom each day and keep them at their seats throughout the class session. We collected and reviewed examples of nine student notebooks for this study.

Rules and norms. The rules and norms in this classroom were evident from a variety of sources. Liza posted one rule publicly (on the front wall of the classroom). Behavioral norms were evident through teacher talk and observed student behavior (observation notes


and classroom video/audio recordings). Norms related to homework were explained by the teacher (in the pre unit interview) and were also evident in how class sessions were structured. Finally, Liza explained norms regarding student seating during a post observation interview. I describe each of these types of norms and rules below.

Public, or written, explicit rules. The only "written" rule posted in this classroom was over the door of the classroom: "Don't disrupt others' learning." One example Liza provided of how she supported this rule was eliminating the electronic pencil sharpener from the room; she had a small manual pencil sharpener, which she carried in a pocket of the apron that she wore each day. When students needed to sharpen their pencils, they would tell the teacher and she would give them the sharpener to use at their desks. Liza also brought students facial tissues during quizzes or tests so they wouldn't need to get up from their seats to blow their noses.

Student behavior norms. The school's positive behavior support program, Preferred Activity Time (PAT), was the basis for many of the behavior norms in this classroom. School leaders frequently referenced these school wide behavioral rules/norms during daily announcements, and Liza and her students strictly adhered to them. Students earned in class PAT for different behaviors. The daily behaviors that earned them time were posted on the left wall of the classroom and included having the highest average homework completion rate of all of the math classes, bringing a writing utensil each day, being on time to class, having their student ID, etc. Liza also gave them extra PAT minutes when they were on task or transitioned quickly between activities. Students lost PAT minutes for not transitioning quickly


between activities, talking at inappropriate times, or engaging in activity not focused on the current math learning activity. Individual behavior that was inappropriate (being tardy to class, talking out of turn, being out of his/her seat, disrupting or being disrespectful to others) resulted in the student first receiving a warning and then a conduct card. Students who got started with tasks early, transitioned more quickly, or helped other students received "hot rods," which were positive behavioral rewards. In class free time usually occurred on a Friday (one day I observed included some free time, the day before the holiday break) and involved students doing math puzzles or games while watching part of a movie. Students who received "conduct cards" were required to attend detention. In general, this classroom was very quiet. Students talked freely as they entered the room (before the bell), when directed to as part of group discussion, and during earned in class free time. Otherwise, they talked very little. After students had focused on individual work (during a quiz or practicing problems) for more than 10 minutes, Liza often stopped and facilitated a "stretch" break (2 to 3 minutes). Liza frequently gave her students choices about classroom activity. These included, for example, whether Liza or another student would demonstrate how to do a problem, who would show another way to do a problem from among students disagreeing with the first student's approach, and how to spend preferred activity time. The day after this class misbehaved for a substitute teacher provided an example of how Liza required students to take responsibility for their own behavior. The warm up activity for this day was for students to write a letter about their behavior. Later,


during the same class session (after they had done some math activities), Liza initiated a class discussion about what happened while the substitute teacher was in the class the day before. She told them the information that the substitute had left for her (notes) and indicated that their PAT time would reflect some reduction in free time. Then she asked them to talk about their experience the day before. Several students offered descriptions of what happened, including some information about how the substitute teacher had deviated significantly from their typical class routines. Liza frequently provided oral feedback to the class about how things were going behaviorally, including, for example, what she noticed about their engagement with a particular task, how they helped one another, their expressions of appreciation to one another, and how quickly they transitioned between activities. She also frequently reminded them not to do or say things that might be unkind to another student. One example included the following: "If you get your test back and you turn to the back page and you see a grade you like, should you go YES. We don't want to make someone feel badly. I already prepped you by saying some people are not going to be super happy. So we don't want to make someone feel badly. This grade is for you and for me and your parents or guardians, not for your neighbor to say Ooh, Aah! Look what I did. It's not about that. So make sure that the test score stays with you."

Homework norms. Liza assigned homework on Mondays (to turn in on Tuesdays) and Wednesdays (to turn in on Thursdays) that pertained directly to the work students had been doing in class for the current unit. On days when homework was due, Liza moved around the room during the warm up activity to check on student completion of


their homework. On these days, students were also given time in class (after the warm up activity) to talk with other students about their homework, what they understood, what they didn't understand, etc. After this homework talk (Homework Circles), the group helper from each table collected the homework for the table and turned it in to a shelf/cubby by the front door. Students also received a homework assignment called "Hawk Math" (from the Math Mates curricular resource described above) every Monday, which wasn't due until Friday. On Fridays, Liza projected the Hawk Math assignment, with correct answers indicated, on the left wall and students corrected their own assignments. Students used the Hawk Math Tracking Sheet to track their correct responses on the Hawk Math assignments by problem number/topic. On Tuesdays, Thursdays, and Fridays, students calculated the homework completion rate for the class that day, and on Friday for the whole week. Two students in the class were failing to complete their daily homework, bringing the class homework completion rate down. These students asked to be excluded from the class completion rate calculation and from the resulting rewards earned by the class. They were required to do math work when the rest of the class had free time. The class discussed this option and voted to exclude them from the class completion rate. After this, they were no longer included in the completion rate calculation. During my observations of the classroom, one of the excluded students determined he could complete his homework more regularly. The class voted to add him back into the completion rate calculation. If students were absent on a day when homework was distributed, they were expected to


do 30 minutes of math practice before coming to class the next day. If they did so, it counted as "completing" homework for the class completion rate calculation on the next day. Students who did not bring their homework to class on the days when assignments were due were frequently required to call a parent/guardian immediately, during class, to tell him/her that they did not bring their homework to class. Liza usually returned graded homework assignments to students the day after they turned them in. Students stored their completed homework assignments in their notebooks.

Student seating. As described above, students changed seats for each new unit based on the unit pre test results. The day after the pre test, Liza posted a new seating chart on the bulletin board by the door. That day, as they entered the room, students checked the seating chart and then sat in their new seats. Later in the class session, Liza checked where students sat to make sure they had interpreted the seating chart correctly.

Participation and roles. Who participated and how they participated (roles) in the learning activity in this classroom, and the recurring patterns of participation over time within the classroom, occurred at various levels, including the learning activity, class session, unit, and school year. As explained in Chapter 3, I defined learning activities as the sub parts of a class session distinguished by a change occurring during the class session in one or more of the following: the object (or purpose), the tools used, who participated and how (student roles and participation structures), and/or the rules or norms. Below, I first describe student and teacher roles and participation patterns in the various types of learning activities evident in this classroom. Next, I describe how different learning activities


were put together to form patterns of activity for "typical class" sessions. Then, I describe how the teacher combined learning activities, or groups of learning activities, to form patterns across the unit and the entire school year.

Learning activity participation structures. The various types of math learning activities evident during class sessions across the observed unit each had different, but predictable, teacher to student and student to student discourse patterns and participation structures. Table 4.7 includes a description of the participation structures for each type of learning activity evident during the observed unit in Liza's classroom. Frequently, Liza named these different types of learning activities; I used her names in Table 4.7. Many of these activities were supported by a teacher made tool, which I also identified in Table 4.7. I provided a more detailed description of each tool above. The teacher also provided directions or discussion protocols projected on the SMART board to support some activities, listed with the tools in Table 4.7.

Table 4.7 Learning Activity Participation Structures and Tools

Warm Up. Students started each class session with a warm up activity. Most days the teacher structured the warm up activity with a teacher created tool (the warm up form). She projected the day's specific warm up problems on the SMART board when students entered the room. Students would get seated immediately, write the day's warm up problem(s) on a copy of the warm up form (which they kept blank copies of in their notebooks), and independently begin working on the warm up problems. Since the observed class was the first of the day, this continued through the daily school announcements. When prompted, students talked to their neighbors about what they got on the warm up problems.


Warm up problems were then "solved as a class" (see the description below). Sometimes the teacher provided additional information about the warm up problems, such as showing the class how to solve one or reminding students about more general principles they should have applied while solving the problems. Once at the beginning and once at the end of the unit, Liza replaced the warm up activity with students solving a problem for the proficiency poster (see below). On the day after Liza was absent, she replaced the warm up activity with students writing about their behavior with the substitute teacher. During the warm up activity, the teacher moved around the room observing student work, asking questions, and providing oral feedback. If it was a day when homework was due, she also checked students' homework completion while students worked on the warm up. Frequently, the teacher used this time to check in with individual students on individual learning challenges related to work done the day before. Tools: warm up form; warm up problems posted on the SMART board.

Homework Circles. Homework Circles occurred on Tuesdays and Thursdays, when unit focused homework was due, and immediately followed the warm up. The teacher projected directions on the SMART board to structure the student discourse and set a timer for the activity. Students would then talk with their table groups, usually for about 5 minutes (sometimes longer if they asked for more time), about their homework. Some students made revisions to their homework during these table discussions. The teacher moved around the room interacting with groups of students during this time, providing oral feedback, responding to questions, etc. When time was up, table helpers collected all of the students' homework from their table and turned it in. Tools: Homework Circles directions (prompts: I don't understand; What did you get for the first question?; How did you get that answer?; Let me help you solve this problem; Raise your hands when your group is finished); homework assignments; timer.

Checking Hawk Math. Immediately following the warm up on Fridays, students checked their answers on their weekly Hawk Math assignments. The teacher would project correct answers on the left wall. Students would move their seats to face


that wall or move to the floor of the room with a lap board. They checked each problem on the Hawk Math assignment for that week, coloring in the circle at the bottom of the sheet with the corresponding number. Sometimes individual students would ask the teacher about certain problems. The teacher would respond to questions generally, but did not initiate interactions with students as they checked their Hawk Math assignments. Students would turn their assignments in after they checked them. After several weeks (one such occasion occurred while this classroom was being observed), students would complete the Hawk Math Tracking Form immediately following checking Hawk Math. Tools: Hawk Math assignments (standard format each week; individual items varied); answer sheet.

Homework Challenge. The teacher structured the Homework Challenge activity using a teacher created tool, the Homework Challenge Tracking Sheet. The teacher walked the whole class through the first few steps of calculating the percent of students in the class who had completed the homework that day by providing the total number of students present and the number who completed the homework. She then gave students time to independently calculate the homework completion percentage (from the fraction). Then she provided the correct answer. On Fridays, the teacher also guided students through finding the average classroom homework completion percentage for the week. She gave them time to do the actual computation and then provided the correct answer to the class. Tools: Homework Challenge Tracking Sheet.

Solving Problems as a Class. As part of completing the daily warm up, when practicing how to do certain kinds of problems, as part of sharing the results of small group investigations, or while reviewing the results of a "ticket out," students solved problems/completed math tasks as a class. This activity followed a typical participation structure. The teacher identified what problem students would solve. Sometimes the teacher gave students a choice about whether they wanted her to solve the problem or wanted other students to solve it. Every time they were given this choice, the students chose to have other students show their work. Usually, Liza would then ask students to volunteer to show the class how they solved the problem. Sometimes she did not ask for volunteers. Liza selected the first student, who would then write


her/his work on the SMART board. Then she would ask if students agreed or disagreed with the work on the board. If any students disagreed, Liza had the student who initially showed his/her work identify one of the "disagreeing students" to show his/her work. This frequently continued until 3 or even 4 students had demonstrated their work on the SMART board. Then the teacher would compare the different approaches. Often, students selected the next student without prompting from the teacher. If the class was solving more than one problem, the last student to show her/his work on the first problem selected the first student to show her/his work for the next problem. This pattern continued until all of the problems were completed. Tools: SMART board; assigned in class math tasks.

Number Talk. Number Talk was a different format for the warm up activity that sometimes occurred on days when homework was not due (Mondays or Wednesdays). The teacher established how students would engage in Number Talk and the problem about which they would talk. Number Talk focused on improving students' number sense and involved an estimation problem. Liza projected the "number talk instructions" and the problem on the SMART board. Students followed the discourse protocol suggested by the teacher. Once they arrived at step three (whole class discuss/show the answer and how to find it), participation followed a pattern similar to the Solving Problems as a Class activity described above. Sometimes the teacher also provided an example of how she would have solved the problem. Tools: Number Talk instructions (1. Individually find the answer with no paper. 2. As a group, discuss the answer and how you found the answer. 3. As a whole class, discuss/show the answer and how to find it.).

Taking Notes. The teacher structured this activity by having students turn to a blank, lined page in their notebooks and talking them through notes that were projected on the SMART board. Usually this involved showing the steps in an algorithm and providing examples for each step. For the fraction operations unit, during note taking, she repeated, and had students repeat, a chant to remember how to convert fractions into common denominators ("if you do it to the bottom, do it to the top, and the denominator's name stays the same"). Tools: flow map (process steps) posted on the bulletin board on the right wall; chant.

Coding Text. The teacher established the participation pattern for this activity. The teacher described as "coding text" her process for students to read and make meaning of information in the workbooks, which each student received as part of the primary instructional resource. She had students "code the text" on the blank lined paper in


their notebooks. She directed students to the page and location on the page in their workbooks from which they would begin reading and taking notes. She initiated "popcorn reading" by reading the first sentence; then she said "popcorn" and a student's name to indicate the next student to read. That student would read a sentence or two and "popcorn" to a different student. This continued until they reached the end of a section or the teacher indicated they should stop. Then Liza prompted students to talk within their table about what they read and what they would likely write in their notes, and then they would proceed to write notes. Coding Text occurred in preparation for an investigation. Tools: student workbooks (from the primary instructional resource).

Investigation. Investigations occurred once or twice a week, and Liza used them to introduce new content. If it was the first day of working on an investigation, this activity always followed Coding Text (see above). The teacher provided background about the investigation and directed students to the appropriate pages in their student workbooks (from the primary instructional resource) and/or provided a copy of pages from the book (if they needed to write on them). She provided initial directions to get table groups started on the investigation and set a timer. Students structured their own work within their table groups as they completed the investigation. While students worked with their table groups, the teacher moved around the room observing the work, answering questions, and providing oral feedback. Sometimes she would stop the group work and provide some feedback to the whole class. Sometimes, when one group finished earlier than the others, Liza would send the students from that table to assist the other groups. Tools: student workbooks (from the primary instructional resource); timer.

In Class Practice. The teacher structured student participation in this activity by providing the tasks and determining how students would be grouped to complete the problems. Students either picked a partner, worked within their table groups, or were assigned to partners or groups (other than their table group). Liza set a timer for how long students would have to work on the problems. After students had time to complete the problems, Liza would initiate the process of solving problems as a class (described above). Tools: tasks (teacher developed).

Ticket Out. At critical junctures during the unit, when students should have mastered an objective, the students completed a Ticket Out at the end of a class period. The teacher


provided a tool she called the Ticket Out Form, and the specific problem for students to solve, which structured student participation in this activity. The teacher wrote the specific Ticket Out problem/task on the SMART board (projected to the front of the room) or projected the problem from a sheet of paper, using the document camera, on the screen on the left wall of the room. She directed students to write their name and the lesson objective on the front of their Ticket Out form, along with the problem from the board (front or left). Then the teacher would set a timer and students would work silently to complete the problem. Finally, students were to self assess their learning and their effort using the pre printed rubrics on the back side of the form. Students would hand the teacher their completed Ticket Out forms on the way out of the room. The teacher graded the Ticket Out problems and returned them the next day (as part of Reviewing Ticket Out Results). Tools: Ticket Out Form.

Reviewing Ticket Out Results. This occurred the day after students had completed a Ticket Out. First, the teacher provided oral feedback about the Ticket Out results for the class overall. Then, the teacher structured the review of the Ticket Out results by providing directions on the board and assigning students to dyads or triads. These groups were usually "mixed" based on the students' performance on the Ticket Out, including one student who did well and one student who did not do well. The teacher verbally established that one student would be acting more as the "teacher" for each group. Once students were grouped, the teacher provided directions on the SMART board to structure their discourse. Sometimes the teacher identified a separate group of students (3 to 6) that needed more intensive support from her and facilitated the review of the Ticket Out results with these students while the remainder of the class was working in dyads or triads. Then the teacher would move around the room while dyads/triads were working, observing what students were doing, providing oral feedback, and answering questions. Tools: Reviewing Ticket Out directions (Compare answers. Who got the question correct? Who got the question incorrect? Were the mistakes big or small? Erase and change answers.). [One day she added another problem to complete that was similar to the original Ticket Out problem.]

Reference Sheet. Students completed a Reference Sheet (a teacher made tool) the day after the end of unit test. The teacher structured participation for students completing the


Reference Sheet. She provided a Reference Sheet with a typical format but unique information for each unit to structure student participation. As in most full class activities, the teacher frequently stopped and prompted students to talk to a partner about what they were writing, etc. Reference Sheets were not graded, but students kept them in their notebooks. The teacher referred back to Reference Sheets from prior units.

Proficiency Poster. Students completed the proficiency poster task once at the beginning of the unit and once at the end of the unit, during the warm up time for the class session. The teacher structured student participation in this activity by providing general directions on the SMART board, writing the question students were supposed to answer on the SMART board, and projecting things to remember as they answered the question. She also provided oral directions. She prompted the students to talk to each other about the things to remember before they started solving the problem and reminded them that this was not a graded activity, but rather a way of checking what they came into the unit knowing or left the unit knowing. Then she set a timer (adding time if needed) and prompted them to start. Students worked independently, writing their answers on a 3 x 5 index card. The next day the teacher posted these index cards on the proficiency poster (with student names on the back and not visible). This provided a visual chart of how students in the class as a whole did on the problem (the distribution of responses into four categories: Unsatisfactory, Partially Proficient, Proficient, and Advanced). Liza used the proficiency poster to collect data about students' understanding of the concepts underlying the essential questions for the unit and for students to show the progress or growth of their understanding across the unit. She also used it to help students align their perception of their current level of proficiency with their actual level of proficiency. Tools: Proficiency Poster directions, screen 1 (1. Get your math binder. 2. Grab one index card from under the SMART board. 3. Write your initials on the back of the index card. 4. Write the date and mod (class session) on the front of the index card. 5. Wait for further instructions.); screen 2 (Answer the following question on your index card individually: How can you prove the product of two fractions? Things to remember: 1. Complete sentences. 2. Examples. 3. Re read your answers to see if they make sense.).


Unit Pre Test. This occurred on the first day of the unit. The teacher structured participation during this activity, reminding students that the pre test was not for a grade but rather to help her structure the learning activity during the unit and determine student seating. During the pre test, Liza intentionally did something other than observing students doing their work, indicating that the participation structure for this activity was different from others. She told students that she would not help them with the pre test problems. She also encouraged students to write notes on the side of the paper by each problem about how difficult the problem was for them, using these prompts: This is easy for me. This is hard for me. I had to take a guess. I estimated. I think I'm going to be good at this. I think I'm going to need help with this. Students worked independently and silently on the tasks. After all students completed the pre test, the teacher had students use clickers to enter their answers for each question. Then she showed how the class answered each question on the SMART board. Liza did not give students an opportunity to change their answers. The teacher collected and corrected the pre tests, indicating which problems were correct, but did not grade the pre tests. The 6th grade math team used pre test results across all of the 6th grade math classes to adjust pacing and activities for the unit. Liza assigned student seats for each unit based on pre test results. She used the pre test results in conjunction with the end of unit test to evaluate student growth across the unit. She also had students review their pre test results at the end of the unit to see the progress in their own learning. Tools: pre test (multiple tasks); verbal prompts regarding notes about each problem.

Summative Quizzes or Tests. These occurred at the end of the unit or at a mid point in the unit. Liza provided oral directions about the quiz/test. She asked students for concerns, questions, and comments before she handed out the quiz/test. She projected testing rules on the SMART board. She also clarified how she could help them during the quiz/test (read a problem). She asked them if they would like to


do a breathing exercise to focus before they started. If they said yes, she facilitated a breathing exercise. She handed out the quiz/test. She provided a pencil sharpener, erasers, and tissues to students during quizzes and tests so they didn't need to get up. Liza indicated the number of points students could earn for each problem next to the problem on the quiz/test. Students worked independently to complete the quiz/test. Some students moved their seats to have adequate distance from other students (some sitting on the floor with lap boards or at a table just outside of the classroom). Liza did not observe student work during the test or quiz. She did come to students who raised their hands and would answer individual student questions. Tools: testing rules (1. Raise your hand if you need anything (we don't want to disrupt other people's testing environment). 2. Eyes on your own paper (don't worry about other people or how fast/slow they are testing; make your own test bubble). 3. No talking (communicate all needs to Mrs. CC).).

Quiz Circles. This occurred the day after or two days after the final test for the unit. Liza structured this activity with a teacher made tool, the Final Assessment Overview sheet. She described the purpose of this activity to students this way: "Remember the purpose of this is so you're not just getting something back from a teacher and saying hey ok that's the grade I got. You're actually going through it and seeing what happened with each problem." She provided oral directions and indicated that students could talk to one another about individual items but not about their overall scores. Students worked individually to fill in the Final Assessment Overview. They talked with one another about whether their mistakes were simple or whether they needed more help. Students only completed the back side (correcting problems they missed) if they wanted to improve their grade. Tools: Final Assessment Overview.

Typical daily pattern. The math instructional period in this classroom was approximately 90 minutes each day. However, because the observed class session was the first of the day, some of that time (10 to 15 minutes) involved all school announcements. The learning activity during most daily math class sessions followed a few common patterns, depending on whether or not homework was due. The class


session always started with a warm up and involved some in class mathematics activity (investigation, reviewing Ticket Out results, or in class practice). One or two days a week, the class session would end with a Ticket Out. Table 4.8 includes descriptions of the typical daily learning activity patterns. On days when students were taking a quiz or a test, these typical patterns did not hold. Students still completed a warm up, but if the test or quiz would take a long time, it followed the warm up and took the rest of the class period. If the test or quiz was short, Liza engaged students in additional learning activity before or after the quiz, such as in class practice for the quiz or note taking for the next learning topic.

Table 4.8 Typical Daily Activity Pattern

Monday | Tuesday (homework due) | Wednesday | Thursday (homework due) | Friday (Math Mates due)
Warm up | Warm up | Warm up | Warm up | Warm up
Number Talk | Homework Circles | Number Talk | Homework Circles | Grading Hawk Math (Math Mates)
Homework Challenge | Homework Challenge | Homework Challenge
Coding Text/Investigation | Note Taking | Reviewing Ticket Out Results | In Class Practice
Ticket Out

The first day of the unit (when the students took a pre test) and the day following the end of unit test (when students reviewed their test results and completed a Reference Sheet) also did not follow the typical daily patterns. On these days, students still started with a warm up activity; however, the pre test, Reference Sheet activity, and/or reviewing test results took the place of some other in class math practice.


Unit level activity structure. The unit level activity structure also followed a general pattern. However, student learning data was used to modify this pattern throughout the unit, both at the 6th grade math team level (all of the teachers in the school who taught 6th grade math), where adjustments to the overall pacing of the unit across all classrooms were made, and within the individual classroom, where Liza made adjustments to full class activity, group level activity, or the extra support provided to individual students. The typical activity structure for a unit illustrates how formative assessment practices occurred at different levels. The basic activity structure for a unit included the following:

Pre Test – A pre test was administered the day before the start of the unit; this assessment included similar or some of the same problems as the end of unit test. After the teacher scored the pre test, the 6th grade math team considered pre unit test results across all classrooms to adjust unit pacing and/or major unit activities. Within the classroom, the teacher used the pre test results to organize student seating by table groups.

Proficiency Poster – Once at the beginning and once at the end of the unit, students completed a problem (on a 3 x 5 card) related to one of the big ideas for the unit. Liza posted student work (with names hidden). Liza and her students looked for changes in individual and overall class performance on this problem across the two administrations. Liza scored the responses but did not include the scores in students' grades.

Homework – Students completed assignments specifically related to unit content two days per week. The teacher frequently provided oral feedback to the class about


the homework and sometimes adjusted activity for the next day based on student homework results. She graded and returned homework to students, usually within a day of completion.

Investigations – The class engaged in at least one (sometimes two) investigation each week, and sometimes the investigations took more than one class session. The teacher frequently adjusted activity both in the moment and for the next day based on what she observed as students worked on investigations.

Practice problems – Some days, students were given additional in class practice problems, usually completed in groups, which reinforced the learning objectives for the day.

Ticket Out – Students completed these individually at the end of a class session several times during the observed unit. The timing corresponded to critical junctures during the unit when students should have mastered an objective. The teacher made group level or individual level adjustments based on the Ticket Out results – giving groups of students additional support, providing extra practice, working with individual students more intensively during warm up, and/or inviting students to attend after school tutorials or come in during lunch for additional assistance. She sometimes adjusted the plan for the entire class for the next day based on the results of a Ticket Out.

Quizzes/Unit Tests – Liza administered formal tests at either the mid point or the end of the unit to check on mastery of unit learning goals. She analyzed the results at the item level, identifying students who had big or small (possibly computational)


errors for each item. If the quiz was during the unit, the teacher adjusted activity the next day for the whole class, groups of students, or individual students. If the quiz/test was at the end of the unit, she provided additional support to the students who needed it as part of a learning activity in the next unit – supplemental problems, or an additional session during the after school tutorial time or lunch.

Year long activity structure. The full school year activity structure was evident during the initial observations (the first few weeks of the school year) and during the two units that were observed (seven weeks of total instructional time). These structures included the following:

Students progressed through a series of instructional units that lasted from 2 to 5 weeks each.

Weekly Hawk Math (Math Mates) assignments provided information about student learning related to targets that cut across the entire year.

Students completed a Progress Monitoring Assessment (administered 4 or 5 times during the year) as a mechanism for gathering information about learning in relationship to targets for the entire year. Students set goals and tracked their progress using the results of this assessment. The teacher communicated the results of this assessment to parents.

The district administered Interim/Benchmark Assessments 3 times during the year. Liza did not use the results of these assessments.

Students received grades at a mid point during the semester and at the end of the semester. Grades described performance at that point in time rather than by unit or


at a natural content break. Semesters and units did not coincide; one of the units observed cut across the semester break.

After school tutorials were available to students three days per week throughout the year, with a different teacher offering the session each day. Liza offered tutorials on Tuesdays. Sometimes she invited individual students to attend based on their performance on assignments or Tickets Out. Students who did poorly on a quiz or an end of unit test were required to attend.

The 6th grade math team (all of the teachers in the school who taught 6th grade math) collaboratively determined what units were taught, the sequence of the units, and the duration of each unit based on their review of the Colorado Academic Standards/Common Core State Standards. The team also jointly planned which investigations, from the primary instructional resource, they would use and determined what the pre and post unit tests would be for each unit (which were common across all classrooms). This team met weekly to check on progress across classrooms and to make adjustments to the pacing of the unit. They compared student results on the pre test, quizzes taken during the unit, end of unit tests, and the progress monitoring assessment administered 4 to 5 times during the year. The daily structure of learning activity, however, was not planned across classrooms, and most of the additional materials created and used by Liza were not used by other members of the 6th grade math team.

Additional student roles. Liza expected her students to take responsibility for many different aspects of classroom activity across the school year, and they did.


199 These roles were in addition to their participation in the typical learning activities described above. Specific examples include the following: Tardiness: When students came in late, they took a conduct card before they sat down. Collecting assignments and passing out materials and supplies: For every unit, each table identified a “table helper” who was responsible for turning in homework and getting materials or supplies for the students at the table during the unit. These table helpers were also responsible for quick transitions between activities. If they did their jobs more quickly than the amount of time allocated (usually 2 minutes), Liza added extra time to the PAT free time for the whole class. Liza frequently reminded the class to thank the students who did these tasks for their table groups. Supporting absent students: When a student was absent on a day in which information was shared by the teacher, another student from the absent student’s table was tasked with taking a double set of notes for the absent student. Student choices: The teacher frequently gave students choices about how the class session would proceed. This included both learning activity (e.g. how many problems they would practice; if examples would be easy, medium or hard; and who would show the work, other students or the teacher) and non learning activity (e.g., how they would spend their in class free time). The teacher often asked students to vote. For example, the teacher asked, “How many problems would you like to see solved on the board? Indicate by raising number of fingers 1, 2, or 3.” Then she facilitated different students solving the number of problems indicated.


200 Evidence related to the propositions. The above description of the unit learning context for Liza’s classroom provides substantial evidence in support of the proposition that key features of the classroom learning context are perceptible and can be described within the following categories: the object or focus of the learning activity, the tools (physical and semiotic) that mediate learning, who participates, how they participate (roles and responsibilities), and the rules or norms guiding participation. I described the social learning context of Liza’s classroom (at the unit level) using these key features. I explore below the usefulness of these different categories for characterizing the social context of Liza’s classroom and how that social context influences formative assessment practice. How the Learning Activity Context Influences Formative Assessment Practice What was the relationship between the social context of Liza’s classroom (e.g. tools, participation, rules/norms, and student and teacher roles) and the formative assessment practices that occurred? The next stage of my analysis focused on the individual learning activity context and formative assessment episodes occurring during different learning activities. First I considered how frequently formative assessment episodes occurred within different learning activities. In Liza’s classroom, it was possible to discern “typical” learning activities that reoccurred across more than one class session. These included the following: Warm Up, Homework Circles, Checking Hawk Math, Homework Challenge, Taking Notes/Coding Text, Investigation, In Class Practice/Partner Switch, Ticket Out, and Reviewing Ticket Out Results. Some learning activities occurred only once during the observed


201 instructional unit, including the following: Proficiency Poster, Unit Pre Test, Summative Quizzes or Tests, Reference Sheet, and Quiz Circles. Formative assessment episodes occurred during most of these learning activities. Although it wasn’t an in class learning activity, when the formative assessment episode involved Liza changing her plans for the next class session, I identified the activity as “planning.” In table 4.9, I provided two metrics indicating the frequency of the occurrence of formative assessment episodes during each type of learning activity. The average number of formative assessment episodes during the activity is a sum of all types of formative assessment episodes occurring during that type of activity divided by the number of times that activity occurred during the unit. The percentage of formative assessment episodes occurring during the activity is the sum of formative assessment episodes occurring during that type of activity divided by the total number of formative assessment episodes. The majority of formative assessment episodes (74%) occurred during a few types of learning activities across multiple class sessions. These included the following: investigations, warm up, in class practice/partner switch, and homework circles. While reviewing Ticket Out results occurred twice during the observed unit, the audio recording malfunctioned during the second reviewing Ticket Out activity, thus table 4.9 only included the formative assessment episodes occurring during one of these activities. Seven formative assessment episodes occurred during that single reviewing Ticket Out results activity.
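The two metrics reported in Table 4.9 can be restated compactly. This is only a sketch of the calculation already described above; the symbols e_a, n_a, and E are introduced here for convenience and do not appear in the original analysis.

\[
\text{Average number of FAE for activity } a = \frac{e_a}{n_a},
\qquad
\text{Percent of FAE for activity } a = 100 \times \frac{e_a}{E},
\]

where e_a is the number of formative assessment episodes observed across all occurrences of learning activity type a during the unit, n_a is the number of times that activity occurred during the unit, and E is the total number of formative assessment episodes identified during the unit.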


Table 4.9 Formative Assessment Episode Frequency by Learning Activity

Learning Activity | Average # of FAE occurring during activity | Percent of FAE occurring during activity
Checking Hawk Math | 3.0 | 2.4
Homework Challenge | 2.3 | 2.8
Homework Circles | 6.3 | 10.1
In Class Practice and Partner Switch | 14.3 | 20.5
Investigation | 18.7 | 22.6
Planning | 1.1 | 3.2
Pre Test (for unit) | 5 | 2.0
Proficiency Poster | 2.5 | 2.0
Summative Quiz | 4.3 | 5.2
Quiz Circles** | 5 | 0.4
Reviewing Ticket Out Results | 4 | 3.2
Taking Notes/Coding Text | 1 | 0.8
Ticket Out | 1 | 0.4
Warm Up/Number Talk | 10 | 20.2

*The audio recording failed for approximately 12 minutes during one of the days when students were reviewing Ticket Out results. Observation notes indicated a number of interactions but no audio was available to verify. The frequencies provided are likely an undercount.
**The audio recording failed for all of the Quiz Circles; video and observation notes indicated some formative assessment episodes occurred but this is likely an undercount.

In response to the proposition that the classroom social learning context influences how teachers and students use data about learning, I provide additional evidence organized by the components of the social learning context at the activity level. For each, I describe the component of the social learning context for the learning activities happening in Liza’s classroom, how that component of the social context related to the other components, and the relationship of that component to the formative assessment episodes that occurred in this classroom. In the subsection on tools, I provide evidence related to the proposition that the features of cultural tools used during formative assessment episodes and how they are used illuminate critical


features of formative assessment practices and the proposition that information about learning (including mistakes or misunderstandings) becomes a “social object” that is valued both in terms of how it is described and how it is used.
Object. Were the objects of each learning activity clear? To what degree did the object of each learning activity align with the unit learning goals (referred to by this teacher as the essential questions) or daily learning objectives? Did the teacher and students have a shared object for the learning activity in which they engaged? The answers to these questions are critical to understanding how the object of learning activity influenced the formative assessment episodes occurring during each learning activity. The object of learning activity and the unit learning goals/daily objectives being the same is a necessary condition for data collected during learning activity to align with the unit learning goals/lesson objectives. Data collected (informally or formally) as part of the learning activity being aligned with an explicit learning target (such as the unit learning goals and/or lesson objectives) is also a necessary condition for formative assessment episodes to occur. Thus, these questions frame my analysis of the object of learning activity occurring in Liza’s classroom.
Were the objects of learning activity clear and aligned with unit learning goals and/or daily learning objectives? Liza posted a learning objective for each class session on a dry erase board in the back of the room (daily learning objectives). In all cases, the learning objectives were a subcomponent of or a step towards one of the major unit learning goals. Also, Liza frequently explained the object of different learning activities


204 in reference to the essential questions for the unit and the daily objectives. For example, on the day when the daily objective was students will develop strategies for calculating sums and differences of fractions Liza explained the objective of engaging in one activity, an investigation, in the following way. “What we have going on is the objective at hand uses the language of developing strategies. So by the time you leave here today are you going to be 100% perfect with everything adding and subtracting fractions? (several students say “no”). Probably not; you might still be developing it. . we will have some concrete ideas or examples but we might not be 100% perfect and that’s ok Essential questions that are up in the front . this weeks’ essential question is right here, and it basically says add and subtract fractions. You know essential questions aren't always like just do it and be done with it so we have to kind of demonstrate this in a couple ways. So, how I specifically tell you on the essential question is benchmarks kind of like your homework from Monday, and also with pictures or models or some sort of construct we learn today. . You can use either one.” In general, the learning activities during any given class session were closely aligned with the posted daily objectives. For example, on the day when the daily objective was students will represent the product of fractions visually, most of the class time was devoted to students engaging in an investigation that required them to represent the product of two fractions by dividing a brownie pan into segments. The same daily objective was on the board the next day when students completed the investigation, and when the homework and in class practice involved similar tasks. There were a few exceptions to this general alignment between daily objectives and the daily learning activities. A couple of reoccurring exceptions were the weekly Hawk Math assignments and the Homework Challenge, an activity which occurred three times per week. Liza assigned weekly Hawk Math assignments on Monday for student to


205 complete by Friday each week. They included a number of tasks not aligned with the current unit learning goals. During one post observation interview, Liza explained the purpose of the weekly Hawk Math assignments this way: “It's spiraling. . so number 24 is the same for 4 sheets in a row. In a term (8 weeks long) the first 4 sheets of the term, there is similar to the 2nd four sheets of the term, so number 24 on the first 4 sheets is just is the angle acute, right or obtuse and on these four sheets (the second part of a term), they have to actually measure the angle. So they can track sort of concepts that way. I use it a lot. I go through their binder and find things to pull before like random questions on their homework or for like that's why we did the subtracting decimals on the warm up so often because a lot of kids were missing it. The concepts follow each other.” The object of the Hawk Math assignments was not the daily learning objective. They provided data about a broader set of learning objectives, cutting across the entire year, and informing a different level of formative uses of student learning data – instructional decisions across the year rather than just within the given unit. Similarly, the Homework Challenge activity which occurred every Tuesday, Thursday and Friday also had a purpose different from the daily learning objectives on those days. The learning objectives of this activity included students: 1) converting a fraction to a decimal, 2) converting a decimal to a percent, and 3) finding the average of three numbers. While these were not the focus of the current instructional unit, these were part of the learning objectives for the entire year. Similar to the Hawk Math assignments, this activity provided data about student learning to inform instructional decisions that went beyond the current unit. To what degree did the teacher and students share the object for different learning activities? From Liza’s perspective, the object of different learning activities was


206 usually very clear, and was provided in writing and/or explained orally to the students. Because we did not ask students about their perceptions of the purpose of each learning activity, I don’t have direct evidence about the degree to which Liza and her students shared the objective for different learning activities. However, students’ engagement and actions during different learning activities provide indirect evidence regarding their object. All students engaged in almost every learning activity during the observed instructional unit. In addition, student actions almost always aligned with the object of class activity as represented by the teacher. Students seldom deviated from what the teacher had asked them to do. If they did, even for a few minutes, Liza immediately “got them back on track.” Students were also able to evaluate their own learning in reference to the learning objective as part of a couple of different activities – the unit pre test, the proficiency poster, and the Ticket Out. With some small exceptions, the object of classroom learning activities and the unit learning goals and or lesson objectives were the same. Even in the case of a consistent exception, the weekly Hawk Math assignments, the teacher believed the object of the activity aligned with learning goals that cut across the entire school year. Indirect evidence supports the idea that students shared the teacher’s object for all class learning activities. This created a context within which data collected as part of classroom learning activities had the potential to align with a clearly defined learning target. In the case of this classroom, that was usually the daily objective.


Tools. Liza used a variety of cultural tools to mediate learning. Most of the reoccurring learning activities in Liza’s classroom involved students interacting with at least one and sometimes multiple cultural tools. Tools used during learning activities during which a high frequency of formative assessment episodes occurred supported participation structures that created the context for formative data use. The tasks Liza included in assignments and assessments determined the learning targets about which she could collect data. The targets to which the tasks aligned also determined the level of instructional decision that could make use of the resulting student learning data. I focused my analysis on two overlapping types of tools that had the greatest potential for influencing formative assessment practices: 1) teacher created tools used during the learning activities when a high frequency of formative assessment episodes occurred; and 2) tasks that were part of instructional materials (created by the teacher, or provided by the district/school) and used in the collection of student learning data (e.g. the mathematical tasks included in assignments and assessments). Below I provide evidence related to the use of these types of tools in Liza’s classroom.
Tools associated with frequent formative assessment episodes. I identified several typical learning activities within which a high frequency of formative assessment episodes occurred, including the following: warm ups/number talk, homework circles, in class practice/partner switch, investigations, and reviewing Ticket Out results. One tool was common to all of these learning activities, and to every learning activity that involved students working in a small group (2-4 people): a timer. Whenever students


had a group task, they also had a specific amount of time to accomplish that task, and the timer was visible to all of the students (projected on the SMART board). This may have contributed to students’ high level of focus on the task, something that created a context within which Liza could move around the room informally collecting data about student learning and using it formatively. Next, I describe the other tools associated with each of these learning activities and describe how they were associated with formative assessment episodes occurring during the activities.
Warm up. The tools used as part of the standard warm up activities (not number talk) included a warm up form and warm up problems (posted on the SMART board). The warm up form was a full page sheet including five boxes with the following titles: algebra, computation, probability and statistics, number sense, and geometry and measurement. Liza posted the warm up problems on the board using a duplicate form. Students copied the warm up problems from the board into the appropriate box on their copy of the warm up form, using a new warm up form each time. The form and the problems associated with the different boxes on the form connected the math tasks with a specific mathematics topic area, providing support for students to make a connection between the activity and the learning target. The warm up form and warm up problems provided critical support for the participation structure of the warm up learning activity. They gave students something they could use independently, without further direction from the teacher, to engage in doing mathematics, making it possible for Liza to collect informal data about individual student learning while other students were productively engaged in doing mathematics.


Number talk. The tools used as part of Number Talk, when it was the warm up activity, included Number Talk instructions and a Number Talk problem. The Number Talk instructions Liza posted on the SMART board included the following: 1) Individually, find the answer with no paper. 2) As a group, discuss the answer and how you found the answer. 3) As a whole class, discuss/show the answer and how to find it. One example of a Number Talk problem used during the focus unit was 100 – 31.8. Similar to the warm up form and problems, these two tools – the Number Talk instructions and problem – gave students something they could use independently, without further direction from the teacher, to begin to engage in doing mathematics. The second step in the Number Talk instructions created a participation structure that allowed the teacher to observe student conversations about mathematics, informally collect student learning data, and use that learning data formatively. The third step in the Number Talk instructions created the context for Liza to use student work as a social object. However, the Number Talk problems, as the above example illustrates, often addressed learning objectives from a prior unit.
Homework circles. The tools used during the homework circles activity included the homework circle directions (posted on the SMART board) and the homework assignment from the day before. The homework circles directions included the following prompts: “I don’t understand.” “What did you get for the first question?” “How did you get that answer?” “Let me help you solve this problem.” “Raise your hands when your group is finished.” The teacher also set a timer at the beginning of homework circles for about 5 minutes, which students could see as time progressed. The norms or rules related to


homework circles included that students would come to class with the homework assignment from the day before completed, engage in homework circles with their pre assigned table group, and not just “give” each other the answers but work through them collaboratively. The combination of the rules associated with homework circles, the directions or prompts for homework circles, and the homework assignments all supported a participation structure in which students engaged in talk about mathematics without direct involvement of the teacher. Again, this created a context within which formative assessment episodes could occur.
In class practice/partner switch. The tools Liza used during in class practice were math tasks drawn from school/district provided instructional resources or those she developed. Liza used several different protocols to structure student engagement in completing in class practice tasks, including the following: students working independently, students working with a partner they selected, and students working with a partner Liza assigned. The tasks provided the focus for student engagement in mathematics, again creating a structure within which formative assessment episodes could and frequently did occur.
Investigations. The tools used during the investigations learning activity included the district provided instructional materials (the Bits and Pieces II student workbooks), a standard process for students “coding” text from this resource, and the daily learning objective. For the two sections of these workbooks used during the focus instructional unit, students used both the background text and a series of “tasks” provided for each investigation. Students “coded the text” from the instructional resource to build their


background knowledge about the investigation. Students wrote the learning objective, posted by Liza on the SMART board in the front of the room, in their notes, thus connecting the learning activity to the learning targets. Liza used excerpts from the workbooks to introduce students to new learning content without providing that content through lecture. This supported a participation structure that involved students engaging in solving mathematical tasks, talking about mathematical tasks, and explaining mathematical concepts while leaving the teacher available to interact with small groups of students. Again, these tools supported a participation structure within which formative assessment episodes could and frequently did occur.
Reviewing Ticket Out Results. The tools used as part of the Reviewing Ticket Out Results activity included the Ticket Out Form, the Ticket Out problem, and the Reviewing Ticket Out Directions. Sometimes Liza also proposed an additional task for students to complete in pairs after they had reviewed their Ticket Out results. The Ticket Out Forms were 1/3 page white sheets with boxes for the student to write the learning objective, and the Ticket Out task and his/her response. Examples of Ticket Out tasks used during this unit included and Liza posted the Reviewing Ticket Out Directions on the SMART board; they included the following: Compare answers. Who got the question correct? Who got the question incorrect? Were the mistakes big or small? Erase and change answers. Liza communicated the intended alignment of the Ticket Out tasks to the learning objective to students through the structure of the Ticket Out Form. Similar to the homework circles activity, the students’ completed Ticket Out Forms and the


Reviewing Ticket Out Directions supported a participation structure within which students worked in groups talking about mathematics. This allowed Liza to interact with groups of students, informally collecting data and using it formatively. These tools supported a participation structure within which formative assessment episodes could and frequently did occur.
Tools supporting aligned data collection. One critical element of a formative assessment episode is the collection of student learning data specifically aligned to a learning target. In the section above focused on data collection as part of formative assessment episodes, I summarize my analysis of the alignment between the tasks used in data collection and the learning target about which student learning data was being collected for each formative assessment episode (see table 4.4). In general, the tasks Liza used aligned with the learning targets. The learning targets on which the tasks focused illuminated the learning on which the associated formative assessment episodes had the potential to focus. They also illuminated the level of instructional decisions (a use of student learning data) that the teacher could make based on the data collected as part of students completing the task. In the majority of cases, the tasks focused on the daily learning objective, creating the potential for uses of student learning data during the current class session or the next class session. Some tasks (such as those included in the pre unit assessment and the proficiency poster) focused more broadly on one of the unit learning goals, creating the potential for uses of student learning data at a level beyond the current class session, or even the next class session, but within the current unit. Finally, some tasks


focused on learning objectives that went beyond the current unit, creating the potential for uses of student learning data that went beyond the current unit.
Tools illuminating critical features of formative assessment practices. The tools used during learning activities when a high frequency of formative assessment episodes occurred all supported participation structures that created a context within which certain types of formative data use could occur. Specifically, these participation structures supported the teacher informally collecting data about student learning and providing oral feedback or engaging students in formative questioning. The tasks used in the collection of student learning data (as part of assignments and assessments) determined the learning focus of formative assessment episodes. Most of the tasks used supported the collection of data in Liza’s classroom and aligned with the daily learning objectives and/or one of the major unit learning goals. Others aligned with learning objectives from prior units and/or from across the school year. The targets to which the tasks aligned determined the level of instructional decision that could make use of the resulting student learning data.
Using student work as a social object. Liza frequently used student work as a social object in support of formative assessment episodes. Social objects are reflective talk among a group of people engaged in a particular activity that they can refer back to later (Jordan & Putz, 2004). Student work and oral feedback about student work become a social object when the work is used by students other than the one who created it as something that the teacher and other students can refer back to as


evidence of some aspect of the learning. When used as a social object, student work becomes a tool that mediates learning and formative assessment practice.
In the Unit Learning Context overview, I described the participation structure that involved using student work as a social object as “Solving Problems as a Class.” Solving problems as a class was part of the following learning activities: Warm Up/Number Talk, In Class Practice, Investigations, and Reviewing Ticket Out Results. Towards the end of the time set aside for each of these activities, after students had time to work on mathematical tasks independently or as part of a small group, Liza would ask a student or a group of students to show their work on a task on the SMART board in the front of the room. Sometimes the student or group would be a volunteer; otherwise, Liza would just pick someone. That student or group would then solve the task on the board, showing his/her/their work. Then Liza would ask if the rest of the students agreed or disagreed with the work on the board. If any students disagreed, she would have the student who initially showed his/her work identify one of the disagreeing students to “show a different way.” This frequently continued until 3 or even 4 students had demonstrated their work on the SMART board. Then Liza would compare the different approaches, sometimes asking students to vote on which one they agreed with most. Typically, she would also provide oral feedback to the whole class about what she saw in the different approaches. Usually, she referred to each of the student work examples independently from the student who created it. For example, she might have different students use different colored markers to write up their work, and then would


215 refer to the different examples by the color of marker used as “the red way,” or “the blue way.” This was how Liza used student work as a social object. Liza used student work as a social object almost every class session, frequently as the “close” of various learning activities. This occurred on average 1.5 times per class session. While she frequently used student work as a social object, this practice represented a relatively small number of all of the formative assessment episodes occurring during each class session (1.5: 28), or during a particular learning activity (1: 8). For example, Liza used student work as a social object once or twice during a warm up activity while she provided individual students oral feedback 7 or 8 times during the same activity. However, the impact of Liza using student work as a social object may have been greater than other formative assessment episodes occurring during the same learning activity. When Liza used student work as a social object, it was a formative assessment episode that engaged the whole class rather than one, or a small group (2 4), of students. She always combined using student work as a social object with providing oral feedback to the whole class about a math task or series of tasks in which they had engaged independently or as part of a small group. The many instances of student work being used as a social object in Liza’s classroom provides evidence in support of the proposition that in a classroom where formative assessment practices are evident, information about learning (including mistakes or misunderstandings) becomes a “social object” that is valued both in terms of how it is described and how it is used, even though there were a number of examples of


formative assessment episodes that did not include student work being used as a social object.
Participation structures (who participates and how). My analysis of the participation structures in Liza’s classroom responds to the proposition that they influence how teachers and students use student learning data. The evidence I report below related to participation structures includes a description of the participation structures for different learning activities, evidence regarding the relationship between participation structures and formative assessment episodes, and a possible extension to the propositions associated with my second research question. Table 4.7, in the Unit Learning Context overview, includes a description of the participation structures and the tools used as part of each type of learning activity occurring in Liza’s classroom. While I described them separately, some of the typical learning activities were overlapping. For example, Number Talk was actually a type of warm up activity. The learning activity I labeled “solving problems as a class” was part of the following learning activities: warm up/Number Talk, investigations, in class practice, and Reviewing Ticket Out Results. Some learning activities only occurred once during each instructional unit, including Unit Pre Test, Quiz Circles, and Reference Sheet. The Proficiency Poster occurred twice. Students took two summative quizzes. The remaining learning activities were reoccurring and made up the majority of class time. In Table 4.10, I describe the participation structures for the learning activities that included the majority of formative assessment episodes or had a relatively high average number of


217 formative assessment episodes per occurrence of the learning activity. For each activity, I have included: The nature of the student role(s) including whether the roles included students completing a math task(s) or talking about math task(s) they had already completed; How students participated individually, in small groups, or as a full class (some activities included all three); The nature of the teacher’s role(s), whether it included the teacher initiating interactions with individuals or small groups, only responding when individual students or groups of students asked questions, or providing content (direct instruction); If the activity included the teacher formally collecting data (i.e. students turning in a product); and The average number of formative assessment episodes occurring during that type of learning activity. As illustrated in table 4.10, most of the learning activities that made up each class session involved students working on or talking with peers about mathematics tasks. Students spent the vast majority of in class time “doing mathematics.” With few exceptions, while the students were doing mathematics, Liza’s primary role was to move around the classroom initiating interactions with individual students or groups of students about the mathematical tasks, informally collecting data about their learning.


218 Table 4.10 Learning Activity and Participation Structure Learning Activity Student Roles How Students Participate Teacher Roles Formal Data Collection Avg. # of FAEs Do math tasks Talk about math tasks Individual Small Group Full Class Student Interaction Observation Primarily Respond to Questions Present Content Checking Hawk Math x x x 3.0 Homework Challenge X x X x x x 2.3 Homework Circles x x x X x 6.3 In Class Practice X x x x x X 14.3 Investigation X x x x X 18.7 Pre Test X x x x 5 Proficiency Poster X x x x 2.5 Quiz(es) X x x x 4.3 Quiz Circles X x x X x x 5 Reviewing Ticket Out Results X x x x X 4 Taking Notes/ Coding Text X x x x x 1 Ticket Out X x x x 1 Warm Up/ Number Talk X x x x x X 10 During both the pre and post unit interview, Liza explained that she made a distinction between when students were completing tasks with assistance for no grade, and when they were completing tasks without assistance (for a grade). When students were “doing mathematics” for a grade, Liza only responded to questions; she did not observe and initiate interactions with students while they were working. Thus, the data collected represented what mathematics students could do without assistance.


The few learning activities that involved the teacher directly providing information to the students – note taking and coding text – also required students to do more than sit and listen. During note taking, students not only captured notes in their notebooks about what the teacher was writing on the SMART board, but they also solved example problems to include in their notes. During coding the text, students shared responsibility for reading relevant text out loud to the entire class, decided what they would take notes about from the text, and shared these notes with their peers. However, Liza seldom initiated interactions with students during these learning activities. She almost never collected data as part of these activities. The fact that students were actively doing mathematics for most of their time in this classroom created a significant amount of time when formative assessment episodes could occur. The role Liza most often assumed while students were actively doing mathematics was to observe student work and initiate interactions with individuals or groups of students, which allowed her to informally collect student learning data, creating an additional condition for formative assessment episodes to occur. Collectively, the learning activities during which student roles included completing math tasks and/or talking about math tasks, and the teacher role included initiating interactions and observing student work or talk, had the highest average occurrences of formative assessment episodes. These learning activities included the following: Warm Ups/Number Talk, Homework Circles, In Class Practice/Partner Switch, Investigations, and Reviewing Ticket Out Results. The uses of student learning


information associated with these activities included the teacher providing oral feedback to an individual or small group of students, and/or using questions to scaffold the next steps. Each of these activities frequently ended with student work being used as a social object, full class interaction about the work, and Liza providing oral feedback to the entire class. One additional learning activity that did not involve the teacher initiating interactions with students, but did have formative assessment episodes associated with it, was the teacher’s planning or adjusting instruction based on student learning information. These formative assessment episodes included the teacher using student learning data to make adjustments to instruction for the next day, determine how to group students for learning activities, or identify students for a more intense intervention (during class or outside of class time). The above described relationship between the participation structure of reoccurring learning activities and the frequency of formative assessment episodes provides evidence in support of the proposition that at least one component of the classroom social learning context, the participation structure, influences how data about learning is used by the teachers. The evidence also suggests that learning activities that include the following participation structures – student roles of completing a math task(s) and/or discussing a math task(s), and the teacher role of initiating interaction with the students about a math task(s) or their discussion of a math task(s) – are associated with an increased frequency of formative assessment episodes.


Norms and rules. Liza clearly established and consistently applied norms and rules related to student behavior and homework that created conditions within which formative assessment episodes could occur. These norms and rules made it possible for Liza to have small group or individual student interactions without interruption while other students engaged in mathematics independently or with their peers. She did not have to use lecture or direct instruction to keep the class time focused on mathematics, and she used these strategies sparingly. Liza’s behavioral norms included the following: do not “disrupt one another’s learning” but rather support one another’s learning; focus group work on the mathematics tasks, not on other topics; and when asked to help a peer, do so immediately without question. Small deviations from these norms brought immediate consequences (conduct cards, detention, and loss of free time). Adherence to these norms brought frequent praise and rewards (additional free time). Rules or norms related to homework completion also established conditions within which formative assessment episodes could occur. Students had homework assignments due three days each week. Liza identified students who did not complete their homework assignments on those days and marked them down in the grade book. She also frequently required the student to call home to notify a parent or guardian that he/she had not completed his/her homework. Finally, the homework completion rate, calculated three days per week for the class as a whole, was one determiner of the preferred activity time that students earned each day. Student homework completion


rates were very high most days; usually at most one or two students failed to complete their homework. This meant that most students had what they needed (completed homework tasks) to participate in the homework circles and homework challenge activities.
The rules and norms evident in Liza’s classroom didn’t cause formative assessment episodes. However, by supporting participation structures within which formative assessment episodes could occur, the rules and norms created a context within which formative assessment episodes frequently did occur. Rules and norms influenced formative assessment in Liza’s classroom indirectly, through the participation structures of learning activities.
Summary of evidence regarding how the social context influenced formative assessment practice. The social context of Liza’s classroom supported the formative assessment episodes that occurred within it. I identified and described in specific detail how each component of the social context interacted with the formative assessment practice in which Liza and her students engaged. The evidence I presented does not specify whether Liza constructed a social context to support an instructional approach with formative assessment as a fundamental constituent, or if the social context she constructed created conditions within which formative assessment could flourish. However, it does establish the importance of considering the interaction between the social context of the classroom (and each component of that context) and the formative assessment practices occurring within the classroom. The evidence from Liza’s


classroom also established a relationship between the social context of the classroom and the specific types of data use in which Liza engaged.
Several characteristics of the social learning context of Liza’s classroom supported specific types of data use by Liza and her students. Liza was clear regarding the object for all learning activity; she described the object as targets for student learning, and in most cases shared the object with her students. Student engagement in all learning activity was consistent with them sharing the object of that activity. The consistent and shared object of activity focused on student learning established a context within which data collected during that activity aligned with targets for student learning. The tasks that Liza used to collect data about student learning also aligned with the learning targets, further supporting one condition for formative assessment episodes to occur – data collection aligned to learning targets. The cultural tools used, the student and teacher roles during various activities, and the norms and rules related to behavior and homework completion came together to establish a context within which students were consistently and productively engaged in doing mathematics. Liza and her students used a variety of cultural tools to structure student participation in a learning activity that provided tasks for students to do, connected the tasks to learning targets, and established protocols for students to use as they interacted with one another about mathematics. These tools supported students’ completing math tasks and talking about math tasks in small groups without direct teacher supervision. The behavioral rules and norms Liza established and strictly enforced maintained students’ focus on each learning activity, further supporting


224 students’ remaining productive without direct teacher supervision. Finally, Liza structured most of students’ time around them working in small groups to complete or talk about math tasks. The tools, norms and rules, and resulting participating structures created a context within which Liza could informally collect data about the learning of individual or small groups of students to provide oral feedback and engage students in formative questioning. These were the most common types of formative data use in Liza’s classroom and Liza spent most of her time during most class sessions engaged in them. These same tools, norms and rules, and participation structures created a context within which Liza could, and frequently did, use student learning data to activate students as resources for one another. The social context of Liza’s classroom created the conditions for students to productively complete math tasks and talk about math tasks in pairs. This made it possible for her to select students to work together based on their performance on a prior task – such as a Ticket Out, provide directions for how they should work together (including differentiated roles), and for them to learn with and from each other about how to correct mistakes or misunderstandings related to valued learning objectives. The social context of her classroom reflected Liza’s approach to teaching. It established conditions that allowed her to continuously make adjustments to the learning experiences of individual students and small groups of students based on student learning data. It allowed both Liza and other students to scaffold students’ learning. Liza may have constructed the social context of her classroom based on an


225 instructional approach that incorporated formative assessment practice as an essential component. The evidence from Liza’s classroom supported also the other propositions associated with my second research question. The key features of the classroom social learning context identified in my theoretical framework were perceptible and I was able to describe them at both the instructional unit and learning activity level. The cultural tools used in Liza’s classroom illuminated key features of her formative assessment practices. The learning targets that were the focus of the tasks Liza used determined the scale or level of her use of the results. Tasks focused on the daily objective resulted in data that could be, and frequently was, used by Liza to make adjustments during the current class session. Tasks focused more broadly – on a major unit learning goal resulted in data used to make instructional adjustments across the entire unit, such as when Liza used the results of the unit pre assessment to adjust the pacing for the unit. Finally, how Liza used student work as a social object during a variety of different types of learning activities (including students sharing correct and incorrect responses to tasks) elucidated that Liza used all kinds of student work as part of her formative assessment practice. This also made it possible for Liza to provide oral feedback to the whole class about misconceptions or misunderstandings evident in one student’s work.


Chapter V
Case Report Two
The first thing I noticed about Kari’s classroom was the physical arrangement of the space. Getting to Kari’s classroom required walking through one of the two other 5th grade classrooms. Her classroom didn’t have a door; it shared an open corner with the other 5th grade classrooms. The noise from activity in the other two classrooms was always audible, and Kari wore a microphone every day to amplify her voice. Once I was sitting in the classroom, I noticed a table of students on the right side of the room that seemed to be doing something different from the rest of the class. During math instruction, several (3 to 5) students would move from their regular desks to sit at a table on the right side of the room to work with a paraprofessional. This group engaged in parallel activity to the rest of the class, but with ongoing separate dialogue, even when Kari was presenting to the class or other students were asking and answering questions as part of full class discussion. During all but the last few days of the unit, about of the math class time was taken by full class question and answer sessions. Possibly related to the physical layout of the classroom, the ambient noise level, and/or the concurrent activity occurring with the paraprofessional, a number of students appeared to be otherwise engaged during these full class discussions – talking about other topics, playing with rulers or protractors, or moving about the room.


227 Then, the day before and after the end of unit test, this classroom transformed. Almost all of the students were highly engaged, participating so fully in the learning activity that some even missed recess to complete what they were doing. What did formative assessment practice look like? Sound like? Feel like in this classroom? This case report brings together data from a variety of sources to capture how Kari engaged in formative assessment practice during a single mathematics instructional unit. It is focused by my two research questions: 1) What are the critical attributes of formative assessment practice as it occurs in situ? 2) How does the social context of a K 12 classroom (e.g. tools, participation, rules/norms, student and teacher roles) influence formative assessment practice in situ? I further organized this case report by propositions associated with each research question, restated below as they are addressed. Critical Attributes of Formative Assessment Practice To respond to the first research question, I developed the following propositions based on my review of relevant literature (detailed in Chapter 2): 1. Formative assessment episodes are defined as including the following teacher and/or student actions: a) the teacher identifying (implicitly or explicitly) the learning target, and clarifying the learning target with the students; b) the teacher collecting data about student learning; c) the teacher and/or students analyzing learning data; d) the teacher and/or students interpreting data about student learning with regard to what it means for studentsÂ’ learning and/or instructional


228 practice; and e) the teacher making adjustments to learning activity, and/or student(s) making adjustments to learning tactics. 2. When formative assessment practice occurs, a variety of assessment methods, including informal methods, are used during (in addition to after) learning activity to collect information about student learning. 3. Teacher use of information to determine what to do next occurs at different levels, from “in the moment” adjustments to instruction or learning tactics, to shifts in the next activity, to changes in the unit as a whole, or to changes the next time the unit is taught. The first proposition defines the components of a formative assessment episode. In response to this proposition, I first identified the formative assessment episodes occurring during the observed instructional unit in Kari’s classroom using a process described in Chapter 3. Next, I considered the evidence related to the formative assessment episode components for the episodes I had identified, and described each component across multiple formative assessment episodes in Kari’s classroom. Finally, I consider the evidence regarding the degree to which all of the components of a formative assessment episode were present for each episode I identified. In the section focused on the data collection occurring in Kari’s classroom, I describe and evaluate the evidence related to the second proposition. Following all of the sections on the different formative assessment episode components, I present evidence related to the third proposition. After each of these sections, I summarize the evidence for all of the propositions associated with the first research question.


Using student learning data. Kari and her students frequently used student learning data during every class session. I identified 190 formative assessment episodes across the 11 class sessions of the observed instructional unit. Kari and her students used student learning data on average more than 17 times per class session. However, these uses of student learning data were not evenly distributed across all class sessions; some class sessions had a much higher number of formative assessment episodes than others. Kari and her students used learning data in a variety of ways as part of daily classroom activity. I developed my categorization of the types of uses of student learning data in Kari’s classroom based on the literature and the evidence from her classroom; it included the following categories: activating students as resources for one another, adjusting instruction, oral feedback to an individual student or a small group of students, oral feedback to the full class, formative questioning and scaffolding individual students’ or small groups of students’ next steps in their learning, formative questioning and scaffolding the next steps in learning for the full class, students adjusting their learning tactics, students self assessing, and the teacher providing written formative feedback. Kari also frequently used student work as a social object in conjunction with facilitating questioning and scaffolding with the entire class and in conjunction with students self assessing.
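Before turning to Table 5.1 below, a brief arithmetic cross-check may help in reading its columns; this is illustrative only and simply reuses the figures reported above (190 episodes across 11 class sessions). For full class questioning, which Table 5.1 reports as 39.0 percent of episodes and 6.7 per class session:

\[
0.390 \times 190 \approx 74 \text{ episodes}, \qquad \frac{74}{11} \approx 6.7 \text{ episodes per class session.}
\]

A similar check can be applied to the other rows of the table.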


Table 5.1 Frequency of Types of Uses of Student Learning Data

Types of Student Learning Data Uses | Percent of FA Episodes | Average per class session
Activating Students as Resources for One Another | 0.5 | >0.1
Adjusting Instruction | 7.4 | 1.3
Grouping Students | 0.0 | 0.0
Oral Feedback to Individuals or Small Groups | 8.4 | 1.5
Oral Feedback to Full Class | 5.3 | 0.9
Questioning (individuals or small groups) | 17.4 | 3.0
Questioning (full class) | 39.0 | 6.7
Student Adjusting Learning Tactics | 5.3 | 0.9
Student Self Assessment | 16.3 | 2.8
Written Formative Feedback | 0.5 | >0.1

Table 5.1 includes two statistics related to the frequency of each type of data use in Kari’s classroom. The percent of FA episodes is the relative frequency of each type of use of student learning data as a percentage of all formative assessment episodes identified in Kari’s classroom. The table also indicates what percent of all of the class sessions in the unit included each type of data use. This provides context for the final statistic, the average number of times Kari employed each type of use of student learning data on the days during which that type of use occurred. I did not include instances of Kari using student work as a social object because it was always in conjunction with another use of student learning data. In almost a quarter (22%) of the total number of formative assessment episodes (190) identified in Kari’s classroom, the person or people using the student learning data were the students themselves. This included the following types of uses: activating students as resources for one another, student self assessment, and student adjusting learning tactics. Thus, my description of the different types of uses includes who used


student learning data – Kari or her students. I also describe what each type of data use looked like and felt like in Kari’s classroom, how frequently that type of data use occurred, and the relationship between the impact on student learning and the frequency of the use for each type of data, starting with the most and working towards the least frequent type. I describe how Kari used student work as a social object following the descriptions of uses of student learning data.
Questioning with the full class. Formative questioning as part of full class discussions was the most frequently occurring formative use of student learning data in Kari’s classroom. On average, this happened almost seven times per class session. Formative questioning occurred while students were checking some of the tasks from their homework from the evening before or doing a homework task as a class, when Kari was eliciting student background knowledge about new content, and when students engaged in hands on discovery activities as part of the introduction of new content. Kari facilitated formative questioning as part of full class discussions after students checked some of the tasks on their homework assignments from the day before on all but two days during the observed unit. Before collecting their homework assignments, Kari asked students to nominate tasks from their homework for the class to do together. Then Kari facilitated a formative questioning episode about the homework tasks the students selected. After a student nominated a homework task for the class to do as a group, Kari started by asking that student what they had done first to complete the task. She then restated the student’s answer, and often probed deeper,


232 asking the student to explain or provide further detail about his/her answer. She wrote the correct aspects of the student’s response on the board. She would ask another student to add to or correct mistakes of the first student. This continued until Kari had written one complete solution to the task on the board, usually with different students providing oral explanations for why they took certain steps to complete the task. During the pre unit interview, Kari explained why she structured students’ daily checking of homework in this way, “. .then they have the opportunity to ask questions, so that's one way I'll know what they're understanding and not.” Full class formative questioning also occurred while Kari was introducing new content, either as part of eliciting information about students’ background knowledge, or after engaging students in doing a hands on task to “discover” some aspect of the new content being introduced. When eliciting information from students about their background knowledge, Kari started with a more general question such as, “What do you know about parallelograms?” She called on a student to answer the question, and would restate the student’s response. Depending on the answer, she then would probe further, asking the student to explain or add to his or her response. Frequently she asked other students to add to what the first student had said, or to correct mistakes. When full class formative questioning was part of a hands on discovery activity, Kari started with some variation of the question, “What did you notice?” This followed a similar pattern to homework review with Kari repeating the students answer, asking the student to elaborate, and frequently asking other students to add to what the first


student said. This would continue until Kari had written a complete response to the task on the board. While a majority of the students in the class engaged in these full class questioning episodes, a portion of the class regularly did not. Rather, these students would engage in one of the following alternative activities: talk with other students at their table about something else; play with their pencils, rulers, or protractors; or get up from their desks and move around the classroom. In addition, only 6 or 7 students actively participated in providing information as part of the formative questioning episodes – offering solutions and describing their approach to completing tasks. Others listened and took notes on their assignments, but did not provide information about their thinking. So, while I have characterized this use of formative assessment data as "full class," some students (usually the same students each day) did not participate or participated without sharing their own learning data in these formative assessment episodes. As a result, Kari based her in-the-moment decisions about how much explanation to provide, whether to complete additional tasks, or whether the class needed additional practice, on data from a relatively small portion of the students in the class. Questioning with individuals or small groups. Kari occasionally engaged individual students or table groups (of 3) in formative questioning about a mathematics task. This followed a similar pattern to the full class questioning episodes, except it involved an individual student or several students sitting together at a table. While on average this occurred 3 times per day, close to half of the occurrences of this type of use of student learning data were on the day before and the


day after the end of unit test. The day before, this occurred while students were engaged in the Gap Attack activity, described below, and the day after, this occurred as students were correcting the incorrect answers on their tests with help from peers. Student self assessment. Discerning instances of student self assessment in Kari's classroom caused me to consider when self assessment is just a data collection strategy and when it also includes a data use. Some authors (e.g. Wiliam, 2010) describe self assessment as including strategies such as students indicating with their thumbs up or thumbs down that they understood some learning objective. This strategy and other very similar strategies did occur in Kari's classroom. Based on my conceptual framework for what a formative assessment episode includes, I asked what data were collected and what the use was in this case. From the teacher's perspective, the data collected were students' perceptions of their learning, but what was the data use? If Kari subsequently used students' perceptions of their learning to adjust instruction, then I considered this practice a formative use of student learning data. However, the actor using the data, in these instances, was the teacher, not the students. So, when Kari collected data informally about student perceptions of their learning and used it to adjust, or to choose not to adjust, instruction, I categorized that use as the teacher making an adjustment to instruction, rather than students adjusting their learning tactics. In those instances, I coded the data collection method as student self assessment. A far more common practice in Kari's classroom involved her students using formal data about their learning to interpret their progress towards meeting a learning


235 objective, and planning to change their learning tactics. On average, Kari engaged students in some form of self assessment almost three times per class session. However, over half of the self assessment episodes occurred on the final day of the unit, the day after the end of unit test. On that day, Kari had students complete a Math Self Assessment form regarding their analysis and interpretation of their results on the end of unit test (see Appendix E for an example of the form). They did this in two parts. First students answered questions about what they expected they did well on, or poorly on, and what actions they had taken to learn the material for the unit before they saw their actual test results. After they reviewed their individual test results, students answered questions about the degree to which their actual results were what they expected, what they actually did well and/or poorly on, how their actions leading up to taking the test shaped their results, and what they planned to do during the next unit to ensure they learned that content. I counted students completing the form as one instance of student self assessment. Kari facilitated a class discussion during which she had students publicly share various aspects of their self assessment for each step as they were completing this form. I counted these instances of students sharing their self assessment and Kari asking other students the degree to which they agreed with what the students shared as separate episodes of student self assessment. All but a couple of the students in the class completed these self assessment forms and engaged in the discussion about them. During the post observation interview, Kari explained that she expected students to use the information they gained from this self assessment to adjust their learning tactics during the next instructional unit. However, Kari did not


provide students feedback about this self assessment or use her students' self assessment reports to adjust her instruction. Kari engaged her students in a second type of self assessment during all but two class sessions (the first day of the unit and the day of the end of unit test). After students checked some of the tasks from their previous day's homework assignment (with Kari reading aloud the correct responses), and after Kari facilitated a full class discussion about a few of the tasks on the homework assignment, Kari gave students a few minutes to fill in one row of their unit progress monitoring form (see Appendix E). They listed the topic of the homework (e.g. triangle inequalities) and rated their learning in relationship to the focus of the homework using one of the following categories: I don't get this at all; I've sort of got it; I get it, but I'm still making a few mistakes; and I'm an expert. Then students made notes regarding the aspects of the topic on which they needed help. This took a few minutes each day. Kari explained this activity during a post observation interview. "...that's why I wait until after we've checked homework, so that they have some additional feedback before they evaluate how well they understand it. So they evaluate themselves based on the mistakes they made, if it was just a silly addition mistake then they still understood the concepts, but then I ask them to put notes off to the side about what kind of mistakes they made or reminders for them to help to them to make sure when they go back. I use it as a test review kind of tool." Kari also explained that while having students self assess in this way was primarily for their use, she occasionally collected the students' progress monitoring forms to check on the types of comments students were making, and provide students feedback about their self assessment. She only collected the progress monitoring forms


237 once during the observed unit, the day before the end of unit test. All but a couple of students completed these progress monitoring forms. Kari asked students to verbally share what they had written on their progress monitoring forms and provided oral feedback about their comments several times during this unit. She also reminded them about comments to include based on mistakes she had noticed on their homework assignments from the day before. However, Kari did not use students’ self assessment data captured in these progress monitoring sheets to adjust her instructional practice. Oral feedback to individuals or small groups. On average, 1.5 times per day Kari provided oral feedback to an individual student or a small group of students sitting together as a table group. However, almost half of these instances of oral feedback occurred on one day while students were doing in class practice problems and writing their answers on individual dry erase boards. This happened only once during the observed unit. Kari’s oral feedback typically included a slight correction such as “You’ve got the right idea, you’ve got the wrong math,” to indicate a computation error, or “OK, these are angles, not sides,” to let students know they were interpreting the problem incorrectly. Oral feedback to the full class. About once a day, Kari provided oral feedback to the entire class. A little over half of the time, her oral feedback was about student products that she had reviewed from the prior day. For example, “One thing I noticed on the homework you turned in yesterday, some of you have gotten a little lazy about labeling things.”


238 She provided this feedback in modeling for the students what they might have observed about their test results: “Here's one thing that I noticed: I noticed that you had trouble identifying the right angles, not the right angles, but the correct angles that they were asking you to find. Some of you guys forgot how to name angles.” On a couple of occasions, she provided oral feedback to the whole class about what she had seen while they were doing in class practice problems, for example, “I like how you guys are being careful and starting with zero and not the end of your ruler.” Students adjusting their learning tactics Kari facilitated students making adjustments to their own learning tactics on two days during the unit, the day before the end of unit test as part of the Gap Attack activity, and the day after the end of unit test as part of students self assessing about how their actions led to their test results. This use included students making choices about how they would spend their time during math class based on information about their prior homework results. This also included students describing, in writing, their plans to change their future learning tactics based on their unit test results. Kari gave students a choice of five different centers at which to spend their time during the Gap Attack activity. Each center included a list of problems focused on a major topic from the unit (from the student workbooks) that students were to complete working with the other students. Students used their progress monitoring sheets (completed throughout the unit) to select which center(s) they would visit during the “Gap Attack.”


239 Students also formally described how they planned to change their learning tactics as part of their written self assessment regarding their end of unit test results. Before they actually received their graded tests, students responded to this prompt, “The things I did to ensure I understood the concepts for this unit were…” Then after they had reviewed their test results, they responded to these two prompts: “The things that I could have done to improve my understanding and my final score are…” and “What I plan to do so that I improve for the next unit is…” Kari asked a number of students to share their responses to the final prompt orally with the entire class, and then she prompted them to make notes on the progress monitoring sheet for the next unit about these goals. Adjusting instruction. For the most part, Kari’s daily math learning activity followed the plan she had created with the other fifth grade teachers in the school before the beginning of the unit. Exceptions were generally related to interruptions to the daily schedule that were out of her control, such as when they had a “lock down drill” one day during math instruction, cutting time for math short. However, there were three different ways that Kari made adjustments to her instruction based on student learning data. First, Kari had a built in mechanism to adjust one of the math tasks in which students engaged three days a week computational practice worksheets. After the first day of the unit, students received a different computational practice worksheet each day based on their performance on the worksheet the day before. If they got all of the answers correct during one class session, they would get the next most difficult


worksheet for the next class session. If they did not, they would get the same worksheet they completed the day before. This occurred five times during the observed unit. On a few occasions, Kari asked students to use thumbs up or thumbs down, fist to five (holding up zero to five fingers), or just raising their hands to indicate one of the following: how difficult a task was, their readiness to complete a task or assignment, or their understanding of a concept. In some cases (seven occurrences across the entire unit), Kari used the information collected this way about students' perceptions of their learning to adjust her instruction; for example, choosing not to do any more homework tasks as a class when students indicated they didn't need to do more. In two instances, Kari told the class that she disagreed with the students' self assessment and did an additional problem anyway. Finally, on two occasions, Kari made adjustments to the learning activity planned for the day based on data about student learning. One day, she added some scaffolding to the homework assignment at the end of class because of what she had heard while engaging students in a full class discussion. On another occasion, she added a problem to the ones the students did as a class because of errors she saw in their homework the day before. Activating students as resources for one another. Generally, Kari did not explicitly activate students as resources for one another, or engage students in peer assessment, as part of a math learning activity in her classroom, even though she had students seated in table groups. On the day after the end of unit test, however, Kari spent about half of the class session in an activity during


241 which she explicitly activated students as resources for one another. She did this as part of students making corrections to the items they had missed on the end of unit test. She described it this way to the class. “. . those of you who don't have any corrections to make, or only have one or two corrections to make, or already looking at your test you already figured out the couple things that you did wrong on this test, you're the experts. You're gonna’ [sic] help other people. Does that mean give them the answers? [Students: No.] No. What does it mean? [Student: Help them understand.] Help them understand. Yes. Be the teachers. Help them understand with kid language instead of teacher language.” Kari also reminded several students to continue to play this role during the activity, “Staple it together and then go around and help folks that struggle. Okay?” In one instance, she even differentiated the kind of help she asked one student to provide, “I would suggest to you that you help some of those folks with harder questions instead of spending your time on this stuff where others can help people. Why don't you kind of be the expert on some of those tough questions?” In activating students as resources for one another, Kari left the choice of with whom they would work up to the students, encouraging them to make those choices based on their own interpretation of their test results. On the day after the unit test, when prompted, students moved around the room to form small groups. After a while, all but one student was part of a group. In each group, different students took on a “teacher role” talking to the group or one other student on how to complete different problems. Several students asked one particular student, Harvey, for help because he got 100% of the problems on the test correct. When they were stuck on a problem, another group of students moved to Harvey’s group to ask how to approach a particular


242 problem. Then students, who were initially working with Harvey, became resources for other students about how to solve a certain problem. All of the student talk during this activity was about the test items. All students became resources for their peers, reviewing their tests and providing feedback about what they observed regarding mistakes made during this activity. Written formative feedback. Kari provided written formative feedback on student work throughout the unit, however, in only a couple of instances was it provided in a way that students had the opportunity to see and use the feedback. This raised the question of what should count as written formative feedback in this classroom. Details about how Kari provided written feedback illustrate the challenge of determining when feedback is formative. Students completed a number of products and turned them in during the observed instructional unit. This included products that were completed and turned in daily (i.e. computational practice worksheets and homework assignments); products that were completed and turned in once (i.e. the end of unit test, end of unit test corrections, and a Math Self Assessment for the unit); and one product (student progress monitoring sheet for the unit) which students added to daily, but was turned in at the end of the unit. All of these products represented opportunities for Kari to provide written formative feedback. However, Kari provided written feedback on these products in a format that students could use to form their learning in only a few instances.


Kari graded the computational practice worksheets to determine the number correct or incorrect. Students found out if they had missed any problems on these sheets the next day when they got the next worksheet. For each homework assignment, Kari provided written descriptive feedback on some students' work; however, this feedback was always in addition to a grade. There was only one instance when Kari gave students a specific opportunity to use that written feedback formatively. Students completed homework assignments for math almost every day. During the class session, students would grade some of the tasks on each homework assignment from the day before and then turn in their written assignments. Then Kari would finish scoring them and provide an over all rating (1, 2, 3, or 4). On a number of students' assignments, she would also provide written comments, many of which were descriptive. However, students did not receive their homework assignments with over all scores and written feedback until the end of the week, in a "Friday Folder" that Kari sent home to their parents for signature. Although Kari expressed a hope that students reviewed their homework assignments, there was no indication that the students read or used the comments on these homework assignments in any way. Kari did not expect students to do anything with their homework assignments after they turned them in until the day before the end of unit test. On that day, as part of a Gap Attack activity, students were encouraged to use their prior homework assignments, with comments on them, to determine on which unit topics they needed additional practice. However, the homework assignments students used in this way


244 were only the ones the teacher handed back to them that particular day for the preceding class sessions that week (the ones that Kari had not sent home in a Friday folder yet). In addition, it is unclear whether students used the comments or just the over all score to determine that they needed additional practice on a topic. One student explained her choice of which topics to get individual practice this way, “I pretty much got everything down. I'm just doing all the stuff I got threes on.” A three was a score students received on their homework. Kari did not provide written comments on the end of unit test, rather incorrect problems were marked, and if points were deducted for not showing work, that was indicated (e.g. “work?”). Students did use these graded tests to make corrections the next day. Kari provided more extensive written comments on a few of the students’ test corrections when the corrections were still incorrect. This included, for example, “Difference = answer to a subtraction problem,” “Angles in a triangle = 180,” and “Isosceles triangles have two equal angles.” However, students had no opportunity to utilize these comments. Rather, they served as an explanation for the points that they received for each problem. Kari’s comments related to the specific content assessed on that test, although content was no longer the focus of a math learning activity by the time the students received the comments. Kari did not provide written comments on students’ submitted Self Assessments of their test results. After the Gap Attack activity, the day before the end of unit test, students made one final entry to their unit progress monitoring sheets and turned in those sheets with


any notes from the problems they completed during the "Gap Attack." Kari explained what she did with these products, "I'll look through it mostly just for kinda [sic] completeness and do they understand how the progress monitoring works...usually it's no comments, question mark on some that are kind of blankish [sic]." She did provide written feedback on a number of the students' progress monitoring sheets about their progress monitoring. Students had an opportunity to use this written feedback immediately, as they began a new progress monitoring sheet for the next unit. Kari's practice of providing written comments, often descriptive, but with no specific opportunity for students to review them (sent home to parents in the Friday folder) or without opportunities for students to use them (after the content is no longer being addressed in class), raises the question of what counts as formative written feedback. For this study, I counted as formative assessment episodes only those instances of written formative feedback for which students received the written feedback and had a specific opportunity to use it. Identifying and clarifying learning targets. For each formative assessment episode identified in Kari's classroom, I could discern at least one learning target associated with the data use. Kari described these learning targets either during the class session or as part of the pre unit or post observation interviews. However, while the over all learning goal for the unit and the student learning objectives for activities that included formative assessment episodes were identifiable and clear to the teacher, in only a little over two thirds of the formative assessment episodes were the learning targets explicitly clarified for the students. The


246 learning targets for the identified formative assessment episodes included those that were related to the content of the unit, those related to additional computational practice in which students engaged three times each week, and those related to students self assessing. Next, I describe in greater detail if and how Kari clarified these different types of learning targets with her students. Clarifying learning targets associated with unit content. Kari identified the over all learning goal for the unit on the first day of the unit and it remained posted on the dry erase board on the right wall of the room throughout the unit. The major unit learning goal was “Students can identify different geometric figures and describe their properties.” However, in the pre unit interview, Kari described the “big idea” for the unit as encompassing more than what she shared with students. She said, “The main things that I want them to know are how to name geometric figures, describe their properties, and be able to be flexible with their use of angles and understanding how angles work in those geometric figures as well.” What Kari posted on the board for students did not include the last part of her explanation of the big ideas for the unit – students being able to be flexible with their use of angles and understanding how angles work in these geometric figures as well. Although Kari did not communicate this to students, many of the tasks in which students engaged during the unit related to this last aspect of the big ideas of the unit. Kari spent several minutes during the first day of the unit clarifying the over all unit goal with the students. She engaged students in interpreting and defining the terms included in the unit learning goal. She asked them to identify examples of properties of


geometric figures. In addition, students were supposed to write the over all learning goal on the top of their unit progress monitoring sheet (described below) that they used every day of the unit. Some students did not do this for the focus unit. The learning targets associated with daily activities that included formative assessment episodes (e.g. introduction of new content, in class practice problems, and checking homework) were frequently identified as types of geometric figures (e.g. isosceles triangles, equilateral triangles, parallelograms), or topics associated with properties of geometric figures (e.g. triangle inequalities), rather than stated as learning targets. Kari and the students treated these topics as learning targets or objectives even though she didn't state them in objective terms. Kari used several different strategies to identify and clarify the learning focus (treated like an objective) of in class learning activities that included formative assessment episodes. Each day, students checked some of the problems from the homework from the day before. As part of this, Kari wrote the topical focus of the homework on the board so students could transfer that topic to their progress monitoring sheet. The topics Kari wrote on the board included: classifying triangles, measures of angles of a triangle, isosceles triangles, equilateral triangles, triangle inequalities, possible lengths, parallelograms, rhombuses, and trapezoids. Then the students used information about which homework problems they had done correctly to self assess regarding their current learning in relationship to the identified topic. Each day Kari also introduced new content, which became the topic for the homework graded on the next day. Several formative assessment episodes occurred


248 during the introduction to new content. After checking homework from the day before, Kari identified the next learning focus in a couple of different ways. The first two class sessions for the observed unit illustrate two different approaches. During the first class session, she provided an oral explanation to the class about the learning objective. “Our first lesson in this unit is properties of triangles, the unit is properties of triangles. .Ok so the unit we are starting is properties of triangles and 4 sided figures. Up on the board we have our big ideas for the unit. So you need to be able to identify different . geometric figures and describe their properties.” Then she went on to elicit from students examples of properties and listed on the board the different shapes and properties that they were going to learn about in the unit. During the second class session for the unit, Kari attempted to get students to discover the lesson objective. She told them, “I haven’t told you what we’re doing today, huh? Which is kind of unusual…you’re going to figure it out. You’re going to tell…you’re going to tell me what we’re going to learn today.” To this end, she had students cut out the angles from a triangle and see what they noticed. They did this twice. While students were playing with the angles from a triangle, Kari suggested they put all of the vertexes together and asked questions to get them to notice that the angles of a triangle when you put them together form a straight line. After 10 minutes or so of discovery, she asked the following questions: “Alright, so what do you suppose the point of our lesson today was? What did I want you to learn in today’s lesson? What was our objective today? What do I expect you to know after we’re done? . [student answer inaudible] I don't think 360 was my magic number today. 360 was my magic number when we were doing angles on a point; 180 was my magic number when we were doing angles on a line. What do you think my point is today? And I don’t mean…point. I don’t mean a geometric point. What did I expect you to learn today?”


249 One student responded “Um, that…we can for having triangles that we can make a…a normal one to measure 180 out of little pieces of it.” Kari probed further. “Okay we could make 180 out of little pieces of a triangle. Tell me a little bit more. You’re on the right track. Tell me a little bit more. Because usually we don't cut up triangles when we have a triangle we just work with the triangle as a triangle. So tell me what you should have learned about triangles today? . .From what I’ve done…what we’ve done so far in class today, what did I expect you to learn?” One student attempts a response, “Well, that…any kind of angle on a triangle…I forgot.” Then Kari prompted another student to help, “You forgot? Help him out. .” A second student responded, “Well I think that what we’re mostly doing is that even though triangles and angles may seem totally different, they’re not…there’s a lot of similarities. Kari probed further, “Alright, so knowing about angles can help you learn about triangles. Okay tell me something else I expected you to learn today?” After several additional exchanges, Kari told them the learning objective, “Instead of saying all the corners of the triangles add up to 180 what might be a more mathematical way to say it? All of the angles in a triangle add up to…180. All the angles in a triangle add up to 180. Do you believe that? Did we prove that today?” Several students responded together, “Yes.” After this class session, Kari talked about her attempts to get students to discover the lesson objective.


250 “I didn't think it was going to be that hard for them to come up with it, it took them a lot longer than I expected trying to get to that 180 degrees is a line, 180 degrees is a line, 180 degrees is a line. We killed that last unit, and so when we found out that they had a line...they weren’t saying what I wanted them to say, in the way I wanted them to recognize it now. A lot of them know 180. I think that was part of the problem. There’s such a number of them that know that there’s a 180 degrees in the triangle, that they were just jumping to that instead of the way we discovered it, they were just telling me what they already knew. That there’s 180 degrees in a triangle, but not know why they knew. After we what we did today,. . how they proved it to themselves that there’s 180 degrees in a triangle. So that’s what I didn’t, that’s what I felt like I had to drag out of them.” On the day before the end of unit test, Kari structured an activity during which a number of formative assessment episodes occurred, based on the major topics of the unit. She gave students the list of unit learning topics (treated as objectives) and asked them to selected one or two on which to focus their activity. On the day after the unit test Kari gave students a “Math Self Assessment” form which also included a list of the major topics of the unit. Students used this form to interpret their learning of each major topic based on analysis of their test results. Learning targets associated with daily computational practice. In addition to engaging in learning activity related to the unit content learning goal, or big idea (identify different geometric figures and describe their properties), students also engaged in computational practice three times a week. As Kari explained, based on gaps they were noticing in their 5th graders, and because their new instructional resource did not address computation at that grade level, the teachers at the school decided that students needed to develop their basic computational skills for addition, subtraction, multiplication and division of numbers from 1 to 12. As a result, they added this to their daily math instruction. According to Kari,


251 “We can’t produce fractions. We can’t add up degrees in a triangle. We can’t do any of the higher level math stuff that were doing in 5th grade if they don’t know how to add seven and five, we just can’t. So that’s our way of trying to fill that gap. We don’t intend to do it permanently, but we’ll see how these primary kids that are there now and hopefully doing a little bit more rigorous curriculum, hopefully they’ll be coming up to us and we can eliminate that part of our day and have some more time for other things.” During the observed unit, this computational practice focused on students subtracting numbers from 1 to 12. There was no explicit connection between the learning goal for the observed unit and the computational practice focus. As Kari explained, “...they are all subtraction now. . We started with multiplication then we did division cause that’s what we were doing right away at the beginning of the year. Our units matched multiplication and division, then we did addition.” While she did not explicitly clarify it with students during any of the class sessions of the observed unit, the learning target for this computational practice was clear to the teacher. Learning targets associated with student self assessment skills. Some formative assessment episodes that occurred during the observed unit in Kari’s classroom had an additional learning focus – students’ learning how to assess their own learning of the unit learning objectives. Kari talked about this in several of the post observation interviews and in the post unit interview. She also provided written formative feedback about students’ written self assessment. This included comments on their progress monitoring sheets. While this learning target, students self assessing, was clear to the teacher and the focus of learning activities that included data collection and use, Kari did not clarify the target for those activities with students during any of the class sessions of the observed unit.


Collecting data about student learning. Each instance of formative data use was associated with data being collected about student learning. Kari used a variety of strategies to collect student learning data, including questioning, observation, and collecting student products. The method used to collect data in Kari's classroom as part of formative assessment episodes varied significantly based on whether the person using the data was Kari or her students. Also, in some cases the data used were student knowledge, skills, and understanding of math, usually demonstrated by students doing, or verbally sharing how to do, a math task. In other cases, the data Kari used were students' perceptions of how difficult math tasks were for them. She collected this latter kind of data by asking students to indicate through a show of hands, with their thumbs up or down, or with zero to five fingers raised. From Kari's perspective, she collected these data through observation. Both Kari and her students collected and used data in Kari's classroom. Table 5.2 includes the frequency of different data collection methods used as part of formative assessment episodes. It also specifies whether Kari or her students were the collectors and users of the data. In the table, I included the percent of each data collection method as a total of all formative assessment episodes and the percent of each data collection method by user (student or teacher) out of all of the formative assessment episodes for each user.

Table 5.2 Data Collection Methods by User

Data Collection Method | Student | Teacher | Total
Observation | 0.0% | 13.5% | 10.5%
Questioning | 2.4% | 77.7% | 61.1%
Product | 97.6% | 8.8% | 28.4%
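The percentages in Table 5.2 are straightforward tabulations of the coded episodes. The sketch below illustrates one way such a breakdown could be computed from a list of coded episodes; the episode records, field names, and counts shown are hypothetical placeholders rather than the study's actual coding data.

```python
# Minimal sketch (hypothetical records) of the tabulation behind a table like 5.2:
# each coded formative assessment episode records who used the data and how it was collected.
from collections import Counter

episodes = [
    {"user": "teacher", "method": "questioning"},  # placeholder records,
    {"user": "teacher", "method": "observation"},  # not the study's coded data
    {"user": "student", "method": "product"},
    {"user": "teacher", "method": "questioning"},
]

def percent_by_method(records):
    """Percent of the given records falling under each data collection method."""
    counts = Counter(r["method"] for r in records)
    total = len(records)
    return {method: 100 * n / total for method, n in counts.items()}

# Column percentages: methods within each user group, then across all episodes.
by_user = {u: percent_by_method([r for r in episodes if r["user"] == u])
           for u in ("student", "teacher")}
overall = percent_by_method(episodes)
print(by_user, overall)
```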


For the vast majority of formative assessment episodes for which the students were the ones using the data, the data they used were collected through their generation of a work product. Kari frequently had her students review work they had completed to self assess or make adjustments to their learning tactics before she collected, analyzed, and interpreted the student work product. The majority of formative assessment episodes for which Kari used the data involved Kari collecting data through asking questions orally. She less frequently (in 13.5% of the formative assessment episodes for which she was the data user) collected data by observing students while they were working. These two data collection strategies did not result in any formal record of the data. Per my definition in Chapter 3, I considered these to be informal data collection methods. When Kari was the one collecting and using student learning data, 91% of the formative assessment episodes involved informal data collection. This was 72% of all of the formative assessment episodes. Frequently, Kari collected data informally during a class session about a formal product students turned in at the end of the class session. While the data involved a product, the formative assessment episode made use of informal data collection methods. The student products that Kari collected included those collected several times each week (written computational worksheets and homework assignments), as well as those collected once during the unit (student progress monitoring sheets and "Gap Attack" notes, students' self assessment worksheets, and the end of unit test). Kari used the results of students' written computational practice worksheets formatively during most class sessions to determine what computational practice worksheet each


student would receive the next day. Kari used student homework assignments as the data source for a number of formative assessment episodes; however, these episodes frequently used students' answers to oral questions (informal data collection) about tasks on their homework assignments, rather than the homework assignments the students turned in themselves (formal product). Some formative assessment episodes involved data collected as part of the written products generated by students' self assessment (students' progress monitoring sheets and "Gap Attack" notes, and students' Math Self Assessment worksheets turned in after the end of unit test). As with the homework assignments, though, almost all of these formative assessment episodes occurred when students informally shared with the class the information they were going to write, or had already written, on their Math Self Assessment worksheets before Kari collected them. Kari did not subsequently use the self assessment data students provided formatively, except for one time the day before the end of unit test. These findings support the first half of the proposition that teachers use a variety of assessment methods to collect data about student learning as part of formative assessment episodes. The findings also support the second half of the proposition, that teachers frequently use informal methods during learning activity to collect data about student learning. The evidence from Kari's classroom also included incidents of data collected informally and used formatively before the formal student products (associated with the informal data collection) were collected.
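The two percentages reported here fit together arithmetically with the share of episodes in which Kari, rather than her students, was the data user (roughly 78%, reported later in this chapter). The following is simply a rounding check, not an additional result:

```latex
% Share of all episodes that were both teacher-used and informally collected:
\[
0.91 \times 0.78 \approx 0.71 ,
\]
% which agrees, within rounding, with the reported 71--72 percent of all
% formative assessment episodes.
```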


The relationship between data collection methods and types of data use. Table 5.3 illustrates the relationship between the methods used to collect data and the type of data use in Kari's classroom. For each type of data use, I provide the percent of all formative assessment episodes for which the data were collected through observation, student products, and questioning.

Table 5.3 Percent of Data Collection Methods by Types of Data Use

Types of Data Use | Observation | Product | Questioning
Activating Students as Resources for One Another | 0.0% | 0.5% | 0.0%
Adjusting Instruction | 3.2% | 3.2% | 1.1%
Oral Feedback to Individuals or Groups | 5.3% | 0.0% | 3.2%
Oral Feedback to Class | 1.1% | 3.2% | 1.1%
Questioning/Scaffolding (individual or small group) | 1.1% | 0.0% | 16.8%
Questioning/Scaffolding (whole class) | 0.0% | 0.0% | 38.9%
Student Adjusting Learning Tactics | 0.0% | 15.8% | 0.0%
Student Self Assessment | 0.0% | 5.3% | 0.0%
Written Formative Feedback | 0.0% | 0.5% | 0.0%
Total (across all types of data use) | 10.5% | 28.4% | 61.1%


256 based on their own work products. Students also self assessed based on work products they had created. However, a number of incidents occurred in the classroom where students orally reported about their self assessment. Kari based her two most frequent types of teacher data use – formative questioning with the whole class and with individuals or small groups – on data collected through questioning. When Kari made adjustments to instruction, she evenly split her data collection between observation (in this case student’s self assessment of the perception of their learning by thumbs up or a similar method) and her review of student work products. On a couple of occasions, she made adjustments based on what she heard while questioning students. Alignment of data collected and learning targets. One critical element of a formative assessment episode is that the data collected provides information about specific student learning targets. In other words, understanding the role of data collection in a formative assessment episode involves interrogating the degree to which the collected data aligns with a valued learning target. The tasks used in the data collection process were the primary source of evidence I used to evaluate the alignment of the data and the student learning targets. As described in chapter 3, I used two strategies to determine the degree to which the task(s) that was/were used to collect student learning data was/were aligned with the learning targets for the formative assessment episode. First, I considered coverage, or the degree to which the task covered the content identified in the target. Second, I compared the cognitive complexity of the learning target with the cognitive complexity of the task(s) using the Webb (1997) Depth of Knowledge (DOK) framework. This framework includes


257 four levels of cognitive complexity: DOK 1= Recall and Reproduction, DOK2= Basic Application of Skills and Concepts, DOK3= Strategic Thinking and Reasoning, and DOK4= Extended Thinking. Examining the alignment between learning targets and tasks in Kari’s classroom presented two challenges. First, the major unit learning goal shared with students was narrower and less cognitively complex than the “big ideas” for the unit that Kari described in the pre unit interview. She told students that the major unit learning goal involved identifying geometric figures and describing their properties (requiring a DOK level one, Recall & Reproduce). However, when asked about the big idea of the unit, Kari explained that the major unit learning goal also included using their properties to solve problems, requiring a DOK level two or three (Skills & Concepts and possibly Strategic Thinking/Reasoning). Second, Kari stated the smaller learning targets that focused daily class activity as topics rather than using learning objective language. Even though she shared them as topics, Kari and her students treated these topics as learning targets. Thus, evaluating the alignment required inference on my part about what she meant when she shared these “topics.” Because of the nature of the content that was the focus of this unit – properties of triangles and quadrilaterals – I was able to apply the language of the overall unit learning goal to the individual topics identified during each class session to describe learning targets. Then I could analyze the alignment between these learning targets and the data collected about them. For example, when Kari shared the topic, “equilateral triangles” I interpreted the learning target as identifying equilateral triangles, describing their properties, and using their properties to


solve problems. In all cases, the actual learning target language was not what Kari shared with students. Table 5.4 depicts a summary of my analysis of the alignment between the data collected and the learning targets for each formative assessment episode. In some cases, Kari used the same product to collect data about more than one learning target. When Kari asked students to produce written products about their self assessment of their learning progress, these products provided data about both the mathematical learning targets about which students were self assessing and the degree to which students were learning how to self assess. For the most part, students used the data collected this way to make adjustments to their learning tactics in relationship to the mathematical learning targets. However, Kari also provided feedback about this data in reference to students' learning to self assess. Table 5.4 also reflects instances where data sources were providing information about two different student learning targets.

Table 5.4 Learning Targets and Mathematical Tasks

Learning Target(s) | Task Source | Task(s) or Task Examples | Alignment Evidence
Students will self assess. | Math Progress Monitoring | Students rate their understanding of the current learning topic. Students write a sentence about what help they need. | These tasks provided information about students' self assessment.
Subtract whole numbers from 1 to 12. | Computational Practice Worksheets | Subtraction of numbers from 1 to 12. The first sheet included tasks subtracting 1. The next sheet included tasks subtracting 2, etc. Twenty tasks were included on each sheet. | The tasks covered all aspects of the learning target at the same depth of knowledge.
Classify triangles by properties. | Class Handout: Classify by Sides, Classify by Angles | Six triangles to classify by sides and six to classify by angle measures. Students directed to use rulers and protractors to complete each classification. | The tasks covered all aspects of the learning target at the same depth of knowledge.
Classify triangles by properties. | Homework: Practice 1 Classifying Triangles (2/13/12) | Classify triangles as equilateral, isosceles, or scalene using a centimeter ruler (4). Classify triangles as right, obtuse, or acute using a protractor (4). | The tasks covered all aspects of the learning target at the same depth of knowledge.
Understand one property of angles of a triangle – that the angles of a triangle can be aligned to form a line and that the sum of their measures equals 180 degrees. | Class Handout (2/14/12) | Two triangles printed on a sheet. Students cut out the angles from both triangles. Students were directed to explore the relationship among the angles within each triangle. The teacher directed students to put the angles together with a common vertex and describe what they noticed. | This task was used to help students discover the learning target. Both the task and the target had a DOK level 2.
Target provided to students: equilateral triangles. Actual target: Identify equilateral triangles and describe and use their properties. | Homework: Equilateral Triangles (2/17/12) | Measure sides and angles (2 triangles, six tasks for each one). Given an equilateral triangle: which angles have measures equal to angle A, which sides have lengths equal to side AB, what can you say about the angles of the triangle (3). Given five triangles, identify and shade equilateral triangles (5). Given an angle measure, or two angle measures, find missing angle measures in equilateral triangles (2). Use properties of equilateral triangles to find angle measures of triangles that contain or are adjacent to equilateral triangles (3). | The tasks covered all aspects of the unstated target. The tasks also aligned with the cognitive complexity of each aspect of the learning target, which ranged from DOK 1 to 2. Most had a DOK 1 or 2. A few of the tasks (those that involved students using properties of equilateral triangles to find the measures of adjacent angles) had a DOK 3.
Target provided to students: triangle inequalities. Actual target: Use the property that the sum of the lengths of any two sides of a triangle must be greater than the length of the third side to solve problems. | Homework: Practice 4 Triangle Inequalities (2/22/12) | Measure sides of two different triangles, answer questions about side lengths and sums of side lengths (12). Complete related inequalities for each triangle (6). | The first 12 tasks involved showing that the sum of the lengths of any two sides of a triangle was greater than the third side, with a DOK 1. The remaining six tasks involved using this property to solve problems, with a DOK 2. The learning target had DOK 1 and 2.
Identify and use properties of triangles. | Math Journal: In class task (2/23/12) | Given three examples of possible triangle angle measurements, determine if each triangle is possible and explain (3). What are two ways to identify an isosceles triangle? (1). Given the measure of one angle, is it correct to say the triangle is equilateral? Explain why (2). | The tasks provided data about several different learning targets related to properties of triangles. This was the first time students were required to explain their answers, which was not specified in the learning targets. Several of the tasks had a DOK 3, consistent with the DOK of the learning target.
Target provided to students: parallelograms. Actual target: Identify parallelograms and describe their properties. | Class Handout: Parallelograms (2/24/12) | Handout with four parallelograms: two with indications of how to cut out angles and two with indications of how to cut into two parts to compare side lengths. Students were directed to explore the relationships regarding the sides and angles of parallelograms. After guiding their exploration, the teacher asked students to describe what they noticed. | The task covered the learning target. Both had a DOK 2.
Target provided to students: parallelograms. Actual target: Identify parallelograms and describe and use their properties. | Homework 5: Practice 5, part 1, Parallelogram (2/24/12) | Measure sides and angles (8). Name parallel sides (2). Name the opposite angles that are equal (2). Use properties of parallelograms to identify equal angles and solve for angle measures (6). Using properties of parallelograms, find unknown angle measures (6). | The tasks covered all aspects of the unstated target. The tasks also aligned with the cognitive complexity of each aspect of the learning target. Most had a depth of knowledge of 1 or 2. The final 3 problems had a DOK 3.
Target provided to students: rhombuses. Actual target: Identify rhombuses and describe and use their properties. | Homework 5: Rhombuses (2/27/12) | Using properties of rhombuses, identify sides and angles (16). Using properties of rhombuses, find unknown angle measures (6). | The tasks covered all aspects of the learning target. Most tasks had a DOK 2. The final six tasks had a DOK 3. This was consistent with the learning target.
Target provided to students: trapezoids. Actual target: Identify trapezoids and describe their properties. | Class Handout: Trapezoids (2/28/12) | Handout with two trapezoids printed on it and an indication of how to cut out the angles of the trapezoid. Students were directed to explore the relationships between sides and angles of a trapezoid. The teacher asked students questions about what they noticed. | The task covered the learning target. Both had a DOK 2.
Target provided to students: trapezoids. Actual target: Identify trapezoids and describe and use their properties. | Homework: Trapezoids (2/28/12) | Measure angles in a trapezoid (4). Using properties of a trapezoid, identify equal sums of angles (2). Using properties of trapezoids, find unknown angle measures (4). Using properties of trapezoids and triangles, find unknown angle measures (4). | The tasks covered all aspects of the learning target. Most tasks had a DOK 1 or 2. The last four tasks had a DOK 3. This was consistent with the learning target.
Target provided to students: compare quadrilaterals and triangles. Actual target: Identify various types of quadrilaterals and triangles and describe their properties. | Class Activity: Compare quadrilaterals and triangles (2/29/12) | Describe similarities and differences between different kinds of quadrilaterals (5 similarities and differences). Identify the properties of quadrilaterals, including number of sides, properties of sides, number of angles, and properties of angles (5 of each). | The tasks covered all aspects of the learning target. They had a DOK level 1.
Identify various types of quadrilaterals and triangles and describe and use their properties. | Gap Attack | Various tasks taken from the assignments above. Students selected on which "topic" to focus. | The tasks covered all aspects of the learning target and had DOK ranging from 1 to 3. This was consistent with the learning targets.
Identify various types of quadrilaterals and triangles and describe and use their properties. | Homework: Chapter Review (2/29/12) | Matching triangle and quadrilateral properties to types of triangles and quadrilaterals (5). Use properties of triangles to find unknown angle measures for triangles (4). Classify each triangle in two ways (8). Measure triangle sides (3). Complete triangle inequalities (3). Identify and name rhombuses (2). Using properties of triangles and quadrilaterals, find angle measures (6). | The tasks covered all aspects of the learning targets for the unit, most with a DOK 1 or 2. The final six tasks had a DOK 3. This was consistent with the learning target.
Identify various types of quadrilaterals and triangles and describe and use their properties. | Test: Properties of Triangles and Four Sided Figures (3/1/12) | Using properties of triangles, find unknown angle measures, multiple choice (4). Use triangle inequalities to determine which side lengths are possible, multiple choice (1). Use properties of triangles to find an unknown angle measure (1). Use properties of triangles to compare two angle measures (1). Determine the difference between two angle measures (1). Using properties of triangles, find unknown angle measures (1). Using properties of triangles and parallelograms, find an unknown angle measure (1). Using properties of triangles and rhombuses, find two unknown angle measures (2). Using properties of triangles, find the sum of four angle measures, show work (1). Using properties of trapezoids and rhombuses, find a missing angle measure, show work (1). | These tasks covered all aspects of the learning targets that were the focus of this unit. Several tasks asked students to combine their understanding of properties of triangles and properties of different types of quadrilaterals to solve a single problem, generalizing their understanding from prior problems to apply it to a new context (DOK 4). This involved greater cognitive complexity than the big ideas for the unit as explained by the teacher.
No learning target shared with students. The teacher talked about students learning meta cognitive strategies and self assessing. | Math Self Assessment | Students responded to prompts which asked them to predict which types of problems they did well or poorly on, explain why they expected certain results, compare their actual results to their predictions, explain how their learning tactics related to their results, and establish goals for how they will change their learning tactics in the future. | The tasks required students to self assess and use several meta cognitive strategies.

As presented in Table 5.4, and based on my inferences described above, the majority of the tasks that students completed that were associated with the unit learning content were aligned with an extended version of the major unit learning goal (extended to include using the properties of geometric figures) for each geometric figure introduced. Many of the tasks students completed were more cognitively complex than the learning targets that Kari communicated to them. Several of the tasks on the end of unit test required an even greater level of cognitive complexity from the students than the teacher had explained (DOK 4). It is important to note that Kari adjusted her grading of the test to eliminate these tasks from the students' over all score. She explained to the students that these tasks went beyond what she had expected them to be able to do by the end of the unit. Even with these DOK 4 tasks removed from students' scores, a majority of her students performed less well on the test than Kari had expected.
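As a brief illustration of method, the DOK comparison underlying this alignment analysis is, at bottom, a tally of task complexity against the complexity of the stated target. The sketch below shows that tally in general-purpose code; the function name, data, and DOK values are hypothetical placeholders loosely modeled on the equilateral triangles homework, not the study's coding records.

```python
# Minimal sketch (hypothetical data) of the DOK comparison used in the alignment analysis:
# each task is tagged with a Webb Depth of Knowledge level (1-4) and compared
# with the DOK range of the stated learning target.
from collections import Counter

def summarize_alignment(target_dok, task_doks):
    """Count tasks falling below, within, or above the target's DOK range."""
    lo, hi = min(target_dok), max(target_dok)
    counts = Counter()
    for dok in task_doks:
        if dok < lo:
            counts["below target DOK"] += 1
        elif dok > hi:
            counts["above target DOK"] += 1
        else:
            counts["within target DOK"] += 1
    return dict(counts)

# Placeholder example: a target spanning DOK 1-2, with most tasks at DOK 1-2 and a few at DOK 3.
print(summarize_alignment(target_dok=(1, 2), task_doks=[1, 1, 2, 2, 2, 3, 3]))
# -> {'within target DOK': 5, 'above target DOK': 2}
```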

PAGE 273

Analyzing and interpreting data about student learning. Both Kari and her students engaged in analyzing and interpreting student learning data as part of the formative assessment episodes occurring in this classroom. First, I describe the formative assessment episodes that involved students analyzing and interpreting their own learning data. Then I describe the formative assessment episodes for which Kari was the one analyzing and interpreting student learning data.

Student analysis and interpretation of learning data. A little over 22% of the formative assessment episodes occurring in Kari's classroom involved students interpreting the data about their own learning (n=42 episodes). In all but one of these instances, students also analyzed their own data, although in three quarters of these instances students' analysis was in addition to the analysis that the teacher had already done. Students both analyzed and interpreted data about their own learning each day when they engaged in "progress monitoring" based on their homework assignments. First, students graded selected items from their own homework assignments (based on Kari reading the correct answer). Then they used their unit progress monitoring sheets to assign a rating (interpreting the data) regarding how well they did on the homework and to write notes to themselves regarding the aspects of the assignment on which they needed additional work. Students also both analyzed and interpreted data about their own learning the day before the end of unit test when they used the notes they had captured on their
progress monitoring sheets about their learning progress throughout the unit to determine to which Gap Attack center to go. Finally, students both analyzed and interpreted data about their own learning after they had completed a number of problems, and checked their own work on those problems, as part of the Gap Attack activity. Then they interpreted that information to provide additional notes to themselves on the back of their progress monitoring sheets about their learning. Kari prompted them with these questions to interpret these data: What did you figure out? What did you work on? What was an "ah ha" moment that you had while you were doing your Gap Attack today?

The day after the end of unit test, students added to Kari's initial analysis and then interpreted their own learning data using their end of unit test results. This was when Kari asked students to complete a Math Self Assessment form (see Appendix E) and engage in a full class dialogue about that self assessment. Students used the end of unit tests that Kari had already scored. Her analysis in this case involved identifying which tasks on the students' tests were incorrect or partially incorrect, and providing an overall rating. Then students used an Answer Analysis sheet (see Appendix E) to calculate a total number of items correct for each of the learning topics addressed by the assessment, creating an additional set of scores for themselves on the end of unit test (analysis). Then they summarized, in writing, on which topics they did well, on which topics they did poorly, and what they noticed about their results (interpretation). Finally, they connected their results to their learning tactics throughout the unit by responding to this prompt, "The things I could have done to improve my
understanding and my final score," which was also interpretation. In addition to providing a written record of this analysis and interpretation, a number of students shared their interpretation orally with the class.

The one formative assessment episode in which Kari analyzed the data and the students interpreted the data happened as students were correcting their tests. Kari graded students' tests and activated students as resources for one another, providing the conditions under which students could consider being a resource based on their test results. Then the students interpreted whether they were prepared to help other students and/or selected whom among their peers they would seek out as a resource.

Teacher analysis and interpretation of learning data. The majority of formative assessment episodes occurring in Kari's classroom (78%) involved Kari analyzing and interpreting student learning data. The vast majority of formative assessment episodes that involved Kari analyzing and interpreting student learning data were based upon data collected informally (91%). In all of these cases, the data use immediately followed the data collection and no explicit evidence was available regarding how Kari analyzed and interpreted this learning data. Her analysis and interpretation of student learning data was "on the fly." This represented 71% of all of the formative assessment episodes. The remaining formative assessment episodes for which Kari was the one analyzing and interpreting student learning data involved formal data collection through student work products. This represented only 7% of all of the formative assessment episodes. The products included the following: The Great Math Race daily
computational practice worksheets, a few of the students' homework assignments, the end of unit test, and the students' progress monitoring sheets for the unit. How Kari analyzed and interpreted the student learning data varied based on the data source, or the type of student work product. Table 5.5 summarizes Kari's approach to analysis and interpretation of student learning data for each type of student work product.

Table 5.5
Analysis and Interpretation Approaches by Work Product

Work Product: Homework Assignments
Analysis: Identify correct and incorrect responses. Assign an overall rating of 1 to 4.
Interpretation: Identify patterns in students' incorrect responses.

Work Product: Computational Worksheets
Analysis: Identify correct and incorrect responses.
Interpretation: If any student responses were incorrect, students had not mastered subtraction by that number (1 to 12).

Work Product: End of Unit Test
Analysis: Identify correct and incorrect responses. Assign an overall rating of 1 to 4.
Interpretation: Identify patterns in missed items across all student tests.

Work Product: Progress Monitoring Sheets
Analysis: Qualitatively review the completeness and quality of the student's responses.
Interpretation: Review for completeness of response and compare to her review of the student's work about which he/she was progress monitoring.

For student work products associated with the unit learning content, Kari's analysis approach included identifying correct and incorrect responses. For homework assignments and the end of unit test she also assigned an overall score, although the homework scores were not used for student grades. For the computational practice sheets, Kari's analysis approach also included identifying incorrect responses, but her interpretation was that if students missed a single problem, they had to do the tasks
again. Finally, for work products related to students learning to self assess and developing metacognitive strategies, Kari's analysis was more qualitative, and her interpretation concerned the degree to which students' self assessment agreed with her perception of their current level of achievement.

The scale of use of student learning data. I classified the relationship between when data were collected and when and how they were used as different levels or "scales" of formative decisions. Identifying patterns in the levels of formative decisions is one way I characterized several components of a formative assessment episode at the same time.

Table 5.6
Levels of Uses of Student Learning Information by Types of Uses

Type of Use (In the Moment / Next Class Session / Next Unit):
Activating Students as Resources for One Another: 0.0% / 100.0% / 0.0%
Adjusting Instruction: 57.1% / 42.9% / 0.0%
Full Class Questioning/Scaffolding: 100.0% / 0.0% / 0.0%
Oral Feedback (to individuals or groups): 87.5% / 12.5% / 0.0%
Oral Feedback to Class: 40.0% / 60.0% / 0.0%
Questioning/Scaffolding: 100.0% / 0.0% / 0.0%
Student Self Assessment: 32.3% / 54.8% / 12.9%
Student Adjusting Learning Tactics: 0.0% / 10.0% / 90.0%
Written Feedback: 0.0% / 0.0% / 100.0%
Grand Total: 75.3% / 17.4% / 7.4%

In Kari's classroom, a little over three fourths of the formative assessment episodes occurred in the moment or during the learning activity, meaning she collected and used data immediately. A little over 17% of the formative assessment episodes involved data collected on one day and used on a subsequent day during the unit.
Finally, a little over 7% of the formative assessment episodes involved data collected during the current unit for uses that would occur in the next unit.

Patterns were also evident in the relationship between the type of use, the data collection method, and the level of use of student learning data. Table 5.6 includes the frequency of the level of use of student learning data by the type of use occurring in Kari's classroom during the focus instructional unit. Over half of the formative assessment episodes occurring in Kari's classroom involved Kari engaging students (as a full class or as individuals/small groups) in questioning about their learning. She based all of this type of formative use (100%) on student learning data collected informally, and all of it occurred in the moment. The vast majority (88%) of formative assessment episodes that involved Kari providing oral feedback to individual students used student learning data collected informally and happened in the moment. Finally, 60% of the formative assessment episodes that involved Kari using data collected informally to make adjustments to instruction occurred in the moment, or during the current learning activity. About 40% of the time when Kari made adjustments to instructional activity, it was for the next or a subsequent class session and used data collected formally. Most of these were Kari changing the computational practice worksheets students received based on their prior performance. About 60% of the time when she provided oral feedback to the class, it was about student learning data collected formally in a prior class session during the current unit. There was only one instance of Kari activating students as resources for one another, but this utilized formal data collected during a prior class session. There
was only one instance of Kari providing written feedback on student work in a way that students could use it formatively, and that use was for a subsequent unit.

Patterns were also evident in the relationship between the levels of formative uses of student learning data and who was making use of the data – Kari or her students. Table 5.7 summarizes the levels of uses of student learning data by who was using the data (Kari or her students).

Table 5.7
Levels of Use by Who Was Using

Scale of Use (Student / Teacher / Total):
In the Moment: 5.2% / 70.0% / 75.3%
Subsequent Class Session: 10.0% / 7.4% / 17.4%
Next Unit: 6.8% / 0.5% / 7.4%
Column Totals: 22.11% / 77.89% / 100.00%

In Kari's classroom, the vast majority of formative assessment episodes (70%) involved Kari making decisions in the moment based on data about her students' learning. A little over 7% of all of the formative assessment episodes involved Kari using student learning data during a class session within the current unit, but subsequent to the session when she collected the learning data. While Kari explained how she might teach this same unit differently in the future, she did not base her stated reasons for those changes on student learning data. This pattern was reversed when students were the ones using their student learning data. Less than one fourth of the formative uses students made of student learning data occurred in the moment. Half of students' formative uses of their learning data occurred in a subsequent class session and about one fourth in planning for a subsequent unit.
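To make the kind of tabulation summarized in Tables 5.6 and 5.7 concrete, the sketch below shows one way coded formative assessment episodes could be cross-tabulated by type of use, level of use, and data user. The episode records, field names, and category labels here are illustrative assumptions for demonstration only; this is not the study's actual data file or analysis procedure.

```python
# A minimal sketch, assuming each coded episode records who used the data,
# the type of use, and the level ("scale") at which the data were used.
from collections import Counter, defaultdict

episodes = [
    # Hypothetical coded records, not the study's data.
    {"user": "teacher", "use_type": "full class questioning", "level": "in the moment"},
    {"user": "teacher", "use_type": "adjusting instruction", "level": "next class session"},
    {"user": "student", "use_type": "student self assessment", "level": "next class session"},
    {"user": "student", "use_type": "adjusting learning tactics", "level": "next unit"},
]

def crosstab_percentages(records, row_key, col_key, within_row=False):
    """Cross-tabulate records by (row, column) category. With within_row=True,
    percentages sum to 100% within each row (as in Table 5.6); otherwise they
    are percentages of all records (as in Table 5.7)."""
    counts = defaultdict(Counter)
    for record in records:
        counts[record[row_key]][record[col_key]] += 1
    table = {}
    for row, cols in counts.items():
        denominator = sum(cols.values()) if within_row else len(records)
        table[row] = {col: 100.0 * n / denominator for col, n in cols.items()}
    return table

# Level of use by type of use (cf. Table 5.6) and by data user (cf. Table 5.7).
print(crosstab_percentages(episodes, "use_type", "level", within_row=True))
print(crosstab_percentages(episodes, "user", "level"))
```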
The above description of the levels of uses made of student learning data in Kari's classroom provides evidence in support of the proposition that "teacher use of information about student learning to determine what to do next occurs at different levels, from 'in the moment' adjustments to instruction or learning tactics, to shifts in the next activity, to changes in the unit as a whole, or to changes the next time the unit is taught." It was possible to discern the different levels of Kari's use of student learning information to determine what to do next. It was also possible to discern the different levels of students' uses of their own learning information to determine what to do next. This distinction of who was making use of the data extends this proposition, which as stated only considered teacher use.

Summary of the evidence related to formative assessment episodes. Kari frequently engaged in formative assessment as part of each class session. The multiple formative assessment episodes occurring in Kari's classroom provided opportunities for me to test my definition and approach to identifying formative assessment episodes. They also provided multiple replications to support my identification and description of the critical attributes of each element of a formative assessment episode in an actual K-12 classroom. My definition of a formative assessment episode and my approach to identifying formative assessment episodes in context resulted in identification of 190 formative assessment episodes across the 11 class sessions of the observed instructional unit in Kari's classroom, or close to 17 episodes per class session. The evidence from Kari's classroom supports the usefulness of my definition and approach to identification.
However, the total number of formative assessment episodes doesn't tell the full story. Although formative assessment occurred during every class session, the episodes were not evenly distributed across the class sessions. Both the volume and the types of data use evident in Kari's classroom varied significantly between typical class sessions and the two class sessions before and after the day students took the end of unit test. Formative assessment episodes focused at the small group or individual student level occurred more frequently on the days before and after the end of unit test. This variation made it possible for me to explicitly consider the differences in Kari's practice on typical days and the two days at the end of the unit. In addition, both Kari and her students used student learning data formatively; for nearly a quarter of the formative assessment episodes her students were the data users. This made it possible for me to consider how formative assessment episodes varied based on whether the data user was the teacher or her students.

Because of how I identified formative assessment episodes, three of the components of my definition were present in every identified episode in Kari's classroom – a formative data use, data collection, and an associated learning target. However, Kari only clarified the learning targets with her students for about two thirds of the formative assessment episodes. When the formative assessment episode focused on a learning target that was not part of the current instructional unit, Kari did not point out or clarify the learning target with her students. This may have been because she did not expect her students to self assess in relationship to learning targets outside of the current unit content. Data analysis and interpretation was not observable for all of the formative assessment
episodes, and how student learning data was analyzed and interpreted as part of formative assessment episodes varied based on who (Kari or her students) used the data. Most formative assessment episodes (78%) involved Kari using student learning data, and for the vast majority of those episodes (close to 91%) her analysis and interpretation was on the fly. So while the results of Kari's analysis and interpretation were evident, I inferred her analysis and interpretation. For the vast majority of the episodes in which her students were the ones using their own learning data, students' interpretation of their learning data was observable and captured in writing. Students' analysis was only observable about half of the time.

I additionally considered the attributes of each element of the 190 formative assessment episodes occurring in Kari's classroom, which provided further evidence related to the critical attributes of formative assessment practice in context. This included data use, learning targets (and clarification), data collection, and data analysis and interpretation. In what follows, I highlight my findings related to the critical attributes of each of these formative assessment episode elements.

I identified nine different types of formative data use among the formative assessment episodes in Kari's classroom. By far the most frequent type of formative data use in which Kari engaged during typical class sessions was formative questioning with the full class (over half of the episodes for which Kari was the data user included this type of data use). On the days before and after the end of unit test, Kari interacted more frequently with individual students and small groups of students, providing oral feedback and engaging students in formative questioning. At least once during each
class session, Kari provided oral feedback to the whole class. Kari's students self assessed and planned to adjust their own learning tactics every day during the observed unit.

Although I categorized and labeled Kari's most frequent data use as "full class formative questioning," only a portion of the students in the class engaged in the full class questioning, and fewer still provided information about their thinking as part of the full class questioning, usually the same students for each class session. Because of this, Kari's in the moment choices about whether or not to provide additional explanation, seek additional information from students during the formative questioning process, or do additional problems together were based on data from a small portion of the students in the class. This evidence suggests that it may be important to consider how many students participate, and at what level, in formative uses of data that involve the whole class.

Evidence from Kari's classroom required me to go beyond some of the more general descriptions of types of data use found in the research literature to determine what counted as self assessment and written formative feedback. My application of my definition of formative assessment episodes resulted in my categorizing instances of students providing information about their perceptions of their learning and Kari using it to adjust her instructional strategies as adjusting instruction rather than student self assessment. Also, I did not count as written formative feedback instances of Kari providing feedback (even if it was descriptive) on student work that students were unlikely to see and had no opportunity to use. Finally, while Kari frequently engaged students in self assessment and in identifying changes they needed to make to their learning tactics, she only collected her students' written self
assessment once during the observed unit and reported that she did not use it to adjust her instructional practice.

The learning targets that focused formative assessment episodes in Kari's classroom included targets associated with the unit content, learning targets associated with computational practice, and learning targets associated with students self assessing and developing metacognitive strategies. Kari only clarified learning targets associated with the unit content with her students. This suggests that which kinds of learning targets the teacher clarifies with students may be an important attribute of this component of formative assessment practice.

I identified two challenges related to Kari's identification and clarification of learning targets with her students. The first was a discrepancy between the major unit learning goal described by Kari in the pre unit interview and the major unit learning goal that she shared with her students. The former had an additional component which was more cognitively complex than what she shared with the students. The second related to how Kari shared targets associated with daily learning activities – she frequently shared "topics" rather than learning targets, even though both Kari and her students treated these topics as learning targets. This may have resulted in Kari and her students not sharing a common understanding of the targets for their learning and has implications for the accuracy of data collected as part of formative assessment episodes, particularly student self assessment. This evidence suggests that how the teacher shares learning targets with students may be another critical attribute of this component of formative assessment
practice – sharing topics rather than objective statements has the potential to introduce inconsistency in the teacher's and students' understanding of the targets.

The data collection methods used in Kari's classroom as part of formative assessment episodes included observation, questioning, student work products, and student self assessment (which overlapped with the other three). The most frequent data collection method was questioning (an informal method). Overall, 86% of the formative assessment episodes involved informal data collection methods. Kari also frequently collected data informally about formal student products that she later also collected. This evidence supported my second proposition, that a variety of assessment methods are used. It also highlights the importance of explicitly considering the degree to which assessment methods are informal. Finally, this evidence suggests it may be important to consider the "data source" as a separate attribute from the data collection method, since one data source could be involved in multiple data collection methods.

Patterns emerged related to the relationship between data collection methods and types of data use. Kari's two most frequent types of data use as part of formative assessment episodes (formative questioning with the whole class and with individuals or small groups) used data collected through questioning. This evidence suggests it may be important to investigate further the degree to which types of data use dictate data collection methods. Differences were also evident regarding data collection methods based on who (the teacher or her students) used the data. When the students were the users of the data, data was usually collected through products. This suggests that who uses the data
may be another determiner (and a critical attribute) of the data collection method. This also suggests I should reword my definition of formative assessment episodes to include students as data collectors.

How Kari described and clarified learning targets with her students complicated my analysis of the alignment of the tasks used in data collection with the learning targets. While the tasks students completed as part of formative assessment episodes aligned with the learning targets, at times they were more cognitively complex than what Kari communicated to her students, and what students listed on their self assessment sheets. This had the potential to introduce error into students' interpretation of their learning data. Specifically, targets related to identifying properties of geometric figures and targets related to applying properties to solve problems have different levels of cognitive complexity. Because students were only asked to assess their learning about the topic (e.g., equilateral triangles) and not their use of properties to solve problems, it is unclear to which target their self assessment applied. Did they self assess regarding their understanding of the properties of equilateral triangles or their proficiency in using properties to solve problems? Kari did not check the accuracy of students' self assessment until the end of the unit, but expected students to base choices about the additional help they sought on this self assessment. This evidence points to the importance of considering the degree to which the learning target that students use and understand reflects the targets on which tasks focus, especially when students engage in self assessment.
Analysis and interpretation of data in Kari's classroom varied based on the data collection method and who used the data (Kari or her students). For the little over 22% of the formative assessment episodes that involved students analyzing and/or interpreting data about their own learning, interpretation was formal and captured in writing. By contrast, 91% of the formative assessment episodes for which Kari was the user of the data involved her analyzing and interpreting data on the fly, which was not observable. Only 7% of all of the formative assessment episodes involved Kari analyzing and interpreting data that were collected formally (student work products), and how she analyzed and interpreted the data varied based on the type of product collected. This evidence again points to key differences in this component of a formative assessment episode based on who (the teacher or her students) is the user of the data. This evidence also suggests that the attributes of analysis and interpretation depend on the data collection method and type of student product.

Kari's decisions based on student learning data occurred at three different scales or levels. The vast majority of the formative decisions Kari made (75.3%) were regarding what to do in the moment. While the related proposition only addressed teacher use of information to determine what to do next, student use of information to determine what to do next also occurred at different levels in Kari's classroom. Students' formative decisions were spread across in the moment use, use during a subsequent class session, and use for a subsequent unit. This evidence supports my inclusion of "scale" or level of use as a critical attribute of formative assessment practice. It also suggests that my proposition related to the scale of data use should be extended to include students as formative
decision makers and that scale may vary depending on who (the teacher or the students) uses the data formatively. Collectively, the evidence from Kari's classroom supports and identifies extensions to the propositions associated with my first research question. It affirms the usefulness of my definition and approach to identifying formative assessment episodes and supports further investigation of several specific attributes as potentially "critical" to the different components of formative assessment practice.

How the Social Context Influences Formative Assessment Practice

The first proposition for the second research question identifies the components of the "social context" of a classroom. My analyses associated with this proposition included providing a description of what I have called the Unit Learning Context, organized by the key features identified in the proposition. Formative assessment episodes occurred not just within the social context of the instructional unit, but also within the social context of the learning activities that made up the unit. Within a given class session, some aspects of the social context changed several times when the learning activity changed. Thus, the social context of the learning activities created the specific conditions within which formative assessment episodes did or did not occur. I grounded my analysis of the relationship between the classroom social context and formative assessment practices occurring in Kari's classroom in the unit learning context, but focused more explicitly on the social context of the individual unit learning activities occurring during each class session. Therefore, after characterizing the instructional learning context for the unit overall, I describe how the social context of
the classroom "activity" level included conditions within which formative assessment practices did occur. This analysis provided evidence related to the second through fourth propositions.

Unit learning context. The unit learning context set the stage for my investigation of the relationship between the social context of Kari's classroom and the formative assessment practice that occurred within her classroom. My description of the unit learning context in Kari's classroom also responds to the proposition that key features of the classroom learning context are perceptible and can be described within the following categories: the object or focus of the learning activity, the tools (physical and semiotic) that mediate learning, the rules or norms guiding participation, who participates, and how they participate (roles and responsibilities). Next, I describe each component of the unit learning context in detail so that I can later refer to these details in my analysis of the relationship between the social context of the classroom and formative assessment episodes at the learning activity level.

Object. On the first day of the unit, Kari posted the object, or major student learning goal for the unit, on the dry erase board on the right wall of the classroom; it remained there throughout the unit. The major unit learning goal was, "Students can identify different geometric figures and describe their properties." Kari expected students to write this learning goal on the top of the progress monitoring sheet (described below) that they used every day of the unit. In the pre unit interview, Kari described the big idea for
the unit as follows: "The main things that I want them to know are how to name geometric figures, describe their properties, and be able to be flexible with their use of angles and understanding how angles work in those geometric figures as well." Kari's posted major unit learning goal did not include the last part of her explanation of the "big ideas" for the unit – being able to be flexible with their use of angles and understanding how angles work in these geometric figures. This created the possibility for the teacher and student objects for learning activity across the unit to be misaligned.

Tools. The tools used to mediate learning in Kari's classroom included the following: physical use of space, equipment, supplies, resources posted on classroom walls, instructional materials provided by the district/school, teacher created instructional materials, and student math notebooks. Kari used each of these tools during formative assessment episodes. I describe them below in greater detail.

Physical layout and use of space. One notable aspect of the learning context for Kari's classroom was the physical layout of the classroom and how she arranged and used space within the classroom. Kari's classroom layout was somewhat unusual. It was a square with the right back corner cut off, open to the two other 5th grade classrooms in the school. Because of this physical layout, students had to walk through one of the other fifth grade classrooms to get to Kari's classroom. This classroom could not be isolated or closed off from the others. As a result, the noise of activity occurring in the other classrooms (students talking, moving around, etc.) was always audible in Kari's classroom. Kari seated her students at individual desks that were arranged into groups
of 3, with each student facing the left or right walls or towards the front of the room. During math instruction, however, several students (5 or 6) would move from their desks to sit at a table on the right side of the room, primarily interacting with the paraprofessional and other students seated at that table. Frequently, the paraprofessional and the students sitting at the side were engaged in parallel, but somewhat different, activity while Kari was presenting or asking questions of the remaining students in the class, or while other students were asking or answering questions. As a result of the classroom being open to other classrooms, and a separate discussion occurring during math instruction in the room, this classroom was always noisy. This had the potential to influence formative assessment practices. These features of the classroom – the layout and how the physical space was used – also influenced the other tools that were used to mediate learning, the norms, and how students participated in at least some of the learning activity in Kari's classroom.

Equipment. Kari used several pieces of equipment almost every day as part of math instruction, including the following: two dry erase boards (posted on the right side wall and on the front of the room), an LCD projector (mounted on the ceiling, projecting towards the front of the room), a document camera (sitting on a cart kept to the right of the front board, which projected through the LCD projector), and a sound system which included a lapel microphone (worn by the teacher each day) and speakers. Kari used the dry erase board on the right side of the room to post unit learning goals and missing assignments. She used the dry erase board on the front of the room daily to share information (projecting images or writing) and in guiding different learning activities.
The LCD projector and document camera were frequently (almost daily) used to share images from the primary instructional resource. The teacher used a microphone every day to amplify her voice so students could hear her above the ambient noise from adjacent classrooms. This classroom also had a set of hand held dry erase boards, markers, and erasers which students used once during the observed unit.

Supplies. Various supplies used during mathematics were stored in shelves in the back of the room, including blank paper, a stapler, a tape dispenser, construction paper, scissors, rulers, protractors, and markers. Students accessed these supplies as they needed them.

Resources posted on classroom walls. Kari used the walls of her classroom to share information, and as tools to facilitate math instruction and support the social context of the classroom more generally. One resource used daily for math instruction and to support formative assessment episodes was a large laminated poster titled "The Great Math Race," with a track printed on it that had twelve sections (numbered 1 – 12). Small (3" long) laminated cars, containing the initials of each student in the class, were stuck in various sections of this poster. Kari moved each car around the track as the associated student demonstrated he/she was proficient on each of 12 computation practice sheets used in the math computation practice activity described below. Other resources used in clarifying learning targets with students were lists of key vocabulary posted on a bulletin board. Two of the other walls in the classroom included resources related to classroom norms and rules. One of the resources posted on the walls related to student roles. On
the right wall, close to the front, was a bulletin board with the title "Classroom Jobs." This board included 30 little pockets with "jobs" listed on the front of the pockets. Kari placed strips of colored card stock paper with student names (one for each student) in the pockets, indicating which student had which job that week.

District or school provided instructional materials. Kari used a new primary mathematics instructional resource for the first time this school year: Math in Focus. The district adopted this resource to align with the Colorado Academic Standards and the Common Core State Standards. As part of this instructional resource, each student received their own "math workbook" and stored it in their desk. Kari also had a "teacher manual," from which she frequently projected pages (using the document camera) on the front board during the introduction of new material, as students did practice problems in class, etc. Kari copied most of the handouts provided to the students from this instructional resource, including the following: daily homework assignments, handouts used during in class activities, and the pre and post assessment for the unit.

School wide, teachers supplemented the primary math resource with another instructional resource, The Great Math Race, developed by teachers at the school. They used this resource because teachers at the school decided that their students had weaknesses in computation not explicitly addressed in the new primary resource in the upper grades. This resource was the source of half sheets of computational practice problems that students completed at the beginning of three math class sessions each week and that addressed different computational tasks (addition, subtraction,
multiplication, and division), with 12 sheets per topic. The Great Math Race was also the source of a laminated race track (posted on the left wall in the classroom) and little laminated cars (one per student) moved around the track daily to represent student learning progress. During the observed unit, the computational focus was subtraction.

Teacher developed materials. Finally, additional instructional materials provided to students during math instruction were developed by Kari and her 5th grade teammates, including: the Progress Monitoring Sheet (used daily), the Gap Attack center instructions (used the day before the final unit test), the student Math Self Assessment (used the day after the final unit test), and the Assessment Analyzer (also used the day after the final unit test). See Appendix E. Table 5.8 includes a description of these resources. Kari and her team created similar tools for each math instructional unit, but with the content relevant to that unit. Kari and her students used all of these teacher developed materials as part of formative assessment episodes.

Student math notebooks. Kari expected her students to keep a spiral notebook just for mathematics. Occasionally Kari asked students to take out these notebooks and use them to do a problem, capture some notes, etc. My review of 9 different students' math notebooks revealed very little organization to what students wrote in their math notebooks. Rather, most students seemed to use their notebooks as scrap paper.

Table 5.8
Teacher Developed Tools

Tool Name: Progress Monitoring Sheet
Description: The progress monitoring sheet was a single page with landscape oriented printing on both sides. The front side of this sheet included a box with "Math Progress Monitoring for _________" printed in it. Below that was a table with the following columns: Assignment; I don't get this at all!; I've sort of got it; I get it, but I'm still making a few mistakes; I'm an expert; I need help on . . . (be specific!!). The back side of this sheet included "Unit: _______________________," and the following statements with blank lines below each: "By the end of this unit, I should be able to," and "My goal for this unit is". Students completed these blanks at the beginning of the unit with the name of the unit, the unit learning goal, and a goal they had set about their learning activities based on their analysis of their results on the prior unit test. Still on the back side of the sheet, below the unit overview information, was the statement "Here's how I'm doing:". Below that, in smaller print, were "Date" and "Comments" and five more lines below those. Finally, towards the bottom of the back side of the progress monitoring sheet was "GAP Attack Results:" with two lines below that. Students were prompted to make notes in this section after the Gap Attack activity, the day before the end of unit test. Students turned in their progress monitoring sheet at the end of the class session the day before the end of unit test.

Tool Name: Gap Attack (Center Directions)
Description: The Gap Attack center directions included five different sheets. On the front of each sheet was one of the major topics for the unit, and a list of page numbers and task numbers that referred to pages and tasks in the students' workbooks. On the back side of each sheet were the problem numbers and the correct answers for each problem listed on the front. The topics for the observed unit included: Classifying Triangles; Measures of Angles of Triangles; Right, Isosceles, and Equilateral Triangles; Triangle Inequalities; and Parallelograms, Rhombuses, and Trapezoids.

Tool Name: Math Self Assessment
Description: This was a single sheet, with portrait oriented printing on both sides. The front page was titled "Math Self Assessment – Chapter 13 – Properties of Triangles and Other Four Sided Figures." Below the title was the following text: "I was expected to understand the following topics for this unit: Classifying Triangles; Measures of Angles of Triangles; Right, Isosceles, and Equilateral Triangles; Triangle Inequalities; and Parallelograms, Rhombuses, and Trapezoids." Below that there were a series of statements with blank lines under each. These included: Before looking at my test, I think I completely understood these areas; Before looking at my test, I think I struggled in these areas; and The things I did to ensure I understood the concepts for this unit were. At the bottom of this side was one more statement, "I think my overall score on the test would be (circle one)," with the numbers 1, 2, 3, and 4 below it. Students completed this
side of the Math Self Assessment before receiving their graded end of unit tests. The back side of this sheet included this statement at the top, "After I got my test back, I see that I got a (circle one)," with the numbers 1, 2, 3, and 4 below it. Then, below that, were five more statements with blank lines under each one. These included: After reviewing my test, I see that I completely understood these areas; After reviewing my test, I see that I did not completely understand these areas; After reviewing my test, I noticed that; The things I could have done to improve my understanding and my final score are; and What I plan to do so that I improve for the next unit is. Students completed this side of the Math Self Assessment after they received their graded end of unit tests.

Tool Name: Answer Analysis
Description: This was a single sheet with landscape oriented printing on one side. The title of this sheet was "Answer Analysis". Below that, the following text was printed: "Chapter 13 Properties of Triangles and 4 Sided Figures" and then "Instructions – Circle the problem numbers you answered correctly. Add up the total number right, and compare to the total number possible." Below this was a table, with one column per unit topic: Classifying Triangles; Measures of Angles of Triangles; Right, Isosceles, and Equilateral Triangles; Triangle Inequalities; and Parallelograms, Rhombuses, and Trapezoids. For each column, the topic was listed in the first row, and in the rows that followed were numbers of items from the test that pertained to that topic (one item per row). The last row included a "/" and the total number of items on the test pertaining to that topic. Students used this in conjunction with the second page of the Math Self Assessment tool to determine the total number of items per topic that they got correct on their end of unit test.

Rules and norms. The rules and norms in Kari's classroom included explicit rules posted within the classroom and those that were part of the positive behavior support program for the school. They also included school wide norms that weren't explicit and student behavioral norms that were evident during regular classroom activity.

Publicly posted rules. Two sets of written behavioral expectations were posted on the walls in the classroom. These included 1) a laminated poster displayed on the back wall of the classroom (titled Classrooms with PAWS down the side) that included the following rules: raise your hand, do your best work, be a good listener, respect
everyone's space, share & take turns, show kindness, follow directions, use time wisely, use I messages, keep chairs flat on floor, use materials correctly, and keep hands to yourself; and 2) a poster on the front wall (above the teacher's desk) with "STARS" down the side and the following listed: Sit up, Track the speaker, Ask and answer questions like a scholar, Respect those around you, and Smile. The teacher did not refer to these rules during the observed unit, and students didn't refer to them either. It was not clear if these were just posters on the wall or actually served a larger function in the classroom.

Externally imposed rules. The school had a positive behavior support program titled "PAWS," an acronym that stands for Personal Responsibility, Accepting Others, Wise Choice, and Safety. When asked about classroom rules, these were the rules to which students referred. However, it was unclear how these rules translated into student behavior.

External norms. Several norms that were not formally stated, but imposed from external sources and applied to the school as a whole, were evident after only a few days in Kari's classroom. External sources frequently interrupted Kari's class during math instruction. Interruptions included the following: the principal or someone from the main office calling over the intercom to ask for a student or several students to be sent to the office (multiple times a week), someone from the main office calling over the intercom to ask a question (e.g., do you have any absences today), a "Book and a Cookie" activity (all math activity was suspended for 30 minutes one day while students got cookies and were given time to read), and a lock down drill (all math
activity was suspended for 20 minutes while students moved to the floor in the front of the room with the lights out until a security guard came to check the classroom and release them back to normal activity).

Evident student behavioral norms. While they weren't formally stated, a number of student behavioral norms were evident when the teacher presented information or facilitated full class discussion (such as when they were solving a problem as a class). Several unstated norms were also evident related to homework. While the teacher presented information or engaged the class in solving a problem together, the classroom was frequently noisy and students did not always stay seated at their desks/table groups. Some of the noise came from other classrooms, to which this classroom was open, and some came from the paraprofessional and the students sitting at the table with her (these students and the paraprofessional talked while Kari was talking to the whole class and while other students were asking or answering questions). However, some of the noise also came from individual students talking to other students at their tables, sometimes about the task at hand and sometimes about other topics. Students often got up from their seats during activities that involved the teacher presenting to the entire class or facilitating discussion with the entire class to do one of the following: get a facial tissue, sharpen a pencil, get a drink, get blank paper, etc. One student, who typically sat at the table with the paraprofessional during math instruction, frequently left this table to wander around the room, interrupting other students.
When students talked or got out of their seats at inappropriate times (e.g., while the teacher was presenting, or while another student was talking as part of a classroom discussion), Kari asked them to quiet down, or called on an individual student to pay attention or to go back to her/his seat. She also engaged students in an activity at the end of the unit (after the unit test) during which she asked students to evaluate the relationship between their actions (including paying attention in class) and their performance on the end of unit test. No additional or more explicit consequences for these behaviors were evident during the observed instructional unit.

Kari gave students homework assignments for math every day – including the evening after the class session during which students took the end of unit test. Each day, during math time, students corrected some of the tasks on their own homework assignments (with the teacher reading out answers and students marking problems with a red pen). Next, Kari guided students to use information about how they did on their homework assignment to make notes on their progress monitoring sheets (described in greater detail below). Then they turned in their homework assignments in a basket in the back of the room. Kari provided an overall score for each homework assignment, although these scores were not included in students' grades. Students received their graded homework assignments for the week each Friday in a folder that they were required to take home and have a parent/guardian sign. Most days Kari posted assignments that students had not turned in. Kari allowed students to turn in assignments after the day when they were due.
Participation structures. Who participated and how they participated (roles) in learning activity, and the recurring patterns of participation over time within Kari's classroom, can be described at various levels, including: learning activity, class session, unit, and school year. As explained in chapter 3, I defined learning activities as the subparts of a class session distinguished by a change occurring during the class session in one or more of the following: the object (or purpose), the tools used, the rules or norms, and/or who participated and how (student roles and participation structures).

Table 5.9
Learning Activity Participation Structures and Tools

Learning Activity: Timed Computational Practice
Participation Structures: The teacher structured participation. The teacher distributed The Great Math Race worksheets, one at a time, to each student. Each student would get the next worksheet in the 12 worksheet series (with tasks increasing in difficulty) they had not completed with 100% accuracy. That meant that students did not all get the same worksheets each day. Students had 2.5 minutes to complete all of the items on their worksheet. The teacher indicated when it was time to start, and when it was time to stop. Students all sat at their desks and worked individually and silently. Students held up their papers as soon as they finished all of the problems, and the teacher (or the paraprofessional) would pick them up.
Tools: The Great Math Race worksheets

Learning Activity: Checking Homework
Participation Structures: The teacher structured participation and facilitated and managed all discourse. The teacher selected which of the problems from the prior day's homework would be checked (usually a portion of those completed) and would read the answers aloud. Students used a red pen to star correct items and check incorrect items on their homework assignment. Occasionally students called out requests for additional problems to be corrected, and the teacher usually included the additional problem(s) for which the students asked. Students frequently talked to the other students at their "table" (group of three) while the teacher was reading the answers, but it was unclear if this talk was about the homework or something else. Students occasionally asked questions after the teacher read the answer to a problem or called out comments about their performance on the problem. In response to student questions or comments, sometimes the teacher probed deeper, eliciting additional information about the students' understanding. During this activity, the paraprofessional established the participation structure for students sitting at her table. They primarily interacted with the paraprofessional and the other students at their table. Both students at this table and the paraprofessional frequently talked while the teacher was talking. The paraprofessional sometimes used a small hand held dry erase board or the larger dry erase board behind their table to draw/demonstrate points to the students at this table.
Tools: Student Homework Assignments; Answer Key from the Teacher Manual of the Primary Instructional Resource

Learning Activity: Solving Problems as a Class
Participation Structures: The teacher facilitated and managed student participation and discourse during this activity. Students responded to teacher questions or prompts. Some students did not participate. After homework had been checked, the teacher would ask if students would like to see any specific problem from the homework worked out in greater detail on the board. When prompted, usually one or two students asked for a specific problem to be demonstrated. Sometimes the teacher suggested a problem. Then the teacher called on a student (often the one who asked for the problem) to begin explaining how it could be solved. She would ask questions to elicit information from the student about each step in solving the problem. The student would orally respond to the teacher's prompts. Then the teacher would write the correct aspects of the student's response on the board. If what the student suggested was incorrect, the teacher would either ask the student additional probing questions, or ask another student what he/she did at that point in solving the problem. The teacher did not write incorrect steps on the board.
Tools: Homework Assignments; Dry Erase Board

Learning Activity: Progress Monitoring
Participation Structures: The teacher established the participation structure for this activity, supported by a tool – the Progress Monitoring Sheet (one per student for the entire unit). After students partially corrected their homework assignments, the teacher orally prompted students to do their progress monitoring. She would write the topic of that day's homework on the board. She would occasionally provide additional prompts about things they should consider, and/or examples of things they might write. When prompted, most students would take out their progress monitoring sheets, write the topic of that day's homework on the next available row, and make notes about how they did on the homework. Most would also write a phrase or two about what they needed to do as a result. Then they would put the sheets back in their desks. Students used these progress monitoring sheets to select which center to go to during the Gap Attack activity (described below) at the end of the unit. Students turned in their progress monitoring sheets the day before the end of unit test.
Tools: Progress Monitoring Sheet

Learning Activity: Introducing new content
Participation Structures: The teacher structured participation during this activity. At the beginning of this activity, students frequently moved to other tables if they were alone at their table (for example, when the other students who typically sat with them had moved to the table with the paraprofessional, or if the other students were absent that day). The introduction of new content followed one of two patterns. Either the teacher began by eliciting information from the class about their background knowledge (sometimes projecting pages from the teacher's manual for the primary instructional resource on the board using the document camera) and then initiating an exploratory hands on activity, or the teacher would just initiate the exploratory hands on activity. For example, one day she handed out a sheet of paper with two triangles printed on it, prompted the students to cut them out, and then guided students to use the shapes to explore the relationships between different angles and/or sides of the shapes. Students were encouraged to talk with other students at their table during their exploration. This talk was not structured by questions or other discourse protocols. Students did talk with one another, although it was unclear how much of their talk was about the mathematics. The teacher usually guided them through several steps of the exploration (e.g., "compare different angles, see what you notice"). Either at discrete steps in this exploration, or after students had explored for 5 minutes or more, the teacher would initiate a full class dialogue about what they were discovering. The participation in this dialogue followed the same pattern as solving homework problems as a class.
Tools: Student workbooks and Teacher's Manual, both from the Primary Instructional Resource; copies of sheets from primary instructional materials; Dry Erase Board; Document Camera/LCD Projector; occasionally individual student dry erase boards, markers, and erasers

Learning Activity: In Class Practice Problems
Participation Structures: The teacher structured student participation in the activity. This activity usually followed the introduction of new content. The teacher would provide problems for the students to work on, usually by projecting something from the student workbook on the board, or sometimes by writing problems on the board or providing a worksheet. Kari would then direct students to start working on the problems individually or with their table group, using pages from their math notebooks, a blank sheet of paper, or sometimes an individual dry erase board. After a few minutes, the teacher would stop the students and proceed with the same process as solving problems as a class from the homework assignments. The teacher would call on a student to begin explaining how he or she had completed the first task. She would ask questions to elicit information from the student about each step in completing the task. The student would orally respond to the teacher's prompts and the teacher would write the student response on the board. If what the student suggested was incorrect, the teacher would ask the student additional probing questions, or ask another student what he/she did at that point in solving the problem. The teacher did not write incorrect steps on the board.
Tools: Student workbooks and Teacher's Manual, both from the Primary Instructional Resource; copies of sheets from primary instructional materials; Dry Erase Board; Document Camera/LCD Projector; occasionally individual student dry erase boards, markers, and erasers

Learning Activity: Gap Attack
Participation Structures: The teacher established the framework for student participation. Kari gave students choices to select which tasks they attempted, and students structured their own discourse with their peers. The teacher set up 5 centers in the room, each reflecting a major topic that was part of the current instructional unit. Each center had a handout with a list of problems related to that topic from the student workbooks on one side, and answers to the problems on the other side. She told students to review their progress monitoring sheets from the entire unit, and their graded homework assignments, to determine which centers they would visit – the topics they needed to review. Students selected the center to visit and chose to work individually or collaboratively at each center. Once students selected their first center, the teacher and the paraprofessional moved around the room helping students, as requested, at different centers. At the end of this activity, students made additional notes on their progress monitoring sheets and turned in the progress monitoring sheets and notes from their Gap Attack problems. Students decided if they would take home any additional problems to prepare for the unit test.
Tools: Gap Attack Center Directions; Student Workbooks; Students' individual Progress Monitoring Sheets; Students' graded homework assignments from that week
294 Learning Activity Participation Structures Tools different centers. At the end of this activity, students made additional notes on their progress monitoring sheets and turned in the progress monitoring sheets and notes from their Gap Attack problems. Students decided if they would take home any additional problems to prepare for the unit test. assignments from that week Unit Test The teacher structured student participation. Kari had rearranged student desks so that students sat in rows for this class session. The teacher provided some introduction regarding how many problems, and what was being assessed. Some students asked questions during this introduction. The teacher handed out blank tests. Students worked individually and silently to complete the items on the test. During the test, individual students raised their hands and the teacher came to their desk to answer questions. During the unit test, the paraprofessional also moved around the room assisting individual students at their request. It is unclear what assistance the teacher or the paraprofessional provided during the assessment, or if it varied by student. When they completed the test, the students turned them in at the back of the room. As they turned in their tests, the teacher prompted some students to make sure they had checked all of their answers. Unit Test (from primary instructional resource) Reviewing Unit Test Results, Part 1 (predicting individual results) The teacher established the framework for participation and managed discourse during this activity. A tool (Math Self Assessment) was used to guide student participation in this activity. The front side of the handout included a series of questions for students to answer about how they thought they did on the test and what they did to prepare for the test, before they receive their graded tests. Some students independently answered the questions on the handout without further prompting from the teacher. Some did not. The teacher verbally prompted students to answer each question. After a few minutes she would ask students to share with the class how they had answered the question. Two or three students shared what they wrote. Then she asked the rest of the students to show by raising their hands if they had similar answers to those shared verbally. Math Self Assessment form Graded End of Unit Tests Reviewing Unit Test Results, Part 2 (analyzing, interpreting and using The teacher established the framework for participation and managed discourse during this activity. Two handouts (tools) were used to guide student participation in this activity. The first was the back side of the handout used in part 1. The second side of this handout included questions about how the students actually performed on the test (by Math Self Assessment form Answer Analysis form

PAGE 305

295 Learning Activity Participation Structures Tools individual results) objective tested), why, and what they would do differently in the future. The questions about what they will do about their performance did not include efforts to re learn the material they did not demonstrate mastery of on the test. The second handout was a map of individual test items to the objectives assessed on the test. Some students use this to help them answer the questions about how they did by each objective. Some students independently answered the part 2 self assessment questions. Others waited to be guided through them. The teacher verbally prompted students to answer each question, and then asked some to share with the class how they have answered. Two or three students shared their answers with the class. Then the teacher asked the rest of the students to show by raising their hands if they had similar answers. Graded End of Unit Tests Correcting Unit Test Items The teacher established the participation framework for this activity, but students structured their discourse with peers. The teacher established the following expectations for this activity, 1) students would attempt to correct all of the problems for which they did not get full credit, 2) students who got problems correct or only had a small number to correct would serve as resources for other students, and 3) students would get partial credit for each problem they corrected if they explained what they did wrong the first time and showed their work on the corrected problems. Kari gave students who served as resources for other students additional guidance about not just giving students the correct answers, but really helping them to understand how to do the problems. Then students self organized into small groups (2 4) to work on their corrections. During this activity, the teacher and the paraprofessional moved around the room helping students as requested. Students also sought out the help of other students (sometimes from different groups) who had gotten problems correct that they had gotten incorrect. StudentsÂ’ graded end of Unit Tests Learning activity participation patterns. The various math learning activities evident during class sessions each had different student to student and teacher to student(s) patterns of discourse and participation. These learning activity participation patterns were similar most days, but varied substantially on the day before, the day of, and the day after the end of unit test. I summarize the student to student and teacher

to student(s) discourse patterns and participation structures for each type of learning activity in Table 5.9.

Typical math class sessions. Math instruction was approximately 1 hour each day. The student learning activity followed a common pattern, including the following activities in order:
1. Timed Computational Practice, three days a week (3 minutes).
2. Checking Homework (5-10 minutes, depending on the number of problems in the homework).
3. Solving problems as a class from homework (0-10 minutes, determined by how many problems students identified or the teacher determined the class needed to see).
4. Student progress monitoring (2-3 minutes).
5. Introducing new content (5-15 minutes).
6. In Class Practice Problems (30-40 minutes).
7. Introducing/preparing students for homework (0-2 minutes, depending on how much direction the teacher provided).

Atypical math class sessions. The days that did not follow the typical pattern included: 1) the day before the final unit test, when students participated in a "Gap Attack"; 2) the day of the final unit test; and 3) the day after the final unit test. The day before the final unit test included the following: timed computational practice, checking homework, solving problems as a class from homework, and student progress monitoring. Then, instead of introducing new content, students engaged in a
"Gap Attack." They used their progress monitoring sheets and graded homework assignments to select one of the Gap Attack centers and worked with other students to complete the identified tasks. On the day of the unit test, students brought any questions they had from their unit review homework, and solved requested problems as a class. The remainder of the class session was devoted to completing the unit test. The day after the unit test, Kari provided oral feedback to the class about the overall class performance on the test, and then engaged students in three activities: Reviewing Unit Test Results, Part 1 (predicting individual results); Reviewing Unit Test Results, Part 2 (analyzing, interpreting and using individual results); and Correcting Unit Test Items.

Unit level activity structure. The unit level activity structure also followed a general pattern. On the first day of the unit, the teacher introduced the unit and administered a pre-assessment. Sometimes this was done as a homework assignment the day before. Then a series of typical days followed until the day before the end of unit test. On this day, students did a Gap Attack instead of Kari introducing new content. The next day they took the end of unit test. Then on the final day of the unit they reviewed the end of unit test results and corrected items. On that day, Kari also handed out the progress monitoring sheet for the next unit, and sometimes assigned the pre-unit assessment as homework.

External planning influences. There were three 5th grade classrooms in this school. All three teachers worked together to plan the sequence of units across the
year, the sequence of learning activity for each unit, and what materials they would use from the instructional resources on each day. Because this district was in the process of making adjustments to its curriculum based on new state standards, this cross classroom collaboration was more intense this year than in previous years and involved adjusting for learning topics that had been moved to different grade levels in the standards. During the pre-unit interview, Kari explained it this way. "We have collaboration time every Thursday in our building ... most of our collaboration time this year has been spent on math planning and looking at all those documents and we get new information from our math coach occasionally. You know, looking at the new curriculum materials, the Math in Focus stuff, and trying to figure out how that all fits in. So as we get ready to move on to you know, we gave ourselves a whole year plan but then as we hit different units, we stop and we relook at all that information. If it's changed since the last time we faced it then you know we kind of set out a unit plan all three of us together on how we're gonna [sic] tackle the material, how far we need to go back. 'Cause in some units we've had to go back to third grade materials in order to capture information that we feel our students don't have the background on, that they need. Like some of the fraction stuff, we went all the way back to third grade Math in Focus materials so that they could function at the level we needed them to function at the fifth grade."

A daily calendar of math topics common to all three 5th grade classes was posted on a wall common to all three classrooms. Kari did not deviate from the planned focus of each class session during the observed unit, except for once when she added an activity and extended the math class session.

Additional student roles. Each week, students had specific "jobs" that were assigned to them. Kari tracked these jobs on the jobs poster on the right wall of the classroom behind the teacher's desk. Many of these jobs did not involve activity that would have been evident during the math instructional period. However, even though
there were jobs related to helping the teacher pass out materials, frequently the teacher (or the teacher with help from the paraprofessional) passed out homework or worksheets used during classroom activity, and/or individual student tools like rulers and protractors, without help from students. Sometimes Kari did ask the students assigned to be "teacher's helpers" to pass out materials, but students did not offer to pass out materials; rather, they waited for the teacher to ask.

Student engagement. Almost all of the students in the class engaged in the in class individual activities – completing daily computational practice, completing the unit test, etc. Most, but not all, students completed their homework each evening. Student engagement varied during class discussion (working homework problems), when new content was being introduced, or during in class practice problems. Often during these typical daily activities, roughly one fourth of the class did other things – played with their rulers/protractors, talked with other students, etc. The last three days of the unit were exceptions in terms of student engagement in group activities. Most students engaged deeply in the "Gap Attack," and most engaged in working with peers to correct their end of unit tests. Some students focused so intensively on the activity that they worked through the first part of recess.

Evidence related to the propositions. The above description of the unit learning context for Kari's classroom provides evidence in support of the first proposition. I described the social learning context of Kari's classroom (at the unit level) using my pre-determined categories of the
components of the social learning context. I explore the usefulness of these descriptions below.

How the learning activity context influences formative assessment practice. What was the relationship between the social context of Kari's classroom (e.g., object, tools, participation structures, and rules/norms) and the formative assessment episodes that occurred during the focus instructional unit? My analysis of the relationship between the classroom social context and formative assessment episodes occurring in Kari's classroom focused on the individual learning activity context. In Kari's classroom, "typical" learning activities that recurred across more than one class session included the following: timed computational practice, checking homework, solving homework problems as a class, student progress monitoring, introducing new content, and in class practice problems. Learning activities that only occurred once during the instructional unit included the following: "Gap Attack," unit test, reviewing unit test results (in 2 parts), and correcting unit test items. I identified formative assessment episodes during all of these learning activities except the end of unit test. Unfortunately, if any formative assessment episodes occurred during the end of unit test, I did not identify them due to problems with the audio recording of that class session.

Table 5.10 includes two different metrics indicating how frequently formative assessment episodes occurred during each type of learning activity. First, I provide the average number of formative assessment episodes occurring during each instance of each type of learning activity. For learning activities that only occurred once during the instructional unit, this is the total number of formative assessment episodes occurring during that learning activity. For learning activities occurring more than once during the unit, it is the total number of formative assessment episodes occurring during that learning activity divided by the number of times that type of learning activity occurred during the unit. In the second column, I provide the percent of formative assessment episodes, out of all of the formative assessment episodes, that occurred during that type of learning activity.
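Stated compactly, the two metrics reported in Table 5.10 can be written as follows (the symbols E_t, n_t, and E are shorthand introduced here only for illustration; they are not notation used elsewhere in this study):

$$
\text{Avg. FAEs per instance of activity type } t = \frac{E_t}{n_t},
\qquad
\text{Percent of all FAEs for type } t = \frac{E_t}{E} \times 100,
$$

where $E_t$ is the total number of formative assessment episodes identified across all instances of learning activity type $t$ during the unit, $n_t$ is the number of times that activity type occurred during the unit, and $E$ is the total number of formative assessment episodes identified across the entire unit.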

Table 5.10
Formative Assessment Episode Frequency by Learning Activity

Type of Learning Activity | Avg. # of FAEs per learning activity instance | Percent of all FAEs during learning activity
Timed computational practice | 1.2 | 3.2
Checking homework | 1.9 | 6.8
Solving homework problems as a class | 4.5 | 9.5
Student progress monitoring | 1.4 | 5.8
Introducing new content | 3.8 | 12.1
In class practice problems | 9.7 | 35.8
"Gap Attack" | 13.0 | 6.8
Reviewing unit test results | 31.0 | 16.3
Correcting items from the unit test | 7.0 | 3.7

Different patterns emerge related to the frequency of formative assessment episodes occurring during different types of learning activities in Kari's classroom. At the high end, the two activities during which the most formative assessment episodes occurred were reviewing test results (which happened on the day after the end of unit test) and the "Gap Attack" (which happened the day before the end of unit test). The
learning activity during which the largest total number and percent of formative assessment episodes occurred (35.8%) was in class practice problems, partially because it occurred more than once during the unit. It is important to note that the two learning activities with both the fewest average number of formative assessment episodes and a smaller percent of all of the formative assessment episodes – student progress monitoring and timed computational practice – involved formative assessment practice that had become institutionalized, or was just part of the typical daily routine during math class sessions in this classroom.

In what follows, I provide additional analysis regarding the relationship between the social learning context of this classroom, at the learning activity level, and the identified formative assessment episodes, guided by the following additional propositions:
- The classroom social learning context influences how data about learning are used by teachers and students.
- The features of cultural tools used during formative assessment episodes, and how they are used, illuminate critical features of formative assessment practices.
- In a classroom where formative assessment practices are evident, information about learning (including mistakes or misunderstandings) becomes a "social object" that is valued both in terms of how it is described and how it is used.

While it is artificial to describe the components of the social learning context of the classroom separately, when they all work together to influence formative assessment practice, it is also difficult to describe them all at the same time. Therefore,
to provide evidence related to how the classroom social learning context influenced formative assessment practice in Kari's classroom, I organized the remainder of this section of the case study report by the components of the social learning context, focusing my analysis at the learning activity level, with references to the unit level context. For each component, I describe the social learning context component in Kari's classroom, the relationship of that component to the identified formative assessment episodes, and how that social context component related to the other components. In the subsection on tools, I provide evidence related to the proposition that the features of cultural tools used during formative assessment episodes, and how they are used, illuminate critical features of formative assessment practices. In the subsection regarding participation structures, I provide evidence regarding the proposition that information about learning (including mistakes or misunderstandings) becomes a "social object" that is valued both in terms of how it is described and how it is used.

Object. The object of learning activity was one aspect of the social context of Kari's classroom that had the potential to influence how Kari and her students used data about learning. What were the objects of each learning activity during which formative assessment episodes occurred? Was the object of these learning activities the same as, or aligned with, the learning targets about which student learning data were collected and used? Did the teacher and students have a shared object for the learning activity in which they engaged? The answers to these questions are critical to understanding how the object of learning activity influenced formative assessment episodes. If the
learning object(s) was clear and shared between the teacher and the students, then formative uses of data collected as part of the learning activity would be more likely to accurately reflect students' learning. If not, the teacher could be collecting inaccurate information about the students' learning. These questions frame my analysis of the object of learning activity occurring in Kari's classroom.

As described in the Unit Learning Context Overview, the "big ideas" for the unit, as Kari explained them during the pre-unit interview, were broader and more cognitively complex than the major unit learning goal she shared with her students. This created the potential for a misalignment between Kari's and her students' objects of learning activity occurring during the unit. During the various post-observation interviews and the post-unit interview, Kari described her purpose for each learning activity that was part of the unit. Table 5.11 summarizes Kari's explanations of the learning object and the relationship to learning targets about which student learning data were collected. Although we didn't ask students directly about their object for different learning activities, student engagement in learning activity provided indirect evidence regarding their object. When students did not engage, or concurrently engaged in differently focused activity, they did not share the teacher's object. Student engagement in learning activity in Kari's classroom was inconsistent across different types of activities. A description of student engagement in each learning activity is also included in Table 5.11. These descriptions do not include the students (3-5) who sat at a separate table with a paraprofessional during math.
Table 5.11
Teacher Object and Student Engagement by Type of Learning Activity

Type of Learning Activity: Timed computational practice
Teacher's Object: Develop students' basic computational skills based on gaps teachers across the school were observing in upper grades.
Learning target about which data were collected: Subtraction of whole numbers from 1 to 12.
Student Engagement: All students completed these worksheets (three days a week) as fast as they could, independently and silently, and turned them in.

Type of Learning Activity: Completing Homework
Teacher's Object: Students get independent practice on skills that were learned and initially practiced in class, related to each of the major topics/concepts that are the focus of the unit.
Learning target about which data were collected: Varied by homework assignment; related to understanding and using the properties of different types of triangles and quadrilaterals.
Student Engagement: Between one and five students failed to complete their homework each day. The teacher provided a list on the board on the right side of the room with students' missed assignments several days a week.

Type of Learning Activity: Checking homework
Teacher's Object: Provide students with information to use to assess their learning of the topical focus of the homework. Identify difficulties students may have had completing the homework. (Note: Students only checked some of the homework tasks. Kari subsequently checked the remaining ones and assigned a grade.)
Learning target about which data were collected: Varied by homework assignment; related to understanding and using the properties of different types of triangles and quadrilaterals.
Student Engagement: In general, all of the students who completed their homework participated in this activity. Some students actively participated, calling out information about how they did on certain problems and asking questions about others. Other students just checked their answers to each item. It is unclear what the students who did not complete their homework did during this activity.

Type of Learning Activity: Solving homework problems as a class
Teacher's Object: Learn about students' questions or challenges related to completing the homework assignment. Give students information about the correct way to complete some of the homework problems.
Learning target about which data were collected: Varied by homework assignment; related to understanding and using the properties of different types of triangles and quadrilaterals. Usually the most cognitively complex tasks from the homework assignments, requiring students to use properties of geometric figures.
Student Engagement: About three fourths of the students in the class participated in this activity, nominating problems to solve and sharing information orally about how they solved the problems. About a quarter of the students in the class did something else during this activity – talked to other students, played with their rulers or protractors, or moved around the class.

Type of Learning Activity: Student progress monitoring
Teacher's Object: Give students an opportunity to assess how well they understood the concepts that were the focus of the homework. Allow students to identify whether their mistakes were small or reflected a lack of understanding of the concepts, as a basis for them seeking help outside of math class time. Establish the foundation for students' review of unit content/concepts the day before the end of unit test.
Learning target about which data were collected: Student self assessment.
Student Engagement: It is unclear how many students added information to their progress monitoring sheets after checking their homework each day. Most days, it seems that most students did add information to their progress monitoring sheets. Students were given 2 or 3 minutes to do this task. At the end of the unit, when students turned in their progress monitoring sheets, all but a few had provided at least some information on these sheets for each topic. The quality of the information provided varied substantially, from only a rating with no additional comments about the mistakes they made or help they needed, to all of the rows having comments.

Type of Learning Activity: Introducing new content
Teacher's Object: Use the activities provided by the primary instructional resource to introduce the critical concepts of the unit to the students in an engaging, hands-on way that includes concrete activity to introduce abstract concepts.
Learning target about which data were collected: Varied by topic; all related to understanding and using the properties of different types of triangles and quadrilaterals.
Student Engagement: When introducing new content involved hands-on activity to "discover" math concepts, such as cutting out triangles and comparing angles, almost all students engaged in the activities. When this involved full class discussion, roughly one fourth of the class did not engage.

Type of Learning Activity: In class practice problems
Teacher's Object: Provide students an opportunity to practice new skills learned during the class session, with support, before they attempt independent practice as homework.
Learning target about which data were collected: Varied by topic; all related to understanding and using the properties of different types of triangles and quadrilaterals.
Student Engagement: On most days, when they were working on practice problems individually or in collaboration with the other students at their table, almost all of the students engaged in this activity. On days when the practice problems required students to take out their workbooks or provide their own paper, fewer students engaged in doing the practice problems. When this activity shifted to a full class discussion about the practice problems, roughly one fourth of the class did something else.

Type of Learning Activity: Gap attack
Teacher's Object: Give students an opportunity to do some additional work on topics with which they struggled during the unit, based on their own self assessment of their learning needs. Check on students that she thought might need additional support based on earlier homework assignments and participation in class discussions.
Learning target about which data were collected: Understand and use the properties of different types of triangles and quadrilaterals to solve problems.
Student Engagement: Most students were actively engaged in this activity, selecting the center on which to focus, attempting to solve the math problems associated with the center, checking their answers, and asking the teacher or each other for help when they needed it. Several students had personal conversations about something other than the math problems during this activity. This activity was interrupted several times by students being called to the office (over the intercom) to get their pictures taken for the yearbook.

Type of Learning Activity: Reviewing unit test results
Teacher's Object: Develop students' metacognitive skills. Facilitate students' making connections between their actions (learning tactics) and their test results. Give students information from which to make changes to their learning tactics in the next unit. For students who have trouble evaluating their own learning and making connections to their actions, provide verbal examples from other students as a model.
Learning target about which data were collected: Student self assessment.
Student Engagement: Almost all of the students engaged in this activity. They all started filling in their Self Assessment forms as soon as they received them. They all looked through their tests and made notes on their self assessment forms and the answer analysis forms as soon as they got their tests. Close to one third of the students in the class shared information from their self assessment forms orally with the class.

Type of Learning Activity: Correcting items from the unit test
Teacher's Object: Provide students one more opportunity to learn the content that was the focus of the unit, with help from other students. For students who already mastered all or most of the content from the unit, provide an opportunity for them to further cement their knowledge and ensure that they aren't wasting time.
Learning target about which data were collected: Understand and use the properties of different types of triangles and quadrilaterals to solve problems.
Student Engagement: All students seemed to be actively engaged in this activity. Some students chose to miss their recess to finish correcting items from the unit test.
Although the object of learning activity occurring during a typical class session changed multiple times during the class session, in most instances Kari described an object closely aligned with the learning targets about which student learning data were collected during the activity. Student engagement in many learning activities was inconsistent, especially during full class discussions. This is important because the most frequently occurring type of formative assessment episode in Kari's classroom was full class formative questioning occurring during the full class discussions. Full class discussions were a large part of the following learning activities: solving homework problems as a class, introducing new content, and in class practice problems. Full class discussion was a smaller part of checking homework, and of progress monitoring on some days, and occurred for a few minutes at the end of the "Gap Attack" activity. While full class discussion played a role in reviewing unit test results, students were simultaneously completing a Math Self Assessment form individually. On most days, roughly one fourth of the students in the class were doing other things while full class discussion took place.

We did not ask students about their purpose for the alternative activities in which they engaged during full class discussions, so no information is available about why they did so. In addition, two features of this classroom contributed to a high level of noise and disruption during full class discussions, and could be an explanation for students' inconsistent engagement in these activities. These features include: 1) the physical layout of the classroom, which caused the sounds from two other classrooms to always be audible in Kari's classroom; and 2) students who sat at a
separate table on the right of the classroom engaged in separate dialogue and frequently moved around the class during math instruction. These additional sources of noise and disruption provide a possible explanation for the reduction in student engagement during full class discussions. However, they do not change the fact that some students did not share the teacher's object during these activities and did not participate in the formative assessment episodes that occurred as part of these activities.

Tools. Kari used a variety of cultural tools to mediate learning and formative assessment practice, including the following "types" of tools: equipment, supplies, resources posted on classroom walls, instructional materials (provided by the district/school), teacher developed instructional materials, and student math notebooks. I provided a detailed description of each type of tool in the Unit Learning Context Overview. Kari and her students used many tools during learning activity to mediate all aspects of formative assessment episodes: clarification of the learning targets, data collection, data analysis, data interpretation, and data use. Table 5.12 identifies the learning activities during which different tools were used and indicates how many formative assessment episodes included students using each tool.
Table 5.12
Frequency of Formative Assessment Episodes by Tools Used
Tool columns: Answer Analysis; Dry Erase Board; Gap Attack directions; Math Self Assessment; Progress Monitoring sheet; Unit Test (corrected); Primary Instructional Resource; Great Math Race worksheets

Correcting Test Items: 7
Gap Attack: 1, 13, 1, 13
Grading Homework: 13
In Class Practice: 9, 68
Introducing Content: 2, 23
Progress Monitoring: 10, 11, 11
Reviewing Test Results: 30, 30, 30
Solving Problems as a Class: 18
Timed Computational Practice: 6

Kari frequently wrote or projected information on a dry erase board in the front of the room as part of various learning activities and in mediating formative assessment episodes. Two additional categories of tools were frequently used during formative assessment episodes: tasks that were part of the district or school provided instructional materials and used in the collection of student learning data (e.g., the mathematical tasks included in assignments, the Great Math Race worksheets, and the end of unit test), and teacher developed instructional materials (answer analysis, "Gap Attack" center directions, math self assessment, and progress monitoring sheet). The tasks that were part of district and school provided instructional materials determined the degree to which the data collected as part of formative assessment episodes that used these tools aligned with the learning targets. The teacher developed instructional
materials provided much of the structure of the formative assessment episodes they mediated. Finally, Kari used student work as a social object, and it became a cultural tool that mediated student learning and formative assessment episodes. My analysis of the relationship between cultural tools and formative assessment episodes focuses on these four tool categories – information written or projected on the board, tasks used in data collection, teacher made instructional materials, and student work used as a social object.

Kari wrote or projected information on a dry erase board in the front of the room during every class session. What she wrote on the board mediated formative assessment episodes in three ways. First, she wrote the topical focus of each homework assignment on the board as part of clarifying the learning target against which she expected students to self assess. This illustrates that while she referred to these "topics" as learning objectives, and expected students to use them as learning objectives, she did not state them in learning objective terms. Second, Kari frequently wrote or projected tasks on the board for students to complete as part of in class practice problems or as part of introducing new content. Kari drew all of these tasks from the primary instructional materials used for this unit. These tasks provided a data source from which Kari informally collected data regarding her students' learning as part of formative assessment episodes. They also illuminated the learning targets about which Kari thought it was important to collect data. Third, as students solved problems as a class, Kari wrote students' oral responses (the correct ones) to questions about those problems on the board, using student work
as a social object. She did not include student mistakes or misunderstandings in what she shared.

Tools supporting aligned data collection. One critical element of a formative assessment episode is that the collection of student learning data aligns to a valued learning target. In the section above focused on data collection as part of formative assessment episodes, I summarized my analysis of the alignment between the tasks used in data collection and the learning targets about which student learning data were collected for each formative assessment episode. In general, the tasks used to collect data aligned with the learning targets. Most of the tasks used in data collection came from the primary instructional resource provided by the district. Kari and the other 5th grade teachers in the school chose which aspects of this resource and associated tasks to use before the unit began. Kari did not deviate from these plans based on student learning data. Additional tasks that Kari used as part of data collection came from a supplemental instructional resource, The Great Math Race. Again, these were established before the unit began.

The learning targets addressed by the tasks used in Kari's classroom determined the learning targets on which the associated formative assessment episodes had the potential to focus. In Kari's classroom, the learning targets and associated tasks fell into three major categories: the topics associated with the major unit learning goal (treated as daily learning objectives), subtraction of whole numbers from 1 to 12, and students' self assessment. The tasks used in the collection of student learning data about the major unit learning goal and the learning topics that were part of the unit included: items on
each of the students' homework assignments; items on the end of unit test; and items from student workbooks and/or the teacher's manual that Kari wrote or projected on the board while introducing new content, or which students completed during in class practice. The tasks used in the collection of student learning data about students' subtraction of whole numbers from 1 to 12 included the items on the Great Math Race sheets. The tasks used to collect data about students' self assessment included tasks that were part of all of the teacher created instructional materials.

Teacher developed instructional materials. All teacher developed materials used during math instruction in Kari's classroom mediated formative assessment episodes. They were used in clarifying learning targets with students; in the collection of data about students' self assessment of their learning; in students' analysis and interpretation of their learning data; and in students' use of learning data to self assess, adjust learning tactics, or plan to adjust learning tactics in the future. These tools included the following: Answer Analysis sheet, Gap Attack (directions), the Great Math Race worksheets, Math Self Assessment, and Progress Monitoring Sheet. Below I describe how each of these tools influenced Kari's and her students' data use, and evidence related to what these tools illuminated about formative assessment practices.

Answer analysis sheet. Students used the Answer Analysis sheet (see Appendix E) to analyze and interpret the results of their end of unit tests, and to plan for future changes to their learning tactics. The Answer Analysis sheet also illuminated critical features of the formative assessment episodes that made use of it. Students used the Answer Analysis sheet on the final day of the unit. On the Answer Analysis sheet,
students circled the item number for each item they answered correctly on their end of unit test, and added up the number of circled item numbers to get a total number correct by learning topic. Thus, students used the Answer Analysis sheet to do additional analysis of their end of unit test results, beyond what the teacher had done. They used this analysis to make interpretations about their learning by learning topic. The structure of these Answer Analysis sheets also illuminated that Kari expected her students to analyze their test results by the unit learning topics. This was consistent with how Kari framed the object for different learning activities.

Gap attack center directions. Students used the Gap Attack center directions as part of the Gap Attack activity on the day before the end of unit test (two days before the end of the unit), in conjunction with their progress monitoring sheets, to clarify the learning topics associated with problems from their workbooks, identify tasks to complete, and analyze their results as they completed different tasks. On the day before the end of unit test, students used their progress monitoring sheets and some recently completed homework assignments to determine on which major topics from the unit they needed additional practice. Then they used the Gap Attack center directions to identify problems to complete (from the student workbooks) associated with that topic. Finally, they used the back side of the Gap Attack center directions, where the correct answers to each problem were printed, to analyze their answers to each problem they completed. Kari explained why they included answers to the problems as part of the Gap Attack center directions: "The answers they can check as they go so they're making sure they're not just continuing down a path of not understanding."
The structure of the Gap Attack center directions (organized by topic) was consistent with how Kari described the object of different learning activities throughout the unit. These directions also supported the participation structures for the Gap Attack activity. Kari provided the answers to different problems on the back side of the Gap Attack directions. Her explanation for why she did this provided additional evidence regarding how Kari engaged students as the users of their learning data.

The Great Math Race worksheets. The tools associated with the Great Math Race supported formative assessment episodes and illustrated several aspects of those episodes. Students completed Great Math Race worksheets three times a week. The ones used during the observed unit focused on subtraction of whole numbers from 1 to 12. The first day students used these worksheets, every student received the same worksheet, with tasks that required them to subtract by one. The next day, Kari gave students a different worksheet if they had correctly responded to all of the tasks on the first worksheet. If not, she gave them the same worksheet again. Each class session, Kari determined the worksheet that students received based on their performance on the prior worksheet; the worksheets supported Kari making adjustments to students' learning activity based on learning data. As students completed each worksheet with all items correct, Kari moved the car with their name on it around a "race course" posted on the wall; the race course provided a visual illustration for students of their learning progress.

Math self assessment sheets. The Math Self Assessment sheets supported students as they engaged in self assessment and illuminated several of the features of
this practice in Kari's classroom. Students used the Math Self Assessment sheets, in conjunction with their corrected end of unit tests and the Answer Analysis sheets, on the last day of the unit during the Reviewing Test Results activity. They used these tools to predict their unit test results, analyze and interpret their results, and identify adjustments they needed to make to their learning tactics. Kari used the Math Self Assessment sheets to provide oral feedback to the class about what she was seeing students write; as a basis for students to share their self assessment orally with the class; and to engage one student in formative questioning while he was completing his Math Self Assessment sheet.

Students began filling in information on their Math Self Assessment sheets before Kari returned their corrected end of unit tests. They responded to prompts about their perception of their learning in relationship to the major topics for the unit, based on having completed the end of unit test the day before. After they received their corrected end of unit tests, students identified their actual score and responded to prompts on the Math Self Assessment sheet that required them to compare their expected to actual results – providing a possible check on the accuracy of their self assessment. Then students used the Answer Analysis sheet to capture information about how many test items they answered correctly out of the number included on the test for each major learning topic. Based on this analysis, they then interpreted their test results, responding to additional prompts on the Math Self Assessment sheet. These prompts required students to interpret their results in relationship to the learning tactics in which they had engaged during the current instructional unit. Then students
identified and captured how they planned to adjust their learning tactics in the next unit based on these results. As students completed the different sections of the Math Self Assessment sheet, Kari prompted students to share orally with the whole class. In some cases, she also asked the rest of the students to show, by raising their hands, if they had written the same thing.

The prompts on the Math Self Assessment sheets provided a scaffold to support each step of students' self assessing and planning to adjust their learning tactics for the next unit. They mediated students' self assessment. These sheets also supported the participation structure for the learning activity. That the major topics for the unit were listed at the top of the Math Self Assessment sheet illuminated that Kari organized formative assessment episodes during this unit around these major topics. The prompts included on the Math Self Assessment sheet and the Answer Analysis sheet illuminated the critical features of student self assessment during this activity. Students analyzed and interpreted the data from their end of unit test results in reference to the major learning topics for the unit. They also interpreted how their results related to the learning tactics in which they had engaged throughout the unit. The final prompts on the Math Self Assessment sheet illuminated that students were expected to plan for how they would adjust their learning tactics during the subsequent unit. I did not collect evidence regarding whether or not students actually adjusted their tactics as part of this study.

Progress monitoring sheets. Progress monitoring sheets provided structure for students' daily self assessment and illuminated critical features of that practice. Kari
gave each student a new Progress Monitoring Sheet on the last day of one unit to use throughout the subsequent unit. Students used the progress monitoring sheets in conjunction with homework assignments (with some of the problems checked) each day as part of a progress monitoring activity, and at the end of the unit during the Gap Attack activity. Students stored these progress monitoring sheets in their desks. Kari expected students to write the major unit learning goal for the new unit at the top of one side of their progress monitoring sheets on the last day of the prior unit. She expected them to write a goal related to their learning tactics for the new unit based on their analysis of their end of unit test results from the prior unit. These goals included statements like the following: come in for more help, ask questions when I don't understand something, do my homework, ask questions, and work hard and ask more questions. It is unclear whether this happened for the focus unit, since it would have happened the day before we began collecting data in Kari's classroom, and Kari had a substitute teacher that day. It did happen for the subsequent unit on the final day of the unit we observed.

On typical days during the unit, after students had checked some of the problems on the homework, Kari wrote the topical focus of the homework (treated as a learning target) on the board (e.g., classifying triangles, measures of angles of a triangle, isosceles triangles, equilateral triangles, triangle inequalities). She then prompted students to write that topic on one row of their progress monitoring sheet. This was one way that Kari clarified the learning target for homework assignments with students. She next expected students to use the data about how they did on some of the homework problems to self assess. The progress monitoring sheets included categories for
students to use in rating their learning and a column where students made notes about what they needed help with in relationship to the homework focus. Kari gave students a few minutes to add information to this sheet, one row at a time, during each class session after homework was assigned. If they ran out of rows, they would get a second sheet and staple it to the first. Although students used the progress monitoring sheets daily, Kari only reviewed them once, the day before the end of unit test.

At a midpoint in the unit, when the content was switching from a focus on triangles to quadrilaterals, Kari asked students to respond to a prompt on the back side of the progress monitoring sheet, "Here's how I'm doing." She prompted students about what to write. "So, on the back side of the progress monitoring today write how are you doing understanding the properties of triangles. Give me some information on how you are doing. Be specific. Don't tell me fine. Don't tell me I'm doing great. Give me something specific, I understand how the angles in different types of triangles work, I forget sometimes all the angles are 60 degrees. Give me something specific what exactly do you understand and what exactly do you not understand." After students had a few minutes to write, Kari asked a few students to share orally with the class what they wrote on their sheets. The statements students shared included the following: I understand all objectives about triangle angles and sides, I understand the triangles and the lengths, and remind myself about reading directions. Kari also used this opportunity to engage students in formative questioning regarding what they should know about properties of triangles at this point during the unit. During the post-observation interview, she explained why she had students do this at this point in the unit. "I try to find one or two spots during every unit to have a check in time
and... as I was looking... the last few days, realizing that we're hitting this kind of transition from triangles to other shapes, that it was a good spot to do it."

The progress monitoring sheets provided a scaffold for students' self assessment of their learning for each homework assignment, mediating that practice. The sheets also mediated students' choices about the Gap Attack centers on which to focus their time on the day before the end of unit test. Information included as part of the progress monitoring sheets also illuminated how Kari engaged students in self assessment. First, she expected students to write the major unit learning goal at the top of these sheets and list the topic for each homework assignment on a row of the sheet. This illustrated how Kari anchored students' self assessment in both the unit learning goal and the daily learning objectives. The categories of ratings on the sheets illuminated how students interpreted their learning based on the data from their homework assignments. The column where students wrote comments about what additional help they needed illustrated that Kari expected students to take action based on their interpretation of their learning for each homework assignment. That Kari only collected students' progress monitoring sheets at the end of the unit illuminated that she did not use this information to adjust her instructional practice.

Using student work as a social object. Kari frequently used student work as a social object as part of daily instruction; on average this occurred seven times per class period. Social objects are reflective talk among a group of people engaged in a particular activity that they can refer back to later (Jordan & Putz, 2004). Social objects are tools that can mediate learning and formative assessment practice. About half of the
instances of Kari using student work as a social object involved her using student work to mediate formative questioning with the class as a whole to complete a mathematics task. She asked individual students to offer information about each step they had taken in completing the task. Then she wrote on the board in the front of the room as the student talked. If the step in completing the task, as described by the student, was incorrect, rather than writing that incorrect step on the board, Kari would probe further with the student or ask a different student to describe what he or she did. She only wrote correct responses on the board. Thus, while she shared student work publicly, she was selective about what aspect of the student work she shared. This meant that Kari and her students could not refer back to mistakes, only correct answers.

The other half of the occurrences of Kari using student work as a social object involved her asking students to verbally share what they had written on their progress monitoring sheets or on their Math Self Assessment sheets. She would repeat what the student had said, and use the students' descriptions to mediate other students' completion of their self assessment, asking the rest of the class to indicate by raising their hands if they had written something similar. This occurred several times during the observed instructional unit, but not daily. Kari did not share student work in writing, even though the classroom was equipped with a document camera and she frequently used the document camera to share pages from the primary instructional resource. In addition, Kari was selective about the student work to which she asked other students to refer; she did not refer back to incorrect student work.
Participation structures. The participation structures for the various types of learning activities occurring in Kari's classroom influenced the formative assessment episodes that occurred during each activity. Each type of learning activity during which formative assessment episodes occurred in Kari's classroom had specific participation structures that I described in the unit learning context overview. For each type of learning activity in Table 5.13, I indicate the frequency with which formative assessment episodes occurred during the activity, and summarize the following aspects of the learning activity participation structures:
- The nature of the student role(s): whether it included students doing or talking about math tasks.
- How students participated: individually, in small groups, or as a full class.
- The nature of the teacher role(s): whether it included the teacher interacting with or observing students, or providing content through presentation.
- Whether or not the activity included formal data collection or students turning in a product.

Several patterns related to students' roles emerge from consideration of the participation structures of the learning activities that included formative assessment episodes. First, most involved students completing math tasks or talking about math tasks during the activity. One of those that did not – progress monitoring – involved students self assessing about math tasks they had already completed. Another type of learning activity that did not involve students completing a math task, checking homework, involved students asking questions about math tasks they had already
completed. During learning activities that included formative assessment episodes in Kari's classroom, students frequently engaged in talking about math tasks, and much of that talk happened as a full class.

Table 5.13
Learning Activity by Participation Structure
Columns: Student Roles (Do math tasks; Talk about math tasks); How students participate (Individual; Small Group; Full Class); Teacher Roles (Student Interaction/observation; Present Content); Formal data collection; Avg. # of FAEs

Timed computational practice | X x x | 1.2
Checking homework | x x x | 1.9
Solving homework problems as a class | X x x x x x | 4.5
Student progress monitoring | x x (once) | 1.4
Introducing new content | X x x x x x x | 3.8
In class practice problems | X x x x x x | 9.7
Gap attack | X x x x | 13
Reviewing unit test results | x x x x x | 31
Correcting items from the unit test | X x x x x | 7

Additional patterns emerged as I considered how students participated in learning activities that included formative assessment episodes: individually, as part of a small group, or as part of the whole class. All but two of the learning activities that included formative assessment episodes involved students participating individually. Participating individually included students doing math tasks, or students formally (in writing) self assessing about math tasks. In most cases, these same learning activities subsequently involved students participating as part of a full class discussion. The
teacher and student roles for full class discussions were very similar across the different learning activities. Kari initiated a class discussion by asking a question, or by re-posing a question that an individual student had asked. Then she identified a student to answer the question, usually from among students who volunteered by raising their hands. Occasionally she called on students who had not volunteered. A subset of students within the class (5 or 6) were called on most frequently. After a student responded to the question, Kari repeated the student's response (possibly so that the other students could hear over the general noisiness of the classroom), and either asked the student to explain his/her response, probed deeper, or asked another student to elaborate on the first student's response. When class discussions used math tasks students had completed individually, Kari wrote the correct aspects of the students' responses to her questions on the board. Class discussion continued until students elaborated a complete response to the task. When class discussions followed students' written self assessment of their learning, Kari sometimes followed the questioning of an individual student with a question to the whole class about the degree to which their self assessment was the same as that student's self assessment. As described above, close to one fourth of the students in the class did not actively participate in these full class discussions or in the formative assessment episodes that occurred during them.

Of the four types of learning activities that involved students participating as a group, during two of them – introducing content and in class practice problems – group interaction was not structured and did not always include talk about mathematics. Students' work solving problems in small groups during the Gap Attack
However, during this activity, some small groups still engaged in talk about topics other than math tasks. During the final learning activity that involved students participating in small groups – correcting items from the end of unit test – Kari provided specific verbal directions about students' roles. All of the small group interaction during this learning activity focused on math tasks.

The most common teacher role during learning activities that included formative assessment episodes was asking students questions about math tasks or their written self assessment of their learning. Kari only occasionally presented content. Teacher created materials or tools structured all of the learning activities that included students using data about their learning: progress monitoring, gap attack, reviewing end of unit test results, and correcting test items. The tools used during the activities substantively structured how students participated in the activities. In these instances, students had discretion in selecting their actions, but their actions focused on using the associated tools.

Most of the learning activities that included formative assessment episodes did not include formal data collection. One learning activity that did include formal data collection, and also a large number of formative assessment episodes, was reviewing the end of unit test results. This occurred once during the unit, but involved the highest number of formative assessment episodes of any learning activity. Although it is not explicit in Table 5.12, most of the uses of data during this activity were students self assessing based on the results of their end of unit test. Another learning activity that included formative assessment episodes and involved formal data collection was student progress monitoring, which occurred after correcting homework tasks.

The formal data that were collected included students' self assessment of their learning related to the topic of the homework, and students' identification of additional help they needed or actions they should take to improve their learning. However, while students wrote on these sheets each day, Kari only collected them once. Students, rather than the teacher, were the primary users of this written product.

Norms and rules. The behavioral norms and rules that were evident in Kari's classroom influenced how Kari and her students used learning data. I described the norms and rules evident in Kari's classroom in the Unit Learning Context Overview. Several school wide norms were not explicit but clearly influenced learning activity, specifically frequent interruptions by someone from the main office asking questions over the intercom or asking for students to come to the office. Several students missed most of the gap attack activity on the day before the unit test because someone from the main office asked them to come to the main office for yearbook pictures.

The classroom level behavioral norms that had the greatest influence on learning activities that included formative assessment episodes were not explicit, but were evident in how students behaved during some learning activities. Those that had the potential to have the greatest impact on formative assessment practices related to how students behaved during full class discussions. During activities that included full class discussions, roughly one fourth of the students in the class did not participate.

When students talked or got out of their seats at inappropriate times, Kari sometimes asked the students to quiet down, or called on an individual student to pay attention or to go back to his or her seat. However, students did not face any other consequences for these actions. These behaviors persisted regardless of the teacher's comments about them. The teacher engaged students in an activity at the end of the unit (after the unit test) during which she asked students to evaluate the relationship between paying attention in class and their performance on the end of unit test. During that activity, several students indicated they could have improved their performance by paying attention during class. It was outside the scope of this study to collect data regarding whether or not students changed their behavior after this activity.

Classroom level rules related to homework assignments also had the potential to influence formative assessment episodes, because a number of formative assessment episodes used students' homework assignments as a data source. As described above, students had homework after almost every class session. During the class sessions in which homework was due, Kari expected students to check some of the tasks on their own homework assignments (with the teacher reading out answers and students marking problems with a red pen) and to use the data gained about their own learning to self assess. Also during these class sessions, Kari would engage students in a full class discussion that included formative questioning and scaffolding about certain homework problems. She informally collected data during these discussions to determine how much time to spend on reviewing homework tasks.

At the end of the unit, students used the written record of their daily self assessment, based on their homework assignments, to determine on which of the major learning topics for the unit they would get additional practice. Norms related to student completion of homework were not consistent with homework playing a key role in the learning activities that included formative assessment episodes. Students faced no consequences for turning in homework late. Several days a week, Kari posted missing assignments on the board. However, in the moment, students faced few or no consequences for not bringing completed homework to class, even though not having homework substantially limited their participation in the learning activities within which most of the formative assessment episodes occurred.

Summary of evidence regarding how the social context influenced formative assessment practice. The social context of Kari's classroom influenced the formative assessment practice that occurred within it. The evidence provided affirms that considering the interaction between the social context of the classroom and formative assessment episodes makes more apparent some of the features of formative assessment practice in context. The evidence from Kari's classroom also supports my use of the following categories in describing the classroom social context: object, tools, participation structures, and rules and norms. I was able to describe the social context of Kari's classroom using these categories (identified in the first proposition) not only at the unit level, but also for the individual learning activities that made up the unit. However, the evidence from Kari's classroom suggests that the physical layout and use of space should be explicitly included as part of the tools.

Furthermore, my investigation illustrated how each component of the social context (object, tools, participation structures, and rules and norms) interacted with formative assessment episodes, with some components supporting and some limiting Kari's formative assessment practices.

From Kari's perspective, the object of classroom activity and the learning targets about which she collected data aligned. This is a necessary condition for data collected during different learning activities to align with the learning targets, and for that data to accurately reflect students' learning. I presented indirect evidence that some students did not share Kari's object for learning activities that included full class discussion. Roughly one fourth of the students in the class did not participate in full class discussions, and only another fourth shared their thinking as part of full class discussions. This created a condition within which data collected and used formatively during these learning activities could be inaccurate and/or misinterpreted by the teacher. The potential impact of this misalignment was greater in this context because a majority of the formative assessment episodes occurring in Kari's classroom were part of full class discussions.

The various cultural tools used in Kari's classroom illuminated and influenced how Kari and her students used learning data. They illuminated the attributes of the learning targets on which formative assessment episodes focused and what data were collected about different learning targets. In Kari's classroom, student work used as a social object became a tool that mediated formative assessment episodes and student learning.

Teacher created materials mediated almost half of the formative assessment episodes occurring during the observed unit. The teachers at Kari's school designed and used one tool, the Great Math Race, to adjust the computational tasks that students completed three days a week based on how each student had performed the class session before. Other tools created by Kari and the other 5th grade teachers (e.g., Math Self Assessment Sheets, Answer Analysis sheets, and Progress Monitoring sheets) also structured all of the formative assessment episodes in which students were the users of the data about their own learning. These tools illuminated further details regarding the critical features of student self assessment as it occurred in Kari's classroom, including: the learning targets on which these formative assessment episodes focused (the major unit learning goal and the major topics that were part of that goal), how students analyzed their learning data (by topic), how student interpretation of learning data was structured (both by rating their level of learning and by associating performance with their learning tactics), and student use of learning data (to identify needed changes to their learning tactics and to plan for future action).

Several of the cultural tools used in Kari's classroom illuminated that she treated "topics" as learning targets and supported students in associating these learning topics/targets with their related learning data as part of self assessment. Each day, before students graded homework, Kari wrote the topic of the homework on the board. Students' progress monitoring sheets had a space where they wrote this topic before they self assessed each day based on their homework assignments. Kari and the other 5th grade teachers organized the gap attack activity centers and directions by these topics.

They also organized the Math Self Assessment and Answer Analysis sheets used after the end of unit test by these learning topics, and guided students to analyze and interpret their test results by these topics.

The many instances of student work being used as a social object in Kari's classroom supported part of the proposition that in a classroom where formative assessment practices are evident, information about learning (including mistakes and misunderstandings) becomes a "social object" that is valued both in terms of how it is described and how it is used. However, while Kari frequently shared student work in a way that allowed other students to refer back to it, she made a distinction between correct and incorrect student work. She only wrote correct information on the board as students orally described how they completed a task or explained why they took certain steps. Thus, students could not refer back to examples of work that included mistakes or misunderstandings. For many students this meant they couldn't refer to work that looked like theirs.

The evidence I provided related to the tools used during various learning activities in Kari's classroom supported three of the study propositions. The tools influenced how Kari and her students used learning data, specifically how students engaged in self assessment. The features of tools and how students used them illuminated critical features of formative assessment practices occurring in Kari's classroom. Finally, Kari frequently made use of student work as a social object, although she excluded mistakes and misunderstandings.

The participation structures of learning activities during which formative assessment episodes occurred also influenced Kari's and her students' uses of student learning data. The students' roles during learning activities that included Kari formatively using student learning data almost always included students doing math tasks or talking about math tasks. Many of the learning activities during which the students used student learning data involved them writing about math tasks they had already completed. One example that occurred during almost every class session was student progress monitoring.

In Kari's classroom, the vast majority of learning activities occurring during typical class sessions involved students participating first individually and then as a full class. Only two class sessions, the sessions before and after the end of unit test, included students participating as part of a small group for a large part of the class session. Kari's use of student learning data almost always (over 90% of formative assessment episodes) occurred while students were participating as a full class and included Kari adjusting the learning experiences of the class as a whole, even though the data Kari used in determining what to do with the full class usually came from a small portion of the students in the class.

Learning activities that involved students participating as part of a small group were not all the same. During typical class sessions, the student roles as part of a small group were not specifically defined and group level discussion did not focus on math. The participation structures during the math class sessions before and after the end of unit test were exceptional. Students participated and were engaged in small group work doing math tasks and talking about math tasks. Kari defined student roles during the small group activities, while simultaneously allowing students greater choices about how they participated.

Kari's role also changed on these days; she spent much more of her time interacting with individual students and small groups of students about their work. During these class sessions, the number of formative assessment episodes that occurred was much higher, and the types of formative uses of data more varied. They included Kari collecting informal data about the learning of individual students and small groups of students, and providing oral feedback and initiating formative questioning with individuals and small groups of students.

Finally, the rules and norms guiding behavior in Kari's classroom influenced formative uses of student learning data. The most significant norms were implicit – evident in how people acted. External sources frequently interrupted Kari's classroom during math instruction, limiting student participation in activities that included formative assessment episodes. While it was not explicit, students did not face consequences for doing things other than participating in full class discussions – talking with their peers, moving about the room. They also did not face consequences, other than the irritation of the teacher, for failing to complete their homework. These two implicit norms created conditions within which some students (about one fourth of the class) did not participate in a majority of the formative assessment episodes occurring during typical class sessions.

During typical class sessions, the social context of Kari's classroom, including the object of activity, tools mediating learning, participation structures, and norms/rules, all interacted with Kari's most frequent use of student learning data – full class questioning – to limit student participation in those formative assessment episodes and to increase the likelihood that Kari's formative choices were based on incomplete data.

The days before and after the end of unit test were exceptional, both in terms of the social context of Kari's classroom and in the increase in the type and frequency of Kari's formative uses of student data. Both during typical class sessions and on the two exceptional days, the social context within Kari's classroom created conditions within which students frequently used data about their own learning to self assess and identify needed changes to their learning tactics (22% of all formative assessment episodes). However, whether frequent student self assessment resulted in students making changes to their learning tactics was not clear.

Chapter VI

Cross Case Analysis

Often details become more evident in contrast. That was the case in my investigation of formative assessment practices in two different classrooms. What mattered to note, describe, or probe further became much clearer when I examined these two classrooms together than when I considered each separately. I observed Liza's classroom first, each day believing that I was deepening my understanding of how formative assessment practice happened in her classroom. One week after I finished observations in Liza's classroom, I switched to Kari's classroom. As I captured notes about what was happening in Kari's classroom, what mattered the most about Liza's classroom jumped out in full relief.

In this cross case analysis chapter, I compare the formative assessment practice and the related aspects of the social context of the two classrooms, addressing the two research questions guiding this study: 1) What are the critical attributes of formative assessment practice as it occurs in situ? 2) How does the social context of a K-12 classroom (e.g., tools, participation, student and teacher roles, norms/rules) influence formative assessment practice in situ? I additionally compare the evidence related to the propositions associated with each research question across the two classrooms.

Critical Attributes of Formative Assessment Practice

In response to the first research question, I compared and contrasted the critical attributes of formative assessment practice as it occurred within these two classrooms during a single instructional unit, based on the components of a formative assessment episode as articulated in my first proposition.

I present this information in the following sections: using student learning data, identifying and clarifying learning targets, collecting data about student learning, and analyzing and interpreting data about student learning. My second proposition relates to methods used in data collection and is addressed within that section. My third proposition relates to the scale of decisions made based on student learning data; this combines two components of a formative assessment episode. I present my comparison of the two classrooms related to scale of data use after the sections described above. Then I summarize my comparison of these two classrooms across all of the propositions for the first research question.

Using student learning data. Both Liza and Kari frequently made formative use of student learning data in a variety of ways during each mathematics class session. In Liza's classroom, I identified 248 formative assessment episodes occurring over nine class sessions; these episodes occurred nearly 28 times per class session. In Kari's classroom, I identified 190 formative assessment episodes occurring over eleven class sessions, with episodes occurring a little over 17 times per class session. Liza's class sessions were about 15 to 20 minutes longer most days; however, this additional time couldn't fully explain the difference in amounts of formative data use occurring in these two classrooms. The frequency of Liza's formative use of student learning data was consistent most days, whereas Kari had two days when her formative use of student learning data was significantly more frequent than on other days.

In both classrooms, some of the formative assessment episodes involved students, rather than or in addition to the teacher, making use of their learning data. This was much more frequent in Kari's classroom than in Liza's. Close to 22% of the episodes in Kari's classroom involved the students using their own learning data. Less than 4% of the formative assessment episodes in Liza's classroom involved the students using their own learning data.

Types of uses of student learning data. The types of uses of student learning data varied across these two classrooms, both in terms of the frequency with which each teacher engaged in different types of data use and with regard to the details of their approach to each type of data use. The formative assessment research literature informed my categorization of the types of formative uses of student learning data in these classrooms. However, I quickly discovered that the details of each teacher's practice within these broader categories, or the attributes of their practice, included both significant differences and some clear similarities. First, I compare the relative frequency of different types of data use across the two classrooms. Then I provide additional details comparing each type of formative use of student learning data by broad category, with the types of data use presented in alphabetical order.

Frequency of types of data use. The relative frequency with which each teacher employed different types of formative data use varied substantially. Both teachers employed a few types of data use more frequently than other types.

Table 6.1 Relative Frequency of Types of Data Use by Classroom
Types of Student Learning Data Uses: % of all Formative Assessment Episodes in Liza's Classroom / % of all Formative Assessment Episodes in Kari's Classroom
Activating Students as Resources for One Another: 4.0% / 0.5%
Adjusting Instruction (including selecting students for intervention): 6.0% / 7.4%
Grouping Students: 2.0% / 0.0%
Oral Feedback to Individuals or Small Groups (not with questioning): 48.0% / 8.4%
Oral Feedback and Questioning (individuals or small groups): 14.0% / 0.0%
Oral Feedback to Full Class: 12.5% / 5.3%
Questioning (individuals or small groups): 10.0% / 17.4%
Questioning (full class): 0.0% / 39.0%
Students Adjusting their Learning Tactics: 0.8% / 5.3%
Student Self Assessment: 3.0% / 16.3%
Written Formative Feedback: 0.8% / 0.5%

More than half (62%) of the formative assessment episodes in Liza's classroom involved her providing oral feedback to individuals or small groups of students. About one fifth of those included her also engaging those students in formative questioning. Kari's most frequent use of student learning data was engaging the full class in formative questioning. Activating students as resources for one another was a more common practice for Liza than for Kari. Engaging students in self assessment and adjusting their learning tactics was a more common strategy for Kari than for Liza. The majority of Kari's formative uses of student learning data involved the class as a whole. While Kari had her students self assess, she did not use their self assessment data to adjust her instruction or learning activity, but rather expected students to change their learning tactics and gave students one specific opportunity to adjust their learning tactics, based on their self assessment, the day before the end of unit test.

Kari's formative assessment practice did not focus on differences in the learning of individual students or groups of students, whereas most of Liza's formative assessment practice focused on the specific learning needs of individual students or groups of students. Table 6.1 provides the percent of all of the formative assessment episodes identified in each classroom that included each type of data use.

Activating students as resources for one another. Both teachers activated students as resources for one another, but while this was a frequent practice in Liza's classroom, it only occurred during one class session in Kari's classroom. For Liza, activating students as resources for one another was a component of a number of learning activities. For Kari, it was a component of only one learning activity. Liza used how she seated her students for each instructional unit as a mechanism for activating students as resources for one another. Kari seated students in groups, but she did not base those groups on students' learning data, and they did not change for different units. Liza frequently used learning data to intentionally determine which students would work in groups and to structure activity so students could be resources for one another. During the one class session when Kari activated students as resources for one another, the students chose their groups. Amidst all of these differences, some similarities were evident as well. Both teachers described the different student roles while working in groups similarly – one playing a teacher role and others a student role. However, Liza provided more structure for these different student roles – displaying specific steps on the board, etc. Kari provided a quick oral description.

Both teachers also spoke to individual students about specific tasks with which they could help other students. On some occasions, Liza checked back with the students in the "teacher" or helper roles to find out how the other students had done. Kari did not. Activating students as resources for one another was a significant component of how learning took place in Liza's classroom. In Kari's classroom it was a positive side effect of a single learning activity, although during that one activity the students in Kari's classroom were highly engaged.

Adjusting instruction. Both teachers planned major learning activities for the unit with grade level colleagues before the beginning of the unit, including identifying which sections of the primary instructional resource they would use. This planning translated into specific daily lesson plans for Kari. For Liza, it translated into a sequence of learning activities in which she planned to engage students. Both teachers used student learning data to adjust instruction; however, the magnitude of the adjustments was different. Liza made adjustments to instruction during every class session. Adjusting as she went was a key component of how Liza structured learning activity. She only finalized her plans for each class session after she had reviewed student learning data from the day before. Kari only made small adjustments that did not disrupt the basic plan for each class session. However, Kari had a built in mechanism that resulted in students getting different computational practice worksheets three class sessions per week, based on their performance the session before. Liza had no similar mechanism to automatically adjust students' learning activity.

Grouping students. Liza used student learning data to group students. She paired this with activating students as resources for one another and with providing a separate intervention for a selected group of students. Kari never used student learning data to group students.

Oral feedback to individuals or small groups. Providing oral feedback to individuals or small groups of students was the most frequent use of student learning data in Liza's classroom, happening, on average, 18 times per class session. Half of the time in every class session involved her providing oral feedback. While Kari provided oral feedback to individuals and small groups of students on average 1.5 times per class session, a majority of the occurrences were on a single day. This was not a primary instructional strategy for Kari. While Liza frequently combined oral feedback and questioning with individuals and small groups, Kari did not.

Oral feedback to the class as a whole. Both teachers provided oral feedback to the class as a whole during every class session. On average, Liza did this about twice as frequently as Kari. Both provided oral feedback about data collected through student work products from a prior class session. Both provided oral feedback to the class about their observations of students during math tasks, although Liza did this more frequently than Kari. Liza also provided oral feedback to the class while using student work as a social object and as part of explaining why she was making in the moment adjustments to their learning activities. Kari did neither of these.

Questioning with individuals or small groups. Both teachers engaged individuals and small groups in questioning. This was a daily occurrence in Liza's classroom – on average, three times per class session.

Liza's formative questioning followed a similar pattern to her provision of oral feedback to individuals or small groups, and frequently occurred in conjunction with oral feedback.

Questioning with the full class. This was the most frequent use of student learning data in Kari's classroom. She spent more than half of each class session engaging students in full class discussions driven by formative questioning. While I characterized this as questioning with the entire class, roughly one fourth of the students in Kari's classroom did not participate in this full class questioning, and others did not contribute information about their thinking. This use of student learning data did not occur in Liza's classroom.

Students adjusting their learning tactics. Both teachers engaged students in using learning data to adjust their learning tactics. However, this use of student learning data was more frequent in Kari's than in Liza's classroom. Both teachers devoted part of the class session subsequent to the last summative unit test to students adjusting their learning tactics based on their assessment results. Each gave students opportunities to correct the incorrect items on their summative tests. Kari asked students to make an explicit connection between their performance on the summative test for the unit and their learning tactics during the unit, and to plan for how they would adjust their learning tactics for the next unit. Liza did not. Kari gave students time during each class session to make notes about how their learning tactics needed to change based on their learning data from their homework assignments. Liza did not.

Students' self assessment. Liza and Kari both engaged their students in self assessment. They provided specific opportunities for students to review formal data about their learning and rate their learning in relationship to the learning target associated with the data. Each occasionally asked students to make a connection between their effort and learning results. However, while Kari specifically identified students' developing self assessment skills as a learning target and provided students feedback about their self assessment on one occasion, Liza only indicated a desire to help students make a connection between their actions and their performance on math tasks. Kari also expected her students to self assess at least once per class session. Liza only expected her students to self assess as part of a few specific learning activities.

Written formative feedback. Both teachers provided written formative feedback on student products, in addition to a grade. With three exceptions, neither teacher provided students with specific time to read the feedback or specific opportunities to use the feedback to improve their work or a subsequent task. I did not count these comments as instances of written formative feedback. One exception in both classrooms involved both teachers engaging students in correcting items from the unit summative tests; each provided students with written information about which items on their tests were incorrect and gave students class time to correct incorrect items. In Liza's classroom, another exception occurred when she provided specific written comments about her students' ticket out results and structured an activity to require her students to use those comments. In Kari's classroom, an exception happened when she provided specific comments on students' progress monitoring sheets (about their self assessment); students had an opportunity to use her comments the next day when they began a progress monitoring sheet for the next unit.

Identifying and clarifying learning targets with students. I discerned a learning target for each formative assessment episode identified in both Liza's and Kari's classrooms. Both teachers identified the major learning goal(s) for the observed unit, posting them on a board in their classrooms. Both teachers used the major unit learning goal(s) as the structure within which they identified learning targets associated with daily activity relevant to the unit. However, how the major unit learning goal related to the daily learning targets was not the same across the two classrooms.

Both teachers clarified major unit learning goals with their students. They each spent class time engaging learners in developing academic vocabulary associated with the major unit learning goal(s). Liza spent more class time interacting with students orally about what she called the "essential questions" for the unit. Kari reinforced the major unit learning goal more frequently, expecting students to write the major unit learning goal on the top of a "progress monitoring sheet" to which they referred each day during the unit. During the pre unit interview, Kari provided a broader description of the "big ideas" for the unit that was more cognitively complex than the major unit learning goal she shared with her students. Throughout the unit, her students completed tasks that were more cognitively complex than the major unit learning goal she shared with them. This misalignment between the teacher's major learning goal and what she shared with students was not evident in Liza's classroom, although Liza also engaged students in a few tasks that were more cognitively complex than the learning targets shared with them.

Both teachers usually, but not always, shared learning targets at the learning activity level with their students. How the daily learning targets related to the major learning goal(s) for the unit, and how each teacher described them, was somewhat different. Liza posted "daily learning objectives" on a board in the back of her room. Each daily objective was a subcomponent of, or a step towards, one of the major unit learning goals. Liza used "objective language" in her descriptions. Kari identified learning topics that she applied to the major unit goal each session – i.e., identifying the properties of different geometric shapes. Kari described them as learning "topics". Regardless of how they described them, each teacher used learning targets in describing the focus of learning activity that included data collection and formative data use, and each had her students use daily learning targets in self assessment.

Both teachers collected and formatively used data focused on learning targets that were not associated with the content of the observed instructional unit. Liza frequently gave her students warm up tasks not related to the current instructional unit content, and she gave a weekly homework assignment that included tasks associated with learning targets that cut across the entire school year. Three times each week, Liza's students calculated homework completion rates, which included her collecting data about learning targets that were part of a prior unit. Kari's students completed computational practice worksheets focused on subtracting whole numbers from 1 to 12 three times a week, not the focus of the current unit. Kari engaged students in a number of tasks requiring them to self assess and described developing self assessment strategies as a focus of her instruction. Neither teacher clarified with students the learning targets that were not part of the current unit content.

Collecting data about student learning. I identified an associated occurrence of data collection about student learning in relationship to a discernible learning target for each instance of data used formatively in Liza's and Kari's classrooms. The methods used to collect data in both classrooms fell into three basic categories: observation, questioning, and student products. Liza and Kari both employed all three of these data collection strategies, although not with the same frequency. Table 6.2 includes the relative frequency of each data collection method for all of the identified formative assessment episodes in each classroom. One additional variation across the two classrooms not represented by the table was that students were frequently the data collectors in Kari's classroom. Students only collected and used their own data a couple of times in Liza's classroom.

Table 6.2 Data Collection Methods across Classrooms
Data Collection Method: Percent of Formative Assessment Episodes in Liza's Classroom / Percent of Formative Assessment Episodes in Kari's Classroom
Observation: 78.2 / 10.5
Questioning: 12.5 / 61.1
Products: 9.3 / 28.4

Liza and Kari used informal data collection methods (observation and questioning) at about the same frequency as part of the formative assessment episodes occurring in their classrooms. A little over 90% of the formative assessment episodes in Liza's classroom included her collecting data informally.

A little over 71% of the formative assessment episodes in Kari's classroom included informal data collection, but some of the instances for which products were used involved students self assessing or making adjustments to their learning tactics based on their own work products. When Kari was the one using the student learning data, over 91% of the formative assessment episodes involved her collecting data informally. The evidence from both classrooms supported the proposition that when formative assessment practice occurs, a variety of assessment methods, including informal methods, are used during (in addition to after) learning activity to collect information about student learning.

While both teachers used informal data collection methods with close to the same frequency, Liza primarily collected data through observation while Kari primarily collected data through questioning. Both teachers engaged students in completing math tasks and producing written products several times during each class but collected data informally about these products. Both used student homework assignments this way. However, how the teachers and their students used student homework assignments for informal data collection varied.

In Liza's classroom, students completed homework assignments based on the unit content two days each week. During the class sessions in which the homework assignments were due, Liza had students talk about their homework assignments with their table group. While students discussed how they had completed homework tasks, and in some instances helped one another with difficulties, Liza moved around the room observing the students' interaction.

Based on these observations, she provided oral feedback and/or engaged students in formative questioning about their homework tasks and their discussion of them.

In Kari's classroom, homework assignments were due every day. During the class session in which the homework assignment was due, Kari orally provided the correct answers to around half of the tasks while each student checked his/her own assignment. Then, students nominated one or more homework tasks to do as a class. This would launch a full class discussion that involved formative questioning, with Kari collecting data about student learning through the questions she asked different students in the class. Then, students used the data from the homework tasks they had checked to self assess about their learning in relationship to the major topic of the homework assignment.

The methods of data collection across the two classrooms varied with the types of data use. In Liza's classroom, the vast majority of data were collected through observation; she used those data most frequently to provide oral feedback to individuals, groups of students, or the entire class. In Kari's classroom, where the majority of data were collected through questioning, those data were used as part of formative questioning, first with the class as a whole, but also sometimes with individuals or small groups. Also in Kari's classroom, where a higher percentage of formative assessment episodes involved students using their own learning data to self assess or adjust their learning tactics, in the vast majority of those instances students used data collected through products.

However, when Kari was the one using the data, less than 10% of the formative assessment episodes used student products.

Both Liza and Kari collected data that aligned with the learning targets associated with their formative assessment episodes. In determining task to target alignment, I considered both the degree to which the different math tasks in which students engaged covered each learning target, and the degree to which the cognitive complexity of the tasks was at least as great as that of the associated learning targets. In both classrooms, the data collected covered the learning targets for each data use. In both classrooms, the tasks in which students engaged were also at least as, and sometimes more, cognitively complex than the learning targets. The major unit learning goal that Kari shared with students was less cognitively complex than what she explained in interviews as the "big ideas" for the unit. Many of the tasks in her classroom aligned with her verbal explanation of the "big idea" but were more cognitively complex than the learning target that she shared with her students. In addition, both Kari and Liza engaged students in completing tasks that provided data about learning targets that were not part of the unit content. The data collected aligned with the associated learning targets in these instances. In other words, the data they collected provided accurate information about the learning targets.

Analyzing and interpreting data about student learning. When the teachers were the ones using the data, a number of similarities were evident in data analysis and interpretation. When the teachers collected data informally as part of formative assessment episodes, they analyzed and interpreted it 'on the fly', which meant the teachers' analysis and interpretation were not observable, but rather inferred.

This included 90% of the formative assessment episodes in Liza's classroom and 71% of the formative assessment episodes in Kari's classroom (but 91% of those for which Kari was the one using the data). In both classrooms, when the teacher was the one using the data, the initial analysis of student work products was similar. This approach included identifying correct and incorrect responses, providing some comments about what was incorrect about certain student responses to tasks, and assigning an overall rating. The ratings were unsatisfactory, partially proficient, or advanced in Liza's classroom and 1, 2, 3, and 4 in Kari's classroom.

Liza often engaged in additional interpretation of student learning data collected through work products. Her interpretation included identifying tasks or parts of tasks with which multiple students struggled, or identifying patterns in how groups of students performed. She frequently used this interpretation during the next class session to do one of the following: adjust the learning activities for the entire class, determine with which students to "check in" during small group activities, determine to whom to provide an additional intervention, or select which students would work together as part of small group activities. Liza's interpretation was consistent with her formative uses of student learning data. Kari engaged in similar additional interpretation only on the end of unit test.

Another difference in how the teachers analyzed and interpreted student products related to the teachers' focus on student self assessment as a learning target.

At the end of the unit, Kari collected a student work product that included the students' written self assessments of their performance on assignments throughout the unit. She qualitatively reviewed these products for completeness and provided comments to the students about the degree to which their self assessment agreed with her evaluation of their work. Liza collected written information from students about their self assessment on several occasions, such as when students made notes about which tasks on the unit pre assessment were difficult for them, and whether or not they guessed on each task. Her students' self assessment notes informed Liza's interpretation of their responses to the actual math tasks, but Liza did not analyze and interpret her students' self assessment separately as evidence of their self assessment skills.

Another important difference between the two classrooms was who was doing the analyzing and interpreting of data. Almost 22% of Kari's formative assessment episodes involved students using, interpreting, and in most cases analyzing their own learning data. Less than 4% of the formative assessment episodes in Liza's classroom involved students analyzing or interpreting their own learning data. One frequent example in Kari's classroom was student analysis and interpretation of homework assignments. This occurred during almost every class session. Students corrected some of the tasks on their homework assignments (as Kari read the correct answers) and then used that analysis to rate their learning in relationship to the topical focus of the homework assignment. From these ratings, students then provided notes about how they needed to change their learning tactics.

Both teachers engaged students in analyzing, interpreting, and using their summative test results. Each teacher identified which tasks were incorrect on students' summative tests, provided students with an overall rating, and returned the tests to students for their analysis. Students in Kari's classroom grouped tasks by major unit learning topic and calculated the number of tasks correct over the total number of tasks by topic. They used this to interpret (on a form that was turned in to the teacher) their understanding of the major unit learning topics, and to make a connection between their test results and their learning tactics during the unit. Students in Liza's classroom reviewed their incorrect items from summative tests, determined whether their mistakes were small or big for each item and about which items they needed additional help, and explained their incorrect responses. Liza did not ask her students to interpret their test results in reference to the learning tactics in which they engaged during the unit. In both classrooms, students then had the option to correct their incorrect test items and turn them back in to the teacher to improve their overall test grades.

The scale of use of student learning data. I classified the relationship between the timing of data collection and data use as occurring at different levels or "scales". A variety of scales of use was evident in both classrooms. In Liza's classroom, the scales of use included: in the moment (during the current learning activity), during a subsequent learning activity within the same class session, during a subsequent class session within the same unit, and across units. In Kari's classroom, the scales of use included: in the moment (during the current learning activity), during a subsequent class session within the same unit, and across units.

This variety provided evidence in support of the proposition that teacher use of information about student learning to determine what to do next occurs at different levels, from "in the moment" adjustments to instruction or learning tactics, to shifts in the next activity, to changes in the unit as a whole, or to changes the next time the unit is taught.

In both classrooms, the vast majority of data uses for which the teacher was the one using the data occurred in the moment or during the current learning activity, immediately after data collection. This was consistent with both teachers basing the vast majority of their data uses on data that were collected informally. Some differences in the scale of use across the two classrooms were also evident. In Liza's classroom, data use sometimes occurred as part of a learning activity after the data collection, but during the same class session. This did not occur in Kari's classroom. Kari did not make adjustments on the fly based on student learning data. In Kari's classroom, where students more frequently used student learning data, a little over half of the time when the students were the ones using the learning data, the use occurred during a subsequent class session, rather than immediately after data collection.

A comparison of the evidence related to formative assessment episode propositions. What are the critical attributes of formative assessment practice as it occurs in situ? In this section, I summarize what my comparison of these two classrooms revealed about formative assessment episodes as they occurred in the two classrooms.

Definition of a formative assessment episode. I presented evidence from both classrooms regarding the first proposition. I identified more than a dozen formative assessment episodes per class session in both classrooms. Although it was not included in my definition or the proposition, a number of the formative assessment episodes occurring in Kari's classroom involved students, not just the teacher, collecting and using data. Both teachers clarified learning targets with students for a majority of the formative assessment episodes. In both classrooms, when the teacher was the one using the student learning data, the vast majority of formative assessment episodes (over 90%) also involved analysis and interpretation done 'on the fly'. While I inferred that analysis and interpretation happened from the teachers' subsequent actions, it was not observable.

Critical attributes of formative assessment practice. The components of formative assessment episodes in both classrooms provided additional evidence regarding the critical attributes of formative assessment practice, both the ways in which each component was similar and the ways in which they were different across the two classrooms. My comparison of the evidence related to the remaining two propositions for this research question is included in the section on data collection.

Uses of student learning data. Many of the different types of formative uses of student learning data were evident in both classrooms. The attributes of each type of data use had some similarities across the classrooms. One type of data use that occurred in Liza's but not in Kari's classroom was using student learning data to group students and subsequently adjust learning activity.

One type of data use that occurred frequently in Kari's classroom, but not in Liza's, was formative questioning with the full class. While both teachers activated students as resources for one another, Liza intentionally grouped students as part of this data use, while Kari allowed students to group themselves. Liza planned each class session in part based on learning data collected during the prior class session, while Kari only made a few small adjustments during a few class sessions that reflected deviations from what was planned at the beginning of the unit. While both teachers took time to write descriptive comments on students' work, with a few exceptions, neither teacher gave students time to read the comments or specific opportunities to use them. Liza provided oral feedback to, and engaged in formative questioning with, individual students or small groups of students much more frequently than Kari did. Liza's oral feedback and questioning varied from short, quick comments to much longer interactions. Kari's oral feedback to, or questioning with, individuals or small groups of students was infrequent, except on two days during the unit. Both teachers engaged students in self assessment. This practice was part of every homework assignment in Kari's classroom, but not in Liza's. Kari treated developing student self assessment skills as a learning target for her students.

The most frequent types of data use in each classroom differed substantially. By far, the most common use of student learning data in Liza's classroom involved her providing oral feedback to, and/or engaging in formative questioning with, individual students or small groups of students. These uses constituted 72% of all of the formative assessment episodes occurring in Liza's classroom. Another formative use of student learning data that occurred relatively more frequently in Liza's classroom than in Kari's was activating students as resources for one another.

Liza also used student learning data to group students for subsequent learning activity; Kari never did. The most frequent use that Kari made of student learning data was to engage the full class in formative questioning. Close to half of the formative assessment episodes occurring in Kari's classroom for which the teacher was the one using the data involved this type of data use. Kari frequently used the information she gained during formative questioning to adjust the learning experiences of the class as a whole, including how many different tasks the class would work on together, how much explanation she would provide, and which aspects of student understanding she would probe further. Kari also engaged students in self assessment and adjusting their learning tactics much more frequently than Liza did. Students were the users of their own learning data for almost 22% of the formative assessment episodes occurring in Kari's classroom and slightly less than 4% of the formative assessment episodes occurring in Liza's classroom.

Data collection. The similarities and differences between the two classrooms with regard to data collection corresponded with similarities and differences in data use. When the teachers were the ones using the data, the vast majority of data collection as part of formative assessment episodes in both classrooms was informal. However, the actual method used was not the same – Kari predominantly used questioning and Liza more frequently used observation. Both Liza and Kari used student work products as part of a little less than 10% of the formative assessment episodes in which they were the ones using the data.

Frequently, both teachers collected data informally about work products students had already generated or were in the process of generating. This evidence supported the second proposition: when formative assessment practice occurs, a variety of assessment methods, including informal methods, are used during (in addition to after) learning activity to collect information about student learning.

Data analysis and interpretation. The teachers' analysis and interpretation of learning data was consistent with their informal data collection. In both classrooms, student learning data that were collected informally were analyzed and interpreted 'on the fly'. This meant that the teachers' analysis and interpretation of student learning data was frequently not observable. The teachers' analysis of student products was also similar and involved identifying incorrect responses and providing an overall score. However, their interpretation differed in ways that were consistent with how they formatively used the data. Kari's interpretation was in reference to the learning of individual students. Liza interpreted data collected through student products in reference to the learning of individual students; however, she also identified tasks with which multiple students had difficulties, and/or considered how groups of students performed.

When students were the ones using their own learning data, data collection was almost always based on a formal work product in both classrooms, although this occurred much more frequently in Kari's classroom than in Liza's. Both teachers engaged students in rating their performance. Both teachers engaged students in additional analysis of their graded summative tests, although how they had students analyze and interpret their own summative test results was not the same.

PAGE 370

additional analysis of their graded summative tests. However, how they had students analyze and interpret their own summative test results was not the same. Liza’s students focused their analysis on the problems they answered incorrectly, whether their mistakes were small (computational) or big, and whether or not they needed additional assistance. Kari’s students focused their analysis on the topics on which they did well or poorly and on how their performance related to their efforts during the instructional unit. Scale of use. The scale of uses of student learning data brought together data use and data collection temporally. The evidence from both classrooms supported the proposition that teacher use of student learning data to decide what to do next occurs at different levels or scales. The scales of uses of student learning data in Liza’s classroom varied more than in Kari’s classroom. A Comparison of How the Social Context Influenced Formative Assessment Practice I identified four propositions to focus my analysis related to how the social context of a K-12 classroom influences formative assessment practice within the classroom. The first proposition represents an application of the second half of my theoretical framework for this study, my definition of the components of the “social context” of a classroom. My analysis associated with this proposition, for each classroom, included providing a description of the Unit Learning Context within those components: object, tools, participation/roles, and rules or norms. This grounded my analysis related to the remaining propositions.
I organized my comparison of the two classrooms by the components of the social context of a classroom but combined the unit and activity levels. First, I compared the evidence regarding the first proposition, the degree to which I was able to describe the unit learning context with the categories identified in the proposition, across both classrooms. Next, I compared the relative frequency of formative assessment episodes occurring within different types of learning activities across both classrooms over the whole unit. Then, I compared how each component of the social context of these classrooms (both at the unit and activity level) influenced the formative assessment episodes occurring within each classroom. In the section comparing the tools used to mediate learning in each of the classrooms, I also compared the degree to which the features of cultural tools and their use illuminated the critical features of formative assessment practice in each classroom, and how each teacher used student learning data as a social object. My comparison of how the social context of these classrooms influenced formative assessment practice includes the following sections: components of the unit learning context, types of learning activities, object, tools, participation structures (who participated and their roles), norms and rules, and a comparison of the evidence related to the social context of the classrooms. Components of the unit learning context. I described the unit learning context in Liza’s and Kari’s classrooms using the components identified in my first proposition. These included the following: the object(s) of the unit, the tools used, the participation structures (who participated and their roles), and the rules and norms. However, in Kari’s classroom, one additional
aspect of the social learning context clearly had an impact on student learning – the physical layout of the classroom and how space within the classroom was arranged and used. Kari’s classroom was open to two other classrooms, and she had a separate table within her classroom at which a paraprofessional engaged 3-5 students in parallel dialogue throughout each math class session. Liza’s classroom was more traditional with four walls, a door, and no separate learning activity occurring within the classroom during the math class session. Evidence from Kari’s classroom suggests that the physical layout and use of space within the classroom may be important to explicitly consider as part of the social learning context. Four additional differences across the classrooms, regarding who participated and how, established further context for my cross-case comparison. First, Liza taught 6th grade math; Kari taught 5th grade. While I observed math instruction in both classrooms, in Liza’s it was one of the three different 6th grade math classes she taught. In Kari’s it was the time used for math instruction for her 5th grade class. This meant that Liza could adjust the seating of her students solely based on their math learning, while the seating of Kari’s students likely also reflected the other content areas she taught. Second, Liza was in her fourth year of teaching and Kari was in her eighth. It was unclear if or how Kari’s additional experience related to this study. Third, the demographics of the students were very different. The students in Liza’s classroom were more diverse with regard to race/ethnicity and socioeconomic status. It is unclear how this may have related to differences between the two classrooms. Fourth, the amount of time devoted to math instruction in each classroom was different. Liza’s class sessions were 90
minutes long. However, because the observed class session was the first class of the day, activities related to starting the school day took 10-15 minutes of most class sessions – the Pledge of Allegiance, daily school-wide announcements, etc. Kari’s math class sessions were technically 60 minutes long, but sometimes this varied. One day, close to two full hours were devoted to mathematics; another day, less than 45 minutes were devoted to math instruction. Overall, the time available for math instruction was comparable across the two classrooms. However, Kari had more flexibility to provide more or less time for mathematics. She extended the time for mathematics one day near the end of the unit to make up for time lost to a lockdown drill and a day when the entire school took a break to read a book. Types of learning activities. The types of learning activities that made up the observed units and during which formative assessment episodes occurred varied between the two classrooms. Even learning activities that I categorized the same often looked, sounded, and felt somewhat different within each classroom. Table 6.3 provides a comparison of the formative uses of student learning data occurring during each type of learning activity that was part of the observed instructional units. For each classroom, I indicate the average number of formative assessment episodes occurring per occurrence of that type of learning activity in parentheses next to the activity name. When a similar type of learning activity occurred in both classrooms, I listed them on the same row. However, for both classrooms, I used the label that the teacher attached to the activity even if the labels were different. I included the activity, “planning”, to indicate that some formative
assessment episodes occurred outside of class sessions as the teacher was planning for the next class session. The middle column includes my comparison of the types of data use occurring during each learning activity across the two classrooms. More formative assessment episodes occurred in Liza’s classroom than in Kari’s classroom during every type of learning activity except three – the gap attack (no parallel activity in Liza’s classroom), and reviewing and correcting the end of unit test results (which Liza combined into one activity). These activities occurred on the day before and the day after the end of unit test in Kari’s classroom, and the social context of these activities paralleled the context of many activities occurring in Liza’s classroom on most days. In the remainder of this chapter, I provide a more detailed comparison of the different components of the social learning context across these two classrooms at the activity level, using the unit level context information as background. For each component, I also consider the relationship to the formative assessment episodes that occurred during each type of learning activity. The components addressed include the following: object, tools, participation structures, and norms and rules. Following these sections on each component of the social learning context at the activity level, I provide a summary of the evidence comparing the two classrooms related to my second research question.
Table 6.3 Types of Learning Activities across Classrooms (columns: Types of Learning Activities in Liza’s Classroom, Comparison, Types of Learning Activities in Kari’s Classroom)

Liza’s classroom: Warm Up (10). Kari’s classroom: Timed computational practice (1.2). Comparison: In both classrooms, students spent time at the beginning of some class sessions engaged in math tasks. The tasks Liza used varied more than the tasks Kari used in this parallel activity. Liza engaged students in a wide variety of formative uses of this student learning data – providing oral feedback, facilitating formative questioning, and using student work as a social object to provide full class feedback. Kari made a single formative use each session – adjusting the tasks students received the next day.

Liza’s classroom: Checking Hawk Math (3). Kari’s classroom: Checking homework (1.9). Comparison: Liza’s students’ grading of a weekly homework assignment (Hawk Math) and Kari’s students’ grading of all homework assignments were similar. However, the teachers used the learning data differently. In Liza’s classroom, while they were correcting their Hawk Math assignments, students sometimes asked questions, which resulted in Liza providing oral feedback to the student. In Kari’s classroom, students used their partially corrected homework results to self assess. Then Kari’s students selected some homework tasks as a focus for solving homework problems as a class.

Liza’s classroom: Homework Circles (6.3). Kari’s classroom: Solving homework problems as a class (4.5). Comparison: Liza and Kari both used homework as a data source for formative assessment episodes. Liza focused on providing oral feedback and facilitating formative questioning with small groups of students, and activating students as resources for one another. Then she used individual student work on homework tasks as a social object to provide oral feedback to the whole class. Kari did not engage with individuals or small groups related to homework assignments, but she did use individual students’ solutions to homework problems as a social object and as a data source for formative questioning with the whole class.

Liza’s classroom: Homework Challenge (2.3). Kari’s classroom: no parallel activity. Comparison: Students calculated the class’s homework completion rate three times each week in Liza’s classroom. During this activity, students sometimes asked questions which resulted in oral feedback. No parallel activity occurred in Kari’s classroom.

Liza’s classroom: no parallel activity. Kari’s classroom: Student progress monitoring (1.4). Comparison: Each day, Kari gave students a few minutes, after they had completed checking some of the tasks on their homework, to self assess using a progress monitoring sheet. They rated their learning of the homework topic and made notes about changing their learning tactics. Kari sometimes asked students to share their self assessment notes and asked other students to indicate if they had captured similar notes. No parallel activity occurred in Liza’s classroom.

Liza’s classroom: Coding Text (1) and Investigation (18.7). Kari’s classroom: Introducing new content (3.8). Comparison: Both teachers clarified the learning target (objective or topic) and vocabulary associated with the new content as part of introducing new content. As part of introducing new content, Liza’s students “coded text” from the primary instructional resource and Liza provided oral feedback to the class about what she was seeing students code from the text. Kari sometimes presented content orally from the text, but did not provide feedback as part of this. Both teachers also engaged students in some exploratory activity as part of introducing new content. Liza’s students worked in groups during this exploration, and Liza used that time to engage groups of students in formative questioning or to provide oral feedback. Kari’s students sat in groups, but primarily worked independently. Kari used this structure to engage in full class formative questioning.

Liza’s classroom: In Class Practice and Partner Switch (14.3). Kari’s classroom: In class practice problems (9.7). Comparison: Both teachers provided tasks for students to complete in class after introducing new content. Liza grouped students to work on them and engaged groups or individuals in formative questioning or provided oral feedback. After they had completed some or all of the problems, Liza asked a student or a group of students to share their work on the board, and used this shared work as a social object to provide oral feedback to the class. Kari’s students worked individually or talked with other students sitting near them (not intentionally grouped), and Kari occasionally provided oral feedback to individuals or small groups. Kari engaged the class as a whole in formative questioning using students’ oral responses as a social object.

Liza’s classroom: Ticket Out (1.0). Kari’s classroom: no parallel activity. Comparison: At critical junctures during the unit, Liza gave students a task to complete at the end of class. Students self assessed regarding their effort and their learning. No parallel activity occurred in Kari’s classroom.

Liza’s classroom: Reviewing Ticket Out Results (4; formative assessment episodes may have been undercounted because of audio recording difficulties). Kari’s classroom: no parallel activity. Comparison: The day after students completed a ticket out task, Liza grouped students based on their performance to review their results. As students worked, she sometimes provided intensive support to a few students and engaged small groups in formative questioning or provided oral feedback. No parallel activity occurred in Kari’s classroom.

Liza’s classroom: Unit Pre-Test (5). Kari’s classroom: Unit Pre-Test (no formative assessment episodes occurred). Comparison: Liza’s students completed a pre-test during the first day of the unit. She asked students to self assess regarding how difficult each item on the pre-test was for them. She used the results of this pre-test to re-seat students for the unit. The 6th grade team used the pre-test results to determine pacing for the unit. Kari’s students also completed a pre-test as homework, but she did not use the results formatively.

Liza’s classroom: Proficiency Poster (2.5). Kari’s classroom: no parallel activity. Comparison: In Liza’s classroom, students responded to a proficiency poster question once at the beginning and once at the end of the unit. She used the first one as a measure of students’ conceptual understanding before beginning the unit. She used the second one to demonstrate to students how they had progressed during the unit. She posted student responses on the wall by performance level – providing a visual rubric of performance at each level on the task. No parallel activity occurred in Kari’s classroom.

Liza’s classroom: no parallel activity. Kari’s classroom: Gap attack (13). Comparison: The day before the end of unit test, Kari set up 5 centers, each reflecting a major learning topic from the unit. Students reviewed their progress monitoring sheets from the entire unit, and some of their graded homework, to determine which center to go to. Kari engaged students in formative questioning or provided oral feedback at different centers. At the end of this activity, students self assessed. No parallel activity occurred in Liza’s classroom.

Liza’s classroom: Summative Quiz (4.3). Kari’s classroom: End of Unit Test (no formative assessment episodes). Comparison: During the summative quizzes, both teachers interacted with students who raised their hands. Liza provided oral feedback or engaged the student in formative questioning. Because of problems with the audio recording, it was not possible to discern whether Kari engaged in formative questioning or provided oral feedback to individual students during the end of unit test.

Liza’s classroom: Quiz Circles (5; I may have undercounted formative assessment episodes due to an audio recording issue). Kari’s classroom: Reviewing unit test results (31) and Correcting items from the unit test (7). Comparison: Both teachers took class time to engage students in making meaning of summative quizzes or tests. Both teachers activated students as resources for one another in doing some additional analysis and interpretation of their results and correcting their mistakes. Kari split reviewing performance and correcting items from the unit test into two activities. Before they began correcting test items, Kari engaged students in a two part self assessment process that included students determining the number of items they got correct for each major unit learning topic, writing about how they performed by learning topic, and discussing their quiz results as a class. Then students self organized into groups to correct test items, and Kari activated students who did well on the test as resources for the other students. Liza combined these activities, and her students’ self assessment of their performance was less involved. Small groups talked about their incorrect items and determined whether their mistakes were small or large before they worked on correcting items. Both teachers engaged with the small groups while students were correcting test items. Kari provided oral feedback and facilitated formative questioning with the small groups. Unfortunately, due to problems with the audio recording in Liza’s classroom, it was only possible to see that she talked with small groups, not what that talk included. Thus, it is likely that I underestimated the number of formative assessment episodes occurring during her Quiz Circles. However, Kari also engaged her students in much more extensive self assessment as part of her review of summative test results, so it is likely that even with an undercount in Liza’s classroom, more formative assessment episodes occurred during these activities in Kari’s classroom.

Liza’s classroom: Planning (1.2). Kari’s classroom: Planning (no formative assessment episodes). Comparison: Liza usually waited until after each class session to finalize her plans for the next class session based on student learning data. Kari did not generally make adjustments to her plans for the next day based on data collected the previous day.

Object. Many similarities were evident in the object of learning activity across these two classrooms at both the unit and activity levels. However, Liza and her students shared the object of learning activity to a greater degree than Kari and her students. At the unit level, both teachers identified the object, or major student learning goal(s), for the observed units, and defined the object of activity in terms of student learning. Both teachers posted the object(s) of the unit on a board in the classroom and discussed these objects with their students. Kari included one additional component in her oral description to me of the “big ideas” of the unit that was more cognitively complex than what she provided to her students as the major unit learning goal. Liza’s
description of the “big ideas” had the same cognitive complexity as what she shared with her students. At the activity level, both teachers identified the object of all classroom activity associated with the content of the unit. Liza communicated these as daily objectives. Kari communicated them as “topics”. Both teachers also occasionally interacted with students about the object of specific learning activities, although Liza did this more frequently than Kari. In both classrooms, some learning activity occurred that was not directly associated with the content of the unit. While both teachers described the object of these activities to me, they did not communicate the objects of these activities to their students during the observed class sessions. The degree to which each teacher and her students shared the object of learning activity had the potential to influence the formative assessment episodes occurring during different learning activities. We did not ask students about this directly in either classroom. However, student engagement in different learning activities provided indirect evidence regarding the degree to which they shared the object with their teachers. This is one aspect of the social learning context that was different across these two classrooms. In Liza’s classroom, students were very engaged in almost every learning activity and their actions were almost always in alignment with the object of class activity as represented by Liza. On the rare occasion that students’ attention was elsewhere, Liza immediately got them back on track. In Kari’s classroom, most students were very engaged in some learning activities but a significant percentage of students
were not engaged in others. Student engagement during activities that included full class discussion (e.g., solving homework problems as a class, new content introduction, or in class practice problems) was highly variable. Usually about of the students in the class were doing other things during full class discussions, the learning activities during which over half of the formative assessment episodes occurred. It is unclear what contributed to students’ lack of engagement in full class discussions. Possible explanations included the physical layout and use of space, which resulted in a lot of ambient noise; that a number of students did not complete the homework assignments that focused daily full class discussions; or that the behavioral norms of the classroom, which included participation, were not enforced. Regardless of the cause, when they were engaged in other activities, students did not share Kari’s object for the activity. One challenge for formative assessment episodes occurring during an activity in which the teacher and some students do not share the object is that learning data collected during the activity could be inaccurate. Kari did not learn about understanding or misunderstanding of the current learning topic through her formative questioning of the students who did not actively participate in full class discussions (only about of the students in the class contributed information to the discussions, and usually the same students). As a result, she may have underestimated their misunderstanding and overestimated the understanding of the class as a whole. She did not consider that the data she collected did not represent some students’ learning when she used it to adjust the learning experiences of the whole class.
My comparison of the object of learning activity, and in particular the degree to which the teacher and students shared the object of learning activity, provides evidence related to the second proposition – that the classroom social learning context influences how data about learning is used by teachers and students. In Kari’s classroom, because some students evidently did not share the learning object with Kari for full class discussions, the data collected during this activity did not include them. Tools. I focused my comparison of the cultural tools used in Liza’s and Kari’s classrooms on the degree to which the tools available to both teachers reflected different social contexts and how tools mediated formative assessment practice and student learning. I also provide evidence related to three of my study propositions: 1) The classroom social learning context (in particular the cultural tools used) influences how data about learning is used by teachers and students; 2) The features of cultural tools used during formative assessment episodes and how they are used illuminate critical features of formative assessment practices; and 3) In a classroom where formative assessment practices are evident, information about learning (including mistakes or misunderstandings) becomes a “social object” (a tool) that is valued both in terms of how it is described and how it is used. The features of the cultural tools used in each classroom were similar in many ways but with some important differences. The most substantial differences between the two classrooms related to the use of tools as part of formative assessment episodes.
In this section, I compare the types of tools, their use in formative assessment episodes, and how information about learning became a social object in each classroom. Comparison of types of tools. The types of cultural tools used in Kari’s and Liza’s classrooms were similar. They included the following: equipment, supplies, resources posted on classroom walls, instructional materials provided by the district/school, teacher created instructional materials, and student math notebooks. In comparing this aspect of the social context of these two classrooms, I first describe the similarities and differences between the classrooms by each type of tool. Equipment. Both classrooms had similar equipment that the teachers used during math instruction, including the following: two dry erase boards on which information was written or projected, an LCD projector (Liza had two; Kari had only one), a document camera, and four or five desktop computers on the side of the room for student use. One of the dry erase boards in Liza’s classroom was a SMART board (that she had acquired through a grant), and she had a hand-held electronic board that she could write on to project on the SMART board. Kari had neither. Kari had a sound system used to amplify her voice every day, which Liza did not have. Both teachers wrote information on their dry erase boards and projected information onto them with their LCD projectors. Both teachers used their document cameras to project pages from their instructional resources. Liza occasionally used her document camera to project student work; Kari did not. Liza’s students wrote on the SMART board in her classroom most class sessions to show their work on math tasks. Kari only allowed a student to write on
her board once during the observed unit. A small group of Liza’s students did additional tasks using the computers on the side of her room during one class session. Kari’s students did not use the computers in her room during the observed unit. Supplies. Both teachers had shared supplies and supplies that students were responsible for individually managing. In Liza’s classroom, shared supplies were stored in a closed cabinet or carried on her person in an apron pocket. In Kari’s classroom, shared supplies were stored on shelves or in open bins in the back of the room. Some of the supplies that were shared in Kari’s classroom (paper and extra pencils) were the students’ responsibility to bring in Liza’s classroom. The different grade levels of these classes may explain this difference. Liza and Kari managed their supplies differently. When and how students accessed supplies was restricted in Liza’s classroom; it was not in Kari’s classroom. How they supported students sharpening their pencils is an illustrative example. Liza carried a small manual pencil sharpener in an apron that she wore each class session. If students needed to sharpen their pencils, they raised a hand and Liza gave them the manual pencil sharpener to use. She explained that this kept pencil sharpening from interrupting learning. In Kari’s classroom, students frequently got up from their seats during learning activity to sharpen their pencils using the noisy electronic pencil sharpener in the back of the room. Resources posted on classroom walls and the ceiling. Both teachers used the walls of their classrooms to share information relevant to the content of the observed units and behavioral rules and norms. Liza or her students used almost everything
posted on the walls or ceilings in her classroom during the observed unit. Some of the resources posted on the walls in Kari’s classroom seemed to be decorative. However, many of the resources pertained to content areas other than mathematics. This was not necessary in Liza’s classroom since Liza only taught mathematics. Both teachers posted the major unit learning goals on the walls of their classrooms. Liza also posted daily objectives and an agenda on her walls each day. Kari wrote a learning topic on the board during homework checking each day. Both teachers shared information used during math instruction on their walls. Kari posted math terms and tracked student progress on computational tasks. Liza hung math terms for the current unit from the ceiling, posted algorithms used during the current unit, and created a visual rubric and posted students’ work samples on the rubric at the beginning and the end of the unit. Both teachers posted information on their classroom walls related to behavioral rules. Kari posted several different posters with behavioral rules. When asked, students did not refer to most of the posted rules. Liza only posted one rule, “Do not disrupt others’ learning.” Both Liza and her students referred to this rule. Kari posted student jobs on the wall in her classroom. Liza used a chalkboard to track students’ earned preferred activity time during each class session. District- or school-provided instructional materials. Both teachers had access to and used a primary instructional resource provided by the district. This included a teachers’ manual and student workbooks. In Liza’s classroom, students did not write in their workbooks and returned them at the end of the unit. In Kari’s classroom, the
students did write in their workbooks. Liza used two of the investigations from her primary instructional resource. Kari used almost all of the pages from her primary instructional resource. Both teachers used materials that supplemented the primary instructional resource and that were also used by other teachers across each of their schools. Liza used Math Mates as a weekly assignment, which included tasks associated with math topics across the entire school year. Kari used The Great Math Race as an in class practice three days a week to strengthen students’ computational skills. Teacher developed materials. Both teachers supplemented the instructional materials provided by the district and school with their own materials. They used these materials to structure learning activity during which a significant number of formative assessment episodes occurred. Liza’s teacher developed materials included: a warm up form, a homework challenge tracking sheet, ticket out forms, a unit assessment overview, a reference sheet, a Hawk Math tracking sheet, a progress monitoring assessment, and a progress monitoring chart (although the last three tools were not used during the unit that was the focus of this study). Kari’s teacher developed materials included a progress monitoring sheet, gap attack center directions, a math self assessment form, and an answer analysis form. All of the 6th grade math teachers in Liza’s school met weekly to plan instruction, but the other teachers did not use the teacher developed materials Liza used. Kari and the other 5th grade teachers at her school all used the same teacher developed materials.
Student math notebooks. Both teachers expected their students to maintain math notebooks. Liza’s student math notebooks were 3-ring binders with commonly defined sections and tabs. Students stored a variety of materials in their notebooks. They left them in Liza’s classroom, storing them between class sessions in a cupboard against the left front wall (near the door to the classroom), and picked them up as they entered the classroom each day. Kari’s student math notebooks were spiral notebooks that they kept at their desks. Students primarily used these notebooks as scratch paper. There was no explicit organizational structure, and in some cases very little was written in her students’ math notebooks. Tools used in formative assessment episodes. Two groups of overlapping tools had the greatest potential for influencing formative assessment practices and illuminating the critical features of formative assessment practices in both classrooms: 1) teacher created materials used during the learning activities when a high frequency of formative assessment episodes occurred; and 2) tasks that were used in the collection of student learning data (e.g., the mathematical tasks included in assignments and assessments). The first category contributed to the definition of participation structures for different learning activities during which formative assessment episodes occurred. The second determined the degree of alignment between the collected data and the learning targets. My comparison of the relationship between cultural tools and formative assessment episodes focused on these two categories.
Tools used to support formative assessment episodes. Both teachers used materials they created to provide structure for activities during which formative assessment episodes occurred. Liza used a few additional tools not used in Kari’s classroom, including directions for small group interaction and a timer. Both teachers used teacher created materials to help students connect math tasks with associated learning targets and to support student self assessment about their learning in relationship to the learning target. For example, Liza had students use a Warm Up form, divided into sections by topic (e.g., number sense), at the beginning of each class session. Liza put the tasks in the section associated with the topical focus of the task. Liza’s students also wrote the learning objective on the front of their Ticket Out forms next to where they wrote the ticket out task and their response. On the back of the same form, students rated their learning and their effort towards meeting that learning objective. Kari’s students wrote the topic of each homework assignment on a row of their Progress Monitoring sheet before they rated their learning in relationship to that topic and made notes about what they needed to do to improve. Both teachers used materials they had created to support student analysis and interpretation of summative test results. The one tool used in Liza’s classroom for this purpose was the Unit Assessment Overview. Using that tool, students identified the big idea for each item on the summative test, indicated if their response to the item was correct or incorrect, and, if incorrect, indicated whether it was a “simple error”. Then they made notes about whether or not they needed additional help with the item. For the items that they responded to incorrectly, students completed an additional table that included
prompts to indicate why they answered the item incorrectly and a space for them to provide a correct response and show their work. Kari used two tools for a similar purpose. The first was an Answer Analysis form. Students used this form to identify which items on the summative test they answered correctly by topic and to determine a score (number correct out of total number of tasks) by topic. Then they used the Math Self Assessment form to respond to prompts which guided their review of their test results. The teacher created resource that students used in Liza’s classroom to make meaning of their summative tests focused student attention on explaining their errors, while the ones used in Kari’s classroom focused student attention on how their actions during the unit contributed to their results. One important difference in the teacher created tools used in each classroom as part of formative assessment episodes was the type of student participation the tools supported. Almost all of the teacher created tools used in Kari’s classroom supported students’ engaging in individual learning activity. One exception was Kari’s Gap Attack center directions, which identified math tasks related to a specific learning topic for students to complete, along with the correct responses to each task. Students completed those tasks in small groups, although the tool did not provide additional support for how students worked in the small groups. A number of Liza’s teacher created tools (e.g., directions prepared in advance and projected on the SMART board) supported student participation as a small group. For example, the directions that Liza projected on the SMART board as students engaged in the homework circle activity were prompts guiding interactions among a small group of students. Liza projected a timer on the board for
almost all group activities. Liza’s structure of student interactions at the small group level created a context within which a number of formative assessment episodes occurred, including Liza providing oral feedback or engaging small groups in formative questioning and/or students providing feedback to one another. Kari did not provide directions to structure the activity of small groups of students. Far fewer formative assessment episodes occurred at the small group level in Kari’s classroom. Tasks used in the collection of student learning data. A variety of tasks were used in the collection of student learning data as part of learning activity during which formative assessment episodes occurred in both classrooms. The alignment of tasks to learning targets is a precondition for formative assessment practice to occur. The tasks used by each teacher illuminated the learning targets about which they collected data. The tasks also determined the level of instructional decisions the teachers made based on the data. I analyzed the alignment of the tasks to the learning targets for different activities in both classrooms. As described above, all of the tasks used in data collection covered the content of the associated learning target and matched or exceeded the level of cognitive complexity of the associated learning target in both classrooms. In Liza’s classroom, the vast majority of tasks focused on the daily learning objectives, creating the potential for uses of student learning data during the current class session or the next class session. A few of the tasks (such as those included in the pre-unit assessment and the proficiency poster) went beyond the daily learning objective, focusing on one of the major unit learning goals. These tasks created the
potential for uses of student learning data at a level beyond the current class session, or even the next class session, but still within the current unit. Finally, some of Liza’s tasks focused on learning objectives that went beyond the current unit, creating the potential for uses of student learning data that went beyond the current unit. Kari used tasks focused on three categories of learning targets. The first category, topics associated with the major topics and the unit learning goal (treated as daily learning objectives), was similar to Liza’s classroom. Tasks with this focus created the potential for data use during the same class session or the next class session. The second category was a perceived gap in the primary instructional resource – basic computation of whole numbers. This was not a learning target from a different unit, but rather a prerequisite for units across the school year. Data collected using tasks focused on computation of whole numbers determined what computational tasks students would complete during a subsequent class session. Students’ self assessment skills were the final category. These tasks illuminated that Kari considered students’ gaining self assessment skills to be a target for learning in her classroom. Students were the primary users of data from these tasks. Kari only reviewed her students’ self assessment data at the end of the observed unit. The tasks used in both classrooms illuminated the learning targets and determined the level of data use for formative assessment episodes, even though the types of learning targets on which tasks focused were different. Summary of evidence related to the propositions. My comparison of the tools used in Kari’s and Liza’s classrooms revealed differences in how each teacher used tools
even though the physical tools used in both classrooms were similar. For example, both teachers had dry erase boards and document cameras. Liza used these tools to facilitate students’ sharing their work with the class. Kari did not allow students to share their work directly with these tools. I focused on two types of tools, teacher created tools that structured formative assessment episodes and tasks used in data collection, to compare the relationship between the tools and the formative assessment episodes during which the teachers and their students used them. Both teachers used materials they had created to provide structure for formative assessment episodes. These tools supported learners making connections between the learning target and the tasks, and their self assessment of their learning in relationship to the task. Both teachers used materials they had created to support student analysis, interpretation, and use of their summative test results. In Kari’s classroom, this included students using data about their own learning to make the connection between their performance and their learning tactics. In Liza’s classroom, tools facilitated student interpretation of learning results and identification of why they made mistakes, but did not support them making the connection between their performance and their learning tactics. Liza used tools to structure student interaction in small groups. Liza provided oral feedback and initiated formative questioning with small groups of students during every class session. This did not occur or occurred infrequently in Kari’s classroom during typical class sessions. Both teachers used tasks to collect data about student learning aligned with daily learning targets. Both used the data in the moment, or during the current class session. Some of the tasks used in Liza’s
classroom covered the major unit learning goals, and she used the data to make decisions across the unit. Some of the tasks used in Kari’s classroom focused on student self assessment skills. Using student work as a social object. My final proposition related to the cultural tools was that information about learning (including mistakes or misunderstandings) becomes a “social object” (a tool) that is valued both in terms of how it is described and how it is used. Both teachers frequently used student work as a social object, sometimes several times per class session. The many examples of this provided evidence in support of most aspects of this proposition. However, how each teacher used students’ work as a social object varied. Liza’s students wrote their own work on the board as part of sharing it. Kari wrote what students orally reported on the board for others to see. Liza encouraged students to share their work, even when it included mistakes or misunderstandings, and asked students to interact with the mistakes of their peers. Kari did not share students’ mistakes or misunderstandings. Both teachers asked students to indicate if their work agreed with the shared student work. Liza gave students options of more than one example of student work with which they could agree or disagree. Kari did not. While the many instances of both teachers using student work as a social object support most aspects of the proposition, each teacher used mistakes or misunderstandings evident in student work differently. This illuminated an attribute of Liza’s formative assessment practice: she allowed her students to learn from their peers’ mistakes; this was not present in Kari’s formative assessment practice.
Participation structures. The proposition that participation structures influence teacher and student data use framed my comparison of the participation structures of learning activities in Kari’s and Liza’s classrooms. In Table 6.3, above, I compared the formative assessment episodes occurring during different types of learning activities in each classroom. Here I compare the participation structures for the types of learning activities during which formative assessment episodes occurred. This comparison focused on the following attributes of learning activity participation structures: the nature of the student role(s), whether it included students completing math task(s) or talking about math task(s) they had already completed; how students participated – individually, in small groups, or as a full class (some activities included all three); the nature of the teacher role(s) during each type of activity, whether it included the teacher interacting with individuals or small groups, only responding when individual students or groups of students asked questions, or providing content (direct instruction); and whether the teacher formally collected student learning data as part of the activity. Table 6.4 includes a row for each type of learning activity in each classroom during which formative assessment episodes occurred. Similar to the table above, when similar activities occurred in both classrooms, I listed those activities on the same row, but used the label the teacher used to refer to each activity. I indicated the number of formative assessment episodes during each instance of that type of learning activity in
parentheses after the activity name. In the middle column of Table 6.4, I compare the participation structures for each activity for each attribute listed above. Below the table, I discuss themes evident in the participation structures of activities during which formative assessment episodes occurred across the two classrooms. Similarities were evident across the two classrooms with regard to the participation structures of most of the activities that included formative assessment episodes. One similarity was that the student roles in most of these activities included doing math tasks or talking about math tasks. In Kari’s classroom, the student roles also frequently included students self assessing based on math tasks they had already completed. Another similarity was that many of these activities resulted in formal data collection, even though most of the formative assessment episodes occurring during the activities utilized data the teachers collected informally.

Table 6.4 Comparison of Participation Structures for Learning Activities (columns: Liza’s Classroom, Participation Structure Comparison, Kari’s Classroom)

Liza’s classroom: Warm Up (10). Kari’s classroom: Timed computational practice (1.2). Comparison: In both classrooms, the warm up and the computational practice activity involved students individually doing math tasks. In Liza’s classroom, this activity also involved students talking about the math tasks in a small group, some students writing their responses to the math tasks on the SMART board, and the class as a whole using the work as a social object. It also included Liza initiating interactions with individual students and small groups of students to provide oral feedback or formative questioning. It did not include formal data collection. Timed computational practice in Kari’s classroom only included her formally collecting data and using it to adjust the practice tasks the students received during the next class session. Students did not talk about or interact further with the data collected. It did include formal data collection.

Liza’s classroom: Checking Hawk Math (3). Kari’s classroom: Checking homework (1.9). Comparison: In both classrooms, these parallel activities involved the teacher providing correct responses and students individually checking their responses to at least some of the assigned homework tasks. Both teachers primarily responded to questions students asked. In Kari’s classroom, students also sometimes talked about homework problems as a full class and the teacher engaged the whole class in formative questioning. In both classrooms, these activities involved formal data collection.

Liza’s classroom: Homework Circles (6.3). Kari’s classroom: Solving homework problems as a class (4.5). Comparison: In Liza’s classroom, as part of Homework Circles, students participated in small group discussions of math tasks they had completed. Then they participated as part of a full class discussion during which Liza used student work as a social object and provided oral feedback to the class. As part of solving homework problems as a class in Kari’s classroom, students participated as a whole class, although roughly of the students did not participate. Kari used student work as a social object and engaged the full class in formative questioning. In both classrooms, these activities involved the teacher using student work as a social object and ended with the teacher collecting student work products, or formal data collection.

Liza’s classroom: Homework Challenge (2.3). Kari’s classroom: no parallel activity. Comparison: In Liza’s classroom, students participated individually in the homework challenge, doing math tasks. Liza presented content and responded to questions. She occasionally initiated interaction or observed and provided oral feedback to individual students. This activity did include formal data collection.

Liza’s classroom: no parallel activity. Kari’s classroom: Student progress monitoring (1.4). Comparison: In Kari’s classroom, students individually engaged in self assessing based on each homework assignment. Kari occasionally engaged the full class in discussion about their self assessment, asking students to share their self assessment. This activity included formal data collection only once during the observed unit.

Liza’s classroom: Coding Text (1) and Investigation (18.7). Kari’s classroom: Introducing new content (3.8). Comparison: In Liza’s classroom, students individually took notes and coded text as part of new content introduction. They occasionally completed an example math task as they were taking notes and talked with a small group about the notes they were taking. During this activity, Liza presented content and responded to student questions. This activity did not result in students turning in a work product. In Liza’s classroom, the introduction of new content also included students working in small groups to complete math tasks and talk about math tasks. After they had completed several tasks as a small group, they would participate as a full class, with student groups writing their work on the board and Liza using it as a social object to provide oral feedback to the class. In Kari’s classroom, the introduction of new content also included students completing math tasks, but most of the time they did that as individuals with a subsequent full class discussion. Liza’s role during the Investigations included initiating interactions and observing students as they worked in small groups, providing oral feedback and formative questioning. Kari’s roles during introducing new content included initiating interactions with students, but usually asking formative questions of the class as a whole rather than of individual students or small groups of students. These activities did not involve formal data collection in either class. However, in Liza’s classroom, students stored their investigation notes in their notebooks and Liza occasionally reviewed them.

Liza’s classroom: In Class Practice and Partner Switch (14.3). Kari’s classroom: In class practice problems (9.7). Comparison: In Liza’s classroom, in class practice involved students completing and talking about math tasks in small groups. It also involved students talking about math tasks as a full class, during which Liza used student work as a social object and provided oral feedback to the whole class. In Kari’s classroom, in class practice involved students individually completing math tasks and then engaging in full class discussion, which included formative questioning. Liza’s roles involved frequent interaction with student groups, providing oral feedback and initiating formative questioning. Kari’s roles also occasionally involved observing individual students as they worked and occasionally providing oral feedback. She also engaged students in formative questioning as a class. This sometimes involved formal data collection in Liza’s classroom, but not in Kari’s.

Liza’s classroom: Ticket Out (1.0). Kari’s classroom: no parallel activity. Comparison: The ticket out activity in Liza’s classroom included students completing math tasks individually. Liza’s role was primarily to respond to student questions. This did include formal data collection.

Liza’s classroom: Reviewing Ticket Out Results (4; formative assessment episodes may have been undercounted because of audio difficulties). Kari’s classroom: no parallel activity. Comparison: Reviewing ticket out results in Liza’s classroom included students participating as part of a small group in talking about math tasks and sometimes doing an additional math task. Then students participated in a full class discussion about the math tasks during which Liza used student work as a social object and provided oral feedback to the class as a whole. Liza’s role during this activity included interacting with and observing small group discussions, providing oral feedback or initiating formative questioning with the small groups. She also facilitated full class discussions about student work. This did involve formal data collection.

Liza’s classroom: Unit Pre-Test (5). Kari’s classroom: Unit Pre-Test (no formative assessment episodes). Comparison: This activity only included formative assessment episodes in Liza’s classroom. Students completed math tasks individually. Students provided notes about tasks that were difficult for them. The teacher only responded to questions. This did involve formal data collection.

Liza’s classroom: Proficiency Poster (2.5). Kari’s classroom: no parallel activity. Comparison: The proficiency poster activity in Liza’s classroom involved students completing math tasks individually and rating their math task. The teacher only responded to questions. This did involve formal data collection.

Liza’s classroom: no parallel activity. Kari’s classroom: Gap attack (13). Comparison: The “gap attack” activity in Kari’s classroom involved students working in small groups to complete math tasks and talk about math tasks. Kari interacted with and observed small groups or individual students, providing oral feedback and engaging groups in formative questioning. While students turned in their notes (formal data collection), it is unclear if Kari looked at them.

Liza’s classroom: Summative Quiz (4.3). Kari’s classroom: End of Unit Test (no formative assessment episodes). Comparison: In both classrooms, students completed math tasks individually. Both teachers only responded to questions when students raised their hands. This did involve formal data collection in both classrooms.

Liza’s classroom: Quiz Circles (5; formative assessment episodes may have been undercounted due to an audio recording issue). Kari’s classroom: Reviewing unit test results (31) and Correcting items from the unit test (7). Comparison: In Liza’s classroom, quiz circles involved students talking in small groups about math tasks and correcting math tasks (doing tasks). In Kari’s classroom, students individually reviewed their test results and talked about their review of their test results as a full class. Correcting items involved students doing math tasks and talking about math tasks in small groups. Liza’s role during quiz circles included interacting with and observing students as they did math tasks and talked about math tasks (although the content of this interaction was not available because of problems with the audio recording). While Kari’s students corrected items from the test, she initiated interactions with small groups of students, observed their work, provided oral feedback, and initiated formative questioning. The participation structures for quiz circles and students’ correcting items from the unit test were very similar in both classrooms. In both classrooms, these activities resulted in formal data collection.
Additional similarities were evident in the participation structure of a specific learning activity occurring in both classrooms that included formative assessment episodes. The student and teacher roles were similar during summative tests across both classrooms. The students’ roles included analyzing their test results, interpreting them, and using that analysis to correct missed items. The teacher roles involved observing students as they worked, providing oral feedback, and engaging students in formative questioning. Some variation in the participation structures was also evident between the two classrooms. How students participated in learning activity that included formative assessment episodes was different during most class sessions. Most of the learning activities in Liza’s classroom involved students interacting in small groups. This occurred every class session. Students only interacted in small groups in Kari’s classroom during two activities – the gap attack and correcting items from the end of unit test. In addition, these activities only occurred on two days in Kari’s classroom, the day before and the day after the end of unit test. The teacher roles across the two classrooms also varied. Most learning activities in Liza’s classroom that included formative assessment episodes involved her initiating interaction, observing students, providing oral feedback, and formatively questioning students. These teacher roles were prominent during two activities in Kari’s classroom, the gap attack and correcting items from the end of unit test. Full class discussion was a common participation structure across learning activities that included formative assessment episodes in both classrooms. These full
class discussions frequently involved the use of student work as a social object in both classrooms, although how each teacher used student work as a social object was not the same. The types of formative data use that were part of full class discussions in each classroom were also not the same. In Kari's classroom full class discussion almost always included full class formative questioning. In Liza's classroom full class discussion usually included oral feedback. My comparison of the participation structures of learning activities that included formative assessment episodes in the two classrooms illustrated how this component of the social context of classrooms influenced teachers' formative assessment practice. Differences between the two classrooms in learning activity participation structures corresponded with differences in the types of formative data use.

Norms and rules. In this section, I compare the norms and rules that related to the formative assessment episodes occurring in each classroom. The behavioral norms and rules regarding student behavior during all learning activity varied substantially across these two classrooms, as did the rules related to completion of homework assignments. Both types of rules and norms had the potential to influence formative assessment practice.

Behavioral norms. The behavioral norms in Liza's classroom were explicit, and student behavior was consistent with the explicit norms. The behavioral norms in Kari's classroom were not explicit, but were evident in how students behaved.
The primary explicit rule posted on the wall of Liza's classroom was "Do not disrupt others' learning." Liza clearly described her expectation that students participate in all learning activity and support one another's learning. Almost all students engaged in the learning activities that Liza facilitated almost all of the time. They did not get up from their seats during learning activity. They assisted other students when asked. Liza did not tolerate even small deviations from these norms. Adherence to them brought students praise and rewards.

While Kari mentioned the importance of not disrupting other students' learning, she did not enforce this norm. Students frequently got up from their seats during all different types of learning activities. Except during the last few days of the unit, when students were given time to engage in discussion as a full class or with their table groups, their discussions were frequently off topic. The behavioral norms with greatest potential to influence formative assessment practice related to how students behaved during full class discussions. Roughly one quarter of the students in the class did not participate in the full class discussions. Instead, they spent the time talking to others at their tables about other topics or moving around the room to get a Kleenex, sharpen a pencil, get a drink of water, get blank paper, etc. When students talked or got out of their seats at inappropriate times, Kari sometimes asked them to quiet down or called on an individual student to pay attention or to go back to his/her seat. Students did not face any other consequences for these actions, and these behaviors persisted throughout the observed unit.
The behavior norms in Liza's classroom established a context within which Liza spent most class time interacting with small groups of students, providing oral feedback or engaging in formative questioning, while other students completed or talked about math tasks. The behavior norms evident in Kari's classroom did not create this same context most of the time. During typical class sessions, when Kari interacted with one small group, other groups were not productively engaged. Kari did not allow small group interaction to continue for very long without asserting structure and switching back to full class interaction. Two notable exceptions in Kari's classroom were the day before and the day after the end of unit test. On these days, students engaged in the gap attack activity and in correcting their test items in small groups for more than half of the class session. On these days, Kari interacted with small groups for extended periods of time, engaging students in formative questioning and providing oral feedback.

Homework norms. The rules and norms related to homework completion had the potential to influence formative assessment practice because a number of formative assessment episodes in both classrooms used homework as a data source. Both teachers engaged students in activities that required them to use their completed homework assignments during each class session in which the assignments were due. However, the rules related to completion of homework were different across the two classrooms. Liza's rules were clearly communicated, strictly enforced, and had explicit consequences for students. Kari's rules were looser.
The consequences students faced for not completing homework also varied across the classrooms. Students who did not complete their homework assignments in Liza's classroom by the day that they were due (three days a week) were marked down in the grade book and were usually required to call home to notify a parent or guardian during the class session. Their incomplete homework also brought down the class homework completion rate, which had rewards attached to it. Most days, all but one or two students completed their homework assignments. Students faced no consequences for turning in homework late in Kari's classroom, although Kari publicly posted information about missing assignments. It is difficult to estimate how many students started math class without completed homework, even though it was due every day. Liza's homework completion rules created a context within which almost all students were prepared to participate meaningfully in homework circles, an activity that included a number of formative assessment episodes. Kari's homework completion rules were associated with students not completing their homework and not participating in two activities that included a number of formative assessment episodes: solving homework problems as a class and progress monitoring.

A comparison of the evidence related to how classroom social context influences formative assessment practice. My comparison of Kari's and Liza's classrooms illustrated how the classroom social context influenced formative assessment practice. Below I summarize my comparison of the evidence related to each proposition for my second research question. I described the social learning context of both classrooms using the categories
identified in my first proposition. However, one feature of Kari's classroom that influenced formative assessment practice, but which I did not explicitly consider for Liza's classroom, was the physical layout and use of space. This feature contributed to excessive noise in Kari's classroom, which may have influenced how students participated in full class discussion, a learning activity during which the majority of formative assessment episodes occurred.

I compared evidence from both classrooms guided by my second proposition that each component of the social context of the classroom influences formative data use. My comparison focused on the following components: object, tools, participation structures (who participated and their roles), and rules/norms.

Both teachers identified and communicated the object of learning activity with their students; however, variability was evident in the degree to which each teacher and her students shared the learning object for different activities. Kari's description of the major unit learning goal was less cognitively complex than what she identified as the "big ideas" of the unit. This may have contributed to a misalignment between her and her students' learning objects for the unit. Student engagement in Liza's classroom was consistent and high across all of the learning activities during which formative assessment episodes occurred, providing indirect evidence that Liza and her students shared the learning object. Student engagement in Kari's classroom was variable. About one quarter of the students in the class did not engage in full class discussions. This was evidence that students did not share Kari's object for activities that included full class discussions, an activity during which a majority of formative assessment episodes occurred.
Additionally, only about one quarter of the students contributed information about their work and thinking to the full class discussions. This resulted in Kari not collecting data from a significant portion of her students for use in those formative assessment episodes, even though her use of that data in the moment was with the full class. The impact of this misalignment between the students about whose learning she collected data and the students whose learning experiences were affected was greater because Kari spent much of each typical class session engaging students in full class discussion.

The cultural tools used in both classrooms as part of formative assessment episodes were similar; however, how each teacher used some tools was different. Both teachers used tools to facilitate students' making connections between learning targets and tasks used in data collection so that they could self assess. The tasks used in both classrooms illuminated the learning targets on which data collection could focus, and determined the scale of the teachers' uses of the learning data collected with them. Differences in how the teachers used tools to structure small group interaction created different contexts for the teachers to provide feedback or initiate formative questioning with individuals or small groups of students.

I identified numerous incidents of both teachers using student work as a social object. However, how the teachers used student work as a social object was not the same. Liza allowed students to share work even when it included mistakes or demonstrated misunderstandings. Kari did not. This made it possible for Liza to provide oral feedback to the whole class about typical mistakes or misunderstandings immediately after students had made those mistakes. Kari did not have the same opportunity.
The participation structures across the two classrooms for learning activities that included formative assessment episodes varied. I considered student roles, how students participated (individually, small group, and full class), teacher roles, and whether or not formal data were collected. Two similarities across the classrooms involved participation structures that established a context within which formative assessment episodes occurred. The student role in a variety of learning activities in both classrooms included doing or talking about math tasks. This established a context within which a variety of formative assessment episodes occurred in both classrooms. Both teachers engaged their learners in analyzing and interpreting summative test results after the teacher had corrected their tests. This also established a context within which formative assessment episodes occurred subsequent to the summative tests for the units in both classrooms.

Differences in participation structures between the two classrooms were associated with differences in how each teacher used student learning data. Most of the learning activities in Liza's classroom involved students participating in small groups, and most of Liza's formative use of student learning data involved providing oral feedback and/or initiating formative questioning with individual students or small groups of students. Kari only organized students to work in small groups on two days during the unit, and a much smaller percentage of Kari's formative uses of data involved providing oral feedback or initiating formative questioning with individuals or small groups. By contrast, many of the learning activities in Kari's classroom involved students participating in full class discussion, and the majority of Kari's formative uses of learning data included full class formative questioning.
I compared two categories of norms and rules that had the potential to influence formative assessment practices. Those categories included rules and norms related to student behavior during learning activities that included formative assessment episodes, and those related to student completion of homework assignments. Behavioral rules and norms were consistent and explicit across all learning activities in Liza's classroom. Both the behavioral norms/rules and the rules related to homework completion established a context within which almost all students participated in learning activities that included formative assessment episodes almost all of the time. By contrast, behavior rules were less explicit and norms varied in Kari's classroom. Norms evident during full class discussions limited student participation in the formative assessment episodes that occurred during full class discussions, and may have resulted in Kari collecting inaccurate data about the class as a whole. The rules and norms related to homework completion in Kari's classroom were associated with fewer students completing homework assignments when they were due, which limited student participation both in full class discussions focused on those homework assignments and in students' self assessment of their learning for which homework assignments were the data source.

The social context influenced (or was influenced by) the formative assessment practice occurring in both classrooms. However, how the social context influenced formative assessment practice in Liza's and Kari's classrooms was not the same. My comparison of the evidence from both classrooms affirms the importance of considering the social context and how it shapes formative assessment practice.
Chapter VII
Conclusions and Implications

Introduction

A number of researchers (Black & Wiliam, 1998; Hattie, 2009; Hattie & Timperley, 2007; Herman, Osmundson, & Silver, 2010; James et al., 2007; Kluger & DeNisi, 1996; Rodriguez, 2004; Shepard, 2000) and policy makers (CCSSO, 2008; Frohbieter et al., 2011; OECD, 2005) describe formative assessment as a strategy for improving student learning outcomes in K-12 classroom settings. However, critical gaps in the formative assessment literature could limit the success of efforts to implement this practice. The lack of a fully defined theoretical foundation has hampered the conceptualization of the role of formative assessment in teaching and learning (Black & Wiliam, 2009; Brookhart, 2004; Perrenoud, 1998). As a result, definitions of formative assessment have been inconsistent, and the attributes that must be present for formative assessment to be effective and what shapes formative assessment practice within the K-12 context remain unclear.

The purposes of this study were 1) to illustrate how a more fully defined theoretical foundation could deepen understanding of the potential role for formative assessment in learning, 2) to posit a working definition by which to then describe what formative assessment practice looks and sounds like as it is occurring in actual classrooms, and 3) to determine and explain how the classroom social context influences formative assessment practice. I applied a socio cultural perspective of learning to interpret individual and cross case analyses of formative assessment
practices occurring in two different classrooms during a single mathematics instructional unit in each. This study was primarily descriptive and exploratory, and pointed to many areas for future empirical research. Two research questions guided this study: 1) What are the critical attributes of formative assessment practice as it occurs in K-12 classrooms? 2) How does the social context of a K-12 classroom (e.g. tools, participation, student and teacher roles, and rules/norms) influence formative assessment practice in situ? I further identified several propositions related to each research question to actualize my theoretical framework and establish an initial focus for data analysis.

The data collected for this study were part of a larger, four-year, IES-funded research project (PIs Ruiz Primo and Sands). The data used in this study were collected in one upper elementary (5th grade) and one middle level (6th grade) classroom during every class session of a single mathematics instructional unit in each classroom in the 2011-12 school year. My approach to analysis included qualitative coding of a variety of data sources from each classroom, identifying formative assessment episodes that occurred in each classroom, and using pattern analysis to identify the characteristics of those episodes and the social context within which they occurred.

For this chapter, I organized my discussion of the results into two sections, one for each of the research questions guiding the study. Then I provide a section addressing limitations of the study, implications for future research, and my conclusions.
Discussion of the Critical Attributes of Formative Assessment Practice

My first research question focused on what counts as formative assessment practice, and how to recognize it while it is occurring in K-12 classrooms. My discussion of results starts with the first proposition for that question, which explicated my definition of a formative assessment episode and established a framework within which I could interrogate the critical attributes of formative assessment practice occurring within my two case study classrooms. I found the remaining propositions too narrowly defined to organize the remainder of my findings. Thus, I instead provide a description of the role that formative assessment played in instruction and learning in these two classrooms. This establishes the context for the remainder of this section, which focuses on the critical attributes of formative assessment practice organized by each action included in my definition of a formative assessment episode.

Application of the definition of a formative assessment episode. My definition of a formative assessment episode included the following actions: a) the teacher identifying (implicitly or explicitly) the learning target, and clarifying the learning target with the students; b) the teacher collecting data about student learning; c) the teacher and/or students analyzing learning data; d) the teacher and/or students interpreting data about student learning with regard to what it means for students' learning and/or instructional practice; and e) the teacher making adjustments to learning activity, and/or student(s) making adjustments to learning tactics. My approach to identifying formative assessment episodes in situ operationalized the assertions of other authors (Brookhart, 2009; Nichols et al., 2009; Shepard, 2009) that teacher
and/or student use of learning data in forming learning should be the determiner that formative assessment had occurred. This definition and approach resulted in substantial evidence that formative assessment practice occurred in both case study classrooms (248 formative assessment episodes in Liza's classroom during nine class sessions and 190 in Kari's classroom during eleven class sessions). However, my approach did not verify all actions included in my formative assessment episode definition. Evidence from both classrooms supported some changes to that definition. These changes include the following: 1) Specify that the teacher clarifies the learning targets with students when they are part of the current instructional unit; 2) Add students as possible actors collecting student learning data; and 3) Clarify that analysis and interpretation of student learning data may not be observable. In what follows, I elaborate on each of these revisions.

Formative assessment research literature suggests clarifying learning targets and success criteria with students is important, but does not address how clarification relates to the content focus of the current instructional unit (Black & Wiliam, 1998; Sadler, 1989). Liza and Kari described the learning target(s) associated with each formative assessment episode. While they collected data, they did not clarify the learning targets with their students if the associated targets were not part of the current unit content. This may have been intentional on the part of both of these teachers so as not to distract their students from the content focus of the current instructional unit. This evidence supports my inclusion of clarifying learning targets with students as part of the definition of a formative assessment episode, but with qualifications. Whether the scale
of the teacher's data use falls within or outside of the current instructional unit determines whether or not the teacher clarifies the learning target with her students.

My definition of a formative assessment episode inappropriately excluded students as possible data collectors in addition to the teacher. Kari's students corrected some of the tasks on the previous evenings' homework assignments themselves and used that data to self assess regarding their learning in relationship to the topical focus of the homework assignment during almost every class session. These formative assessment episodes involved students collecting and using their own learning data before they provided that data to the teacher. This evidence is inconsistent with the statement in the first proposition that implies by omission that only teachers collect student learning data. Including students as data collectors has not been the explicit focus of other research.

Finally, the majority of formative assessment episodes in both classrooms involved the teachers analyzing and interpreting student learning data 'on the fly'. This was associated with both teachers using informal data collection methods for the vast majority of formative assessment episodes, all of which occurred in the moment, or during the current instructional activity. Evidence of their analysis and interpretation was apparent in the teachers' use of student learning data; however, I had to infer that it happened. While several studies have considered informal data collection and informal formative assessment (see, for example, Ruiz Primo & Furtak, 2006, 2007), the authors did not specifically address informal analysis and interpretation.
The role of formative assessment in instructional practice. Both teachers in this study spent a majority of their class time using student learning data. The differences in their data uses revealed key differences in the role of formative assessment practice in each teacher's instructional approach, and how formative assessment shaped the most frequently occurring instructional strategies in each classroom.

From a socio cultural perspective (Vygotsky, 1978; Wells, 1999), formative assessment practice mediated Liza's instructional planning. Daily, Liza: 1) planned instruction based on her analysis and interpretation of student learning data from prior class sessions; 2) indicated that she could not describe what she had planned for the next class session until she reviewed student learning data from the current class session; and 3) planned to adjust learning activity during each class session based on student learning data. Formative assessment also mediated most of Liza's instructional strategies. Liza constantly used data during the learning process to make micro adjustments to the learning experiences of individuals or groups of students in her classroom. She provided or facilitated the provision of assistance within individual students' zones of proximal development (Gallimore & Tharp, 1990; Wells, 1999). As Moss (2008) described it, in Liza's classroom formative assessment became "a way of looking at the evidence available for the ongoing interactions in the learning environment" (p. 224).

Formative assessment only partially mediated Kari's instructional planning. From a socio cultural perspective (Vygotsky, 1978; Wells, 1999), formative assessment practice mediated some of Kari's instructional planning and her students' learning
tactics on two days during the observed unit. Kari seldom deviated from her unit plan or her pre-planned daily learning activities based on student learning data, making only small adjustments a few times during the unit. However, Kari established a built-in mechanism as part of her instructional planning that resulted in adjustments to students' learning experiences without her needing to change lesson plans during the unit. In addition, Kari planned for her students to use their progressive, daily self assessment of their learning across the unit to determine their learning activities for the day before the end of unit test. Based on the definition provided by Lave and Wenger (1991), the students in Kari's classroom were legitimate peripheral participants in structuring learning activities during two class sessions.

Formative assessment inconsistently mediated Kari's instructional strategies. While she collected data during the learning process, she did not use that data to provide assistance targeted at her students' zones of proximal development during most class sessions. The majority of Kari's formative uses of student learning data during typical class sessions involved formative questioning with the class as a whole during full class discussions. She based most of her formative data use on data from only those students who actively participated in full class discussions (roughly one quarter of the students did not participate at all and only about one quarter contributed information about their learning to the discussions). Despite evidence that her students' learning and performance on learning tasks was variable, Kari did not vary the instructional assistance she provided to different students. This makes it unlikely that her instructional assistance targeted each of her students' zones of proximal development
(Vygotsky, 1978), even though she collected data during the learning process (Shepard, 2000; Gipps, 2002). The day before and after the end of unit test, Kari shifted her formative assessment practices to focus on individuals and small groups of students. On these two days, Kari collected data about student learning during the learning process and either provided or facilitated the provision of assistance within individual students' zones of proximal development (Gallimore & Tharp, 1990; Wells, 1999). On these days her classroom looked and sounded like Liza's classroom did most days.

The differences in the role of formative assessment in learning in these two classrooms made apparent that what counts as formative assessment practice is a nuanced question. From a socio cultural perspective, it depends on determining what formative assessment practice mediates (Vygotsky, 1978; Wells, 1999): instructional planning, instructional strategies and content, or student learning tactics. It also depends on determining whether mediation actually happens. The idea that what counts as formative assessment practice depends on what the practice mediates represents a new contribution to the literature on how to identify formative assessment practice in situ. This suggests other researchers should explicitly consider the question of what formative assessment is mediating in their identification of formative assessment practices occurring within classroom settings. This also suggests that teacher professional development related to formative assessment should make a distinction regarding what the data use mediates.
Critical attributes of formative assessment episode elements. I identified over 400 formative assessment episodes across my two case study classrooms. Here, I organize my discussion of the results of my analysis of these episodes by the actions included in a formative assessment episode: 1) Identifying and clarifying learning targets with students; 2) Collecting student learning data; 3) Analyzing and interpreting student learning data; and 4) Using student learning data. Since my second proposition related to the attributes of data collection, my discussion of my results related to that proposition is included in the section on data collection. My third proposition referred to the scale of data use. My discussion of results related to that proposition follows my more general discussion of results related to types of data use. I summarize this section with a table that describes my results regarding the attributes of each action included in a formative assessment episode.

Identifying and clarifying learning targets with students. How the learning targets within the instructional units related to the major learning goals of a unit was a critical attribute of formative assessment practice across the instructional units in these two classrooms. This is consistent with researchers who emphasize the importance of establishing progressions of learning as a foundation for formative assessment practice (Heritage, date unknown). Both teachers structured their learning targets similarly: each had a major unit learning goal, which provided the structure for targets for the learning activities that were part of the unit. However, how each teacher related daily or activity level learning targets to their major unit learning goal(s) was different. Liza identified "daily learning objectives" related to the content of
the unit; each daily objective was a subcomponent or a step towards one of the major unit learning goals. Kari identified learning topics during each class session to which the major unit learning goal applied. For example, Kari identified the properties of equilateral triangles on one day and the properties of parallelograms on another. This difference in how each teacher approached activity or daily learning objectives could be a reflection of the different content focus of the two observed units (operations with fractions vs. properties of triangles and quadrilaterals). Alternatively, this could reflect differences in how each teacher thought about structuring mathematics learning, with Liza structuring learning as an intentional sequence of related steps and Kari structuring learning as a series of topics. Analyzing their conceptualization of how to structure learning was beyond the scope of this study. Future research should explicitly consider how the differences in teachers' construction of the relationship between major unit learning goals and the objectives of daily activity relate to their formative assessment practice.

Another potentially important attribute of the learning target on which formative assessment episodes focused was whether it was part of the current unit content or outside the current unit content. Both teachers collected data that focused on learning targets not associated with the content of the observed instructional unit and used that data formatively to determine what to do next outside of the current unit content. This is consistent with Moss' (2008) assertion that determining the scales of decisions teachers make based on learning data is critical to understanding their formative assessment practice.
Both teachers posted the major unit learning goals, posted a representation of daily learning targets, interacted with their learners about major unit learning goals and vocabulary associated with the major unit learning goals, explicitly associated learning activity with learning targets, and required students to write down the learning targets associated with their self assessment. These actions all pointed to critical attributes of clarifying learning targets with students in these classrooms, consistent with other authors who have considered teacher actions related to clarifying learning targets with students as part of formative uses of student learning data (Black et al., 2003; Sebba et al., 2008). However, several authors suggest teachers should go further in clarifying learning targets with students than these teachers did (Sadler, 1989; Andrade, Du, & Wang, 2008), especially when engaging students in self assessment. Specifically, these researchers suggest that teachers should give students rubrics or exemplars which demonstrate what it means to have met the learning target. Much of the self assessment research to date has focused on writing rather than math, so the fact that neither of these teachers engaged in these actions may be a function of the content area they were teaching, math. Teachers more typically use rubrics to evaluate writing or extended written responses to tasks, and students often do less extended writing in mathematics. However, with implementation of newer standards, students will be expected to do more extended writing in mathematics. As teachers implement newer standards, the use of rubrics and exemplars in math should become the focus of future research and of professional development aimed at teacher formative assessment practice in math.
Collecting student learning data. Several potentially critical attributes of data collection emerged from these two classrooms. They included the following: what data collection method the teachers used and the frequency with which they used informal data collection methods, the degree to which informal data collection was still planned, the degree of alignment between the tasks used to collect learning data and the learning targets that were the focus of the formative assessment episode, the degree of alignment between the students about whose learning data were collected and the students whose learning experiences were adjusted based on the data, the relationship between the data collection method and type of data use, and the difference between the data source (i.e. student product) and data collection method.

The evidence presented from both classrooms supported my second proposition that both formal and informal data collection occurs as part of formative assessment practice. In these two classrooms, where formative assessment practice occurred frequently, a variety of assessment methods, including informal methods, were used during (in addition to after) the learning activity to collect information about student learning. In both classrooms, the vast majority of formative assessment episodes used learning data the teachers collected informally (over 90% for Liza and over 71% for Kari). This meant that no formal record resulted from the data collection. This finding supports assertions by other authors that the definition of what counts as assessment should include informal data collection methods (Jordan & Putz, 2004; Moss, 2008; Ruiz Primo & Furtak, 2004, 2006, 2007). This evidence also supports those
who have emphasized the importance of informal data collection as key to a socio cultural conceptualization of assessment and using assessment results to identify students' zone of proximal development (Shepard, 2000; Gipps, 2002).

When the teachers collected data informally, the degree to which the teacher planned the data collection, and whether that planning focused on from which students to collect data or on which tasks, were additional attributes of informal data collection in these classrooms. While the vast majority of formative assessment episodes did not result in a formal record of the data collection in both classrooms, that does not necessarily mean Liza did not engage in planning related to her informal data collection. Frequently Liza started the class session holding a sticky note with a list of the names of students she planned to check on during the warm up or homework circles. Liza planned about whose learning she would informally collect data. Before the class session in which students brought their homework, Kari sometimes identified problems from the prior day's homework to do as a class, to see whether students had struggled with them. In other words, she planned about which learning targets she would informally collect data as part of full class formative questioning, based on her review of what she had assigned as homework. This affirms the findings of other researchers who have considered teacher planning of informal data collection (Black et al., 2003; Ruiz Primo & Furtak, 2004, 2007) and suggests that the focus of that planning (which students or which tasks) may be important to consider.

Evidence from both classrooms also pointed to additional potentially critical attributes of data collection as part of formative assessment episodes not reflected in
the study propositions. In both classrooms, the tasks used to collect student learning data (including tasks that were part of the data source used in informal data collection) aligned with the learning targets that were the focus of the formative assessment episodes. This alignment is a critical attribute of formative assessment practice, not identified in my initial propositions, but consistent with common conceptualizations of validity in assessment (Pellegrino, Chudowsky, & Glaser, 2001; Russell & Airasian, 2012).

The degree of alignment between the students about whose learning data were collected (individuals, small groups or full class) and the students whose learning experiences the associated formative data use affected was another key attribute of data collection. During typical class sessions, Kari collected data from the students that responded to the formative questions she asked, but used that data as if it came from the whole class. This pointed to potential inaccuracies in the data upon which she based her most frequent formative assessment practice: full class formative questioning. She also depended on students to volunteer which homework tasks they had struggled with in order to focus her formative questioning and the additional help she provided. As a result, she may have underestimated the difficulties that the class as a whole had with the homework tasks, and may have failed to provide assistance that some students needed. Kari simply left some students out of her data collection. The degree of alignment between the students about whose learning data were collected (individuals, small groups or full class) and the students whose learning experiences the associated formative data use affected has not been the focus of other research related to formative questioning (Black et al., 2003; Chin, 2006, 2007; Ruiz Primo & Furtak, 2004, 2006; Ruiz Primo,
2011). However, the evidence from Kari's classroom suggests it is an important attribute to consider in future research.

The relationship between the data collection method and type of data use may be another key attribute of informal data collection. While both teachers used informal methods to collect data, their dominant method was not the same. Liza primarily used observation whereas Kari primarily used questioning. Each teacher's method for informally collecting data about student learning was consistent with her dominant approach to incorporating formative assessment practice into her instruction.

In these two classrooms, it was critical to identify the data source (possibly a student work product like a homework assignment) and the method of data collection separately, as distinct attributes of the teachers' data collection. Each teacher informally collected data (through observation or questioning) based on work products students had already generated or were in the process of generating, but which the teacher had not yet collected. The teachers gathered information about student work and thinking about their work informally and used that informal data as part of several formative assessment episodes before the teachers collected or reviewed the work product. For example, this occurred when Kari engaged students in formative questioning about their homework before they turned in the homework. It also occurred when Liza observed students discussing their homework assignments, collected data through observation, and provided oral feedback about their homework and their discussion before students turned in their homework assignments. Other authors (Black et al., 2003; Chin, 2006, 2007; Ruiz Primo & Furtak, 2004, 2006; Ruiz Primo, 2011) have
not explicitly considered the relationship between student work products and informal data collection. Evidence from these classrooms supports treating the data source and data collection method as separate attributes of formative assessment episodes.

Analyzing and interpreting student learning data. Evidence from these classrooms affirms the need for additional research regarding 'on the fly' data analysis and interpretation. When they collected data informally, both teachers appeared to analyze and interpret that data 'on the fly'. I inferred their analysis and interpretation, as it was not observable. Sometimes both teachers talked about their interpretation of data collected informally during post observation interviews; however, this was not enough of a specific focus of what we asked about in those interviews to develop a description of how they analyzed and interpreted data 'on the fly'. Several researchers have considered teacher interpretation of student learning data as part of formative assessment practice (Black et al., 2005; Frohbieter et al., 2011; Herman et al., 2011; Shavelson et al., 2008). However, they paid little attention to this kind of on the fly analysis and interpretation. Ruiz Primo and Furtak (2004, 2006) are an exception. They explicitly considered teachers' informal formative assessment practice, including 'on the fly' analysis and interpretation, and called for additional research regarding informal formative assessment. My study suggests that additional research must more explicitly probe teachers about their 'on the fly' analysis and interpretation to discern what inferences teachers are making and based on what evidence.
The evidence from these classrooms suggests that the focus of teachers' interpretation of student learning data (identifying tasks or parts of tasks with which students struggled, or patterns in performance across groups of students) is a critical attribute of formative assessment practice. In both classrooms, when the teacher was the one using the data, the teachers' initial analyses of formally collected student work products were similar. They identified correct and incorrect responses, provided comments about incorrect responses, and assigned an overall rating. Teachers providing students information about their correct and incorrect responses is one type of feedback identified by empirical literature as associated with improvements in student learning results (Bangert Drowns et al., 1991; Li et al., 2011). Providing an overall rating, or grade, is not consistent with the empirical literature on effective feedback (Bangert Drowns et al., 1991; Butler, 1987, 1988; Butler & Nisan, 1986; Kluger & DeNisi, 1996; Lipnevich & Smith, 2008; Pointe et al., 2009).

Liza engaged in additional interpretation of student work products, beyond what Kari did. She identified tasks, or parts of tasks, with which multiple students struggled and identified patterns in how groups of students performed on different tasks. Her interpretation regarding the tasks or parts of tasks with which multiple students struggled corresponded with her decisions to provide additional learning experiences related to those tasks for some or all of her students. Her interpretation with regard to patterns in the performance of groups of students corresponded with her identification of students for additional instructional assistance, either from her or through activating students as resources for one another. Liza's additional interpretation of data expanded the types of uses of student learning data in which she
engaged. Kari did not engage in similar interpretation of student learning data (except with the end of unit test results) and did not adjust her plans for instruction or intentionally group students to activate them as resources for one another. This confirms other research that considered the focus of teacher interpretation of student learning data as critical, if not in these more simplified terms (Black et al., 2005; Frohbieter et al., 2011; Herman et al., 2011; Shavelson et al., 2008).

Using student learning data. The types of data use evident in these two classrooms roughly corresponded with those identified by Black and Wiliam in their influential 1998 review, and expanded and clarified by Wiliam and Thompson (2007) and Wiliam (2012), confirming these initial categories of data use. I found the following types of data use occurred multiple times in one or both classrooms: activating students as resources for one another (also known as peer assessment), adjusting instruction (including selecting students for intervention and grouping students), formative feedback (oral to individuals or small groups, oral to the class as a whole, and written), formative questioning (with individuals or small groups or with the class as a whole), and students self assessing and adjusting their learning tactics. Liza also combined some of these different types of data use. Liza's and Kari's specific strategies revealed potentially significant attributes for each type of data use. The attributes that were identifiable in these two classrooms point to attributes of formative data use that could be critical in any classroom, but especially in the context of mathematics instruction. Table 7.1 provides a summary of the attributes of different types of formative data use.
Table 7.1 Potentially Critical Attributes of Different Types of Data Use

Type of Data Use: Activating students as resources for one another (also known as peer assessment)
Critical Attributes:
- Whether or not student learning data determined student seating
- Who, the teacher or the students, determined which students are activated as resources for one another
- How students were grouped: based on prior performance, or based on students' perceptions regarding learning needs
- Whether student roles were differentiated when student performance is different
- What tools mediated the practice (oral directions, written directions, discussion protocols)
- The focus of student interaction: doing a task, reviewing a task already completed independently, talking about a task already completed independently
- Whether students were activated as resources for one another when data were collected about tasks in students' zone of proximal development but not for tasks for which mastery was already expected
- Whether students were activated as resources for one another on the fly or if it was planned in advance

Type of Data Use: Adjusting Instruction (including selecting students for intervention and grouping students)
Critical Attributes:
- Whether or not the teacher used student learning data before class sessions to adjust daily instructional plans
- Whether or not the teacher used student learning data during a class session to adjust learning activity
- The type of data (actual student work, or student perception of their learning) upon which instructional adjustments were based
- Whether or not the teacher grouped students to receive differentiated and/or additional assistance
- If any built-in mechanisms resulted in adjustments to students' learning experiences, including the content focus of learning activity, based on student learning data

Type of Data Use: Formative feedback (oral to individuals or small groups, oral to the class as a whole, and written)
Critical Attributes:
- Whether the teacher provided feedback orally or in writing
- To which students (individuals, small groups of students, or the whole class) the teacher provided oral feedback
- The data source the feedback referenced: in-process tasks, student talk about tasks completed before, student work products collected during a prior class session, student work being used as a social object
- How data were collected: through observation, questioning or student products
- Whether or not just-in-time written feedback accompanied oral feedback
Type of Data Use: Formative feedback (continued)
Critical Attributes:
- Whether the students to whom the teacher provided oral feedback were selected in advance based on prior performance data
- Whether the teacher intended the students to use the feedback or if it served as an explanation for her use of data to make instructional adjustments
- The likelihood that students read written feedback
- Whether or not specific opportunities were provided for students to read written feedback
- Whether or not specific opportunities were provided for students to use written feedback
- Whether or not descriptive written feedback was accompanied by evaluative feedback (e.g. a rating or a grade)
- When (in relationship to data collection) written feedback was provided: on the fly or during a class session subsequent to when the learning product was produced

Type of Data Use: Formative questioning (individual or small group and full class)
Critical Attributes:
- With which students (individuals, small groups of students, or the whole class) the teacher facilitated formative questioning
- What data source (an in-process math task, talk about an already completed math task, an already completed math task) was used in data collection that preceded the formative questioning
- The sequence of questions the teacher asked and on what the questions focused: student perception of their progress, student work, student explanation of their thinking, students' next steps in the process of completing a task, student understanding or perception of other students' work
- Whether questioning included additional probing based on earlier responses
- The degree to which formative questioning served to activate students as resources for one another
- Whether or not student work was used as a social object as part of questioning
- The degree to which the whole class engaged in questioning when it was focused on the class as a whole

Type of Data Use: Students self assessing and adjusting their learning tactics
Critical Attributes:
- What data source(s) students used to self assess
- About what learning target students self assessed
- What tools were used to scaffold students' self assessment
- About what students were making inferences: their learning in relationship to a target and/or their effort to learn
- Whether students analyzed data and on what that analysis focused
- What use students made of their self assessment, and if it included correcting mistakes
- If and how the teacher used student self assessment information
Activating students as resources for one another (peer assessment). Several potentially important attributes of activating students as resources for one another emerged from the two case study classrooms, including: under what conditions it is appropriate to activate students as resources for one another, how prior learning data is used, whether or not students are given differentiated roles, whether or not tools or protocols mediate the practice, if it is planned or 'on the fly', how teachers use students' seating as a grouping mechanism, and whether the teacher or students decide with whom they are grouped.

Liza made a clear distinction between when she was collecting data about what students could do on their own (tasks they should have mastered) and when she was collecting data about what students could do with assistance from her or their peers (tasks in students' zone of proximal development). When she was collecting data about what students could do with assistance, she always activated students as resources for one another. Liza intentionally grouped students based on their learning data as part of activating them as resources for one another. When student groups included one high and one low performer, Liza described students' roles differentially, with one student playing a teacher role and others a student role. She provided oral and written (posted on the board) protocols to guide students' interactions when activating them as resources for one another. Sometimes Liza also activated students as resources for one another 'on the fly'. This occurred while students were working in small groups on math tasks. If she noticed one group was substantially ahead of other groups on the task, she split them up, sending each student from the group to work with another group with explicit directions about what
help to provide. All of these were attributes of Liza's activation of students as resources for one another. Even though she only activated students as resources for one another on two days, Kari also orally described differential student roles for students as they worked in groups, with one student playing a teacher role and the others a student role, and provided specific directions to some students regarding the aspects of tasks about which they could provide the most assistance to their peers.

Liza and Kari differed in how they grouped students while activating them as resources for one another, although both teachers used student learning data as part of the process. Liza used student learning data to pair one student who did well with one or more who did poorly. Kari's students selected with whom they would work based on who did well on tasks with which they struggled. Who determined which students are activated as resources for one another, the teacher or the students themselves, was another attribute of this type of data use emerging from Liza's and Kari's classrooms.

Several of the attributes of activating students as resources for one another evident in Liza's and Kari's classrooms aligned with attributes reported in empirical research as having a positive impact on student learning. Other researchers have focused on how students are paired for peer assessment (Topping, 2010), although how teachers subsequently described student roles has not been a focus. Others have also focused on the content of the peer feedback messages (Orsmond, Merry, & Callaghan, 2004; Cho & MacArthur, 2010), which was not something I investigated as part of this study. The evidence from Liza's and Kari's classrooms confirms other results which suggested that more practice with peer assessment can lead to greater student
acceptance of peer feedback (Topping, 2010). The evidence from these classrooms also supports the idea that the tools teachers use to mediate the practice may be important to consider (Topping, 2010). Finally, most peer assessment research has focused on the context of writing instruction, with peer assessment occurring while students were in the process of completing drafts of a writing task (Cho & MacArthur, 2010; Orsmond, Merry & Callaghan, 2004; Topping, 1998, 2010; Zundert et al., 2010). Although her content area was math, Liza made a similar distinction regarding when she activated students as resources for one another: when they were engaged in tasks within their zone of proximal development but before mastery was expected. Some of the attributes I identified in Liza's peer assessment practice, and to a lesser degree in Kari's, have not been the explicit focus of other empirical studies, but are important to consider in future research, including whether students are seated in a way that is designed to provide a structure for peer feedback, who determines which students are partnered with one another (the teacher or the students themselves), and whether students were activated as resources for one another on the fly or if it was planned in advance.

Adjusting Instruction. As discussed above, the differences between Liza and Kari with regard to their use of student learning data to adjust instruction were more evident than the similarities. These differences included the degree to which and how formative assessment mediated each teacher's instructional planning and just-in-time adjustments to learning activity; each pointed to potentially critical attributes of this type of data use. Other researchers have considered teacher use of student learning data to adjust daily instructional plans (Frohbieter et al., 2011; Shavelson et al., 2008), the type of learning
421 data upon which instructional adjustments were based (Frohbieter et al., 2011; Herman, Osmundson, & Silver, 2010; Herman et al., 2011; Shavelson et al., 2008), whether or not teachers grouped students to receive differentiated and/or additional assistance (Frohbieter et al., 2011), and whether or not the teacher used student learning data during a class session to adjust learning activity (Black et al., 2003; Frohbieter et al., 2011; Shavelson et al., 2008). I found evidence in LizaÂ’s and KariÂ’s classrooms confirming these are important attributes of using student learning to adjust instructional practice. One attribute that emerged from KariÂ’s classroom which has not been the explicit focus of other research, but may be important to consider in future research is the degree to which the teacher uses built in mechanisms to adjust students learning experiences based on their learning data. Formative feedback. Much of the empirical feedback research literature has focused on written feedback (Bangert Drowns et al., 1991; Butler, 1987, 1988; Butler & Nisan, 1986; Dweck, 2000; Elwar & Corno, 1995; Kluger & DeNisi, 1996; Li et al., 2011; Lipnevich & Smith, 2008; Pointe et al., 2009). In her literature review and book for practitioners, Brookhart (2009) included a single chapter on oral feedback, which suggested that the research on written feedback content should apply to oral feedback as well. In that chapter she suggested that teachers had options regarding to which students (individuals, small groups or the full class) they provided oral feedback, but did not provide additional details regarding the attributes of effective feedback provided to these different audiences. The dominant mode of feedback in both LizaÂ’s and KariÂ’s classroom was oral. Thus, the evidence from these classrooms provide a preliminary list

of critical attributes of oral feedback that adds to this literature and suggests areas for future research. These critical attributes of oral feedback included the following: 1) the data source to which the feedback referred (tasks or student talk about tasks); 2) how data were collected (through observation, questioning, or student products); 3) whether just in time written feedback accompanied the oral feedback; and 4) whether the students to whom the teacher provided oral feedback were selected in advance based on prior performance. Liza's most frequent use of student learning data was to provide oral feedback to individuals or small groups of students (153 episodes across 9 class sessions). Liza's emphasis on oral feedback to individuals or small groups affirms Brookhart's (2008) assertion that interactive oral feedback is the best format for the delivery of feedback when it is possible. Almost always, Liza's oral feedback referenced math tasks students were currently completing, or student talk about math tasks they had completed the evening before, and used data collected through observation. On a couple of occasions, her oral feedback referenced prior work and the relationship between that work and the current student work. This is consistent with a number of authors (Butler & Winne, 1995; Hattie & Timperley, 2007; Kluger & DeNisi, 1996; Nicol & Macfarlane-Dick, 2006) who suggest that providing feedback that references prior performance is beneficial, especially to the degree that it supports student self regulation. Sometimes Liza accompanied her oral feedback with just in time written feedback. I found no other authors who considered this just in time written feedback, although this practice is consistent with research that suggests the proximity in time between students'

completing a task and receiving feedback about a task has an impact on students' use of the feedback (Brookhart, 2008). Liza also occasionally planned which students she would observe and to which students she would provide oral feedback based on learning data collected the day before. This type of nesting of formative assessment practices has not been the explicit focus of other research, but may be an important attribute to consider for future research. Although Kari provided oral feedback to individuals or small groups of students less frequently than Liza did (16 replications across 11 class sessions), some attributes of her practice were similar. Almost always, Kari's oral feedback to individuals or small groups of students referenced math tasks the students were currently completing and used data collected through observation. Also like Liza, on a couple of occasions, Kari provided feedback to an individual student that referenced a work product they had turned in previously. Both teachers also provided oral feedback to the full class during almost every class session (31 replications across nine sessions in Liza's classroom and 10 replications across eleven sessions in Kari's classroom). Critical attributes of oral feedback to the whole class that emerged from these classrooms included the following: 1) the data source to which the oral feedback referred (work products previously completed, tasks in which students were currently engaged, student work used as a social object); 2) whether the teacher intended for students to use the feedback, or provided it as an explanation to the class regarding how she used their data; and 3) whether the feedback was provided about student work while using it as a social object. This final attribute was consistent with

Ruiz-Primo and Furtak's (2004, 2007) findings that using a strategy of comparing and contrasting student responses as part of "assessment conversations" was associated with improved student learning outcomes. Both teachers provided oral feedback to the class about data collected through student work products from a prior class session, or about observations of students as they engaged in math tasks during the current class session. Liza frequently provided oral feedback about student work while using it as a social object. Liza also provided oral feedback as part of explaining why she was making adjustments to student learning activity. In these instances, Liza did not expect students to use the feedback. How both teachers provided written feedback to individual students raised several questions regarding what counts as formative written feedback, including the following: If descriptive feedback accompanies a grade, does it count? If the feedback does not inform learning, does it count? If students don't have an opportunity to read or to use the feedback, does it count? The empirical feedback literature has focused more on the content of the feedback than the format (Bangert-Drowns et al., 1991; Hattie & Timperley, 2007). Consistent with some of the recommendations from Brookhart (2008), the evidence from these classrooms suggested that some additional attributes, beyond the content of the feedback message, should be considered to determine whether or not written feedback is formative. These attributes included the following: the likelihood that students read the feedback, whether or not the teacher provided students specific opportunities to read and use the feedback, and whether the teacher provided written feedback 'on the fly'.

Both teachers frequently provided descriptive written feedback on student work. However, they always included a grade or rating with the descriptive feedback, and only gave students a specific opportunity to use the feedback on a few occasions. This practice was inconsistent with the many research studies which indicate that grading or normative feedback is not effective (Butler, 1987, 1988; Butler & Nisan, 1986; Lipnevich & Smith, 2008; Pointe et al., 2009) and that descriptive feedback accompanied by a grade is largely ignored by students (Butler, 1987, 1988; Butler & Nisan, 1986). The timing of much of the written feedback provided by both teachers also diminished the likelihood that students actually used the feedback. By the time Kari's students received her written feedback, she had already moved on to other learning topics. Only twice during the observed unit did Kari return student products with written feedback immediately before students had an opportunity to use the feedback. Liza returned student work with written feedback on it a day or two after students completed it, but also did not give her students time to read her written feedback or direct them to do so. Only once during the observed unit did Liza structure learning activity in a way that required students to use written feedback. The written feedback both teachers gave their students about their summative test results was different. Both teachers identified correct and incorrect answers and indicated what students had answered incorrectly. Both expected students to use that information in making meaning of their test results. These practices were much more consistent with research on effective written feedback (Kluger & DeNisi, 1996; Li et al., 2011).

Liza carried sticky notes with her as she provided oral feedback to individuals and groups of students, and sometimes left a sticky note with written information on it as part of her provision of oral feedback. Sometimes she also wrote directly on students' work. This 'on the fly' written feedback was read and used by students immediately, but didn't conform to definitions of what counts as written feedback in the literature. However, it is a novel approach that future research and practitioners should consider. Formative questioning (individual, small group, and full class). Differences with regard to which students – individuals, small groups, or the full class – each teacher engaged in formative questioning suggest this was an important attribute of this type of data use. Liza frequently engaged individuals or small groups of students in formative questioning. This occurred 24 times independently from other data uses, and an additional 34 times in conjunction with oral feedback, across nine class sessions. Kari engaged individuals and small groups of students in formative questioning 33 times across 11 class sessions. Full class formative questioning was a dominant use of student learning data in Kari's classroom, while Liza never engaged her full class in formative questioning. Other researchers have considered questioning with small groups and the full class together, not explicitly identifying differences based on student grouping (Chin, 2006, 2007; Ruiz-Primo & Furtak, 2004, 2008). The many replications in both classrooms of the teachers engaging individuals or small groups in formative questioning made it possible to distinguish a number of common and potentially important attributes of this type of use of student learning data. Similar to results reported by other researchers (Chin, 2006, 2007; Ruiz-Primo & Furtak,

2004, 2008), the evidence from these classrooms affirms that the types of questions and the progression of questions across an instance of formative questioning with individuals or small groups are important attributes of this type of data use. Both teachers began by observing and asking a student or group of students a question about a math task they were completing, or about which several students were talking. The initial question usually focused on how it was going or asked the student(s) to explain their work or their thinking about their work. Sometimes this questioning moved into scaffolding, such as when the teacher asked the student(s) what his/her next step ought to be. Liza's questions also sometimes involved her facilitating dialogue among the students in the small group, asking one student to explain his or her thinking to another student, or what he or she thought about another student's answer. How Liza engaged in formative questioning with her students established what Torrance and Pryor (1998) described as a "collaborative environment" (p. 150) in which students develop capacity to evaluate the quality of their own work and as part of which student to student engagement enhances student performance and can be described as internalization of social interaction. The basic sequence described above played out across many replications in both classrooms. Only Kari's classroom provided opportunities for me to identify attributes of full class formative questioning. The student data source on which the questioning focused, how the teacher collected data for questioning, and the sequence and progression of different types of questions were all attributes of Kari's full class formative questioning. Her full class questioning focused on tasks students had completed the

evening before, or tasks they were in the process of completing. She started with a general question and added more specific questions based on student answers to the initial question (probing based on earlier responses). These attributes were similar to those examined by other researchers (Ruiz-Primo & Furtak, 2004, 2008). How teachers use student work as a social object during full class formative questioning is also a critical attribute of this type of data use. While Ruiz-Primo and Furtak (2004, 2008) referenced teacher comparison of students' correct and incorrect responses as an effective attribute of assessment conversations (their term for full class or group formative questioning), they didn't label this as a use of student work as a social object. Kari used student work as a social object as part of full class questioning, but only wrote correct student answers on the board. If a student answer was incorrect, she did not share it and frequently asked a different student to correct the mistake. Using student work that includes mistakes makes the student work a more useful tool and expands the formative uses of the student work. Teacher hesitance to show mistakes underestimates the value for students and may limit which students relate to the student work. This suggests it may be important to explicitly focus on the use of student work with mistakes as a feature of more developed formative assessment practice. Evidence from Kari's classroom suggests that a final important attribute of full class questioning may be the degree to which the full class engaged in the questioning. Because some of the same students consistently did not engage in full class questioning, Kari consistently adjusted her instruction without considering the needs of all of her

students. While this attribute addresses the degree to which this type of formative data use mediates student learning, it has not been the explicit focus of other researchers (Chin, 2006, 2007; Ruiz-Primo & Furtak, 2004, 2008). Students self assessing and adjusting their learning tactics. My definition of a formative assessment episode helped me determine what to count as student self assessment. It required me to consider what data were being collected and who was using the data and how. When the data collected were student perceptions (not based on a specific data source) and the students were not the users of the data, I did not count the data use as student self assessment. Other researchers have made similar distinctions in how they focused their self assessment studies (Andrade, Du, & Wang, 2008; McDonald & Boud, 2003; Ross et al., 1999, 2002; Sebba et al., 2008). Both Liza and Kari occasionally (less than once per class session) collected data from students about their perception of their learning 'on the fly'. They asked their students to raise their thumbs, indicate with zero to five fingers, or raise their hands to indicate their confidence in or difficulty with completing a task or meeting a learning target. If they used the data, it was the teachers rather than the students who used it to adjust the current or subsequent learning activity in some way. If appropriate, I counted these episodes as the teacher adjusting instruction but not as student self assessment. Consistent with other researchers (Andrade, Du, & Wang, 2008; McDonald & Boud, 2003; Ross et al., 1999, 2002; Sebba et al., 2008), I counted as student self assessment the students' reviewing formal data about their learning and rating their learning in relationship to the associated learning target. This occurred daily in Kari's classroom and

frequently in Liza's. In both classrooms, students' self assessment was formal, resulted in a written record, and was based on a student product. Other researchers have considered tools such as rubrics or evaluation criteria to be critical components of self assessment practice leading to improved learning outcomes (Andrade, Du, & Wang, 2008; McDonald & Boud, 2003; Ross et al., 1999, 2002). Evidence from these classrooms affirms that the tools students use in self assessment are another attribute of this type of data use. In both classrooms, teacher created tools mediated students' self assessment. These tools varied based on the student work product upon which students based their self assessment. Differences in how Kari and Liza guided their students' interpretations of learning data made evident that this is another critical attribute of self assessment. Did students interpret their learning data in reference to the degree to which they met a learning target only? Did their interpretation also include the degree to which their effort affected their learning? Both teachers expected students to provide information about what they did or did not learn as part of self assessment. Kari also expected her students to interpret their learning data in reference to their efforts to learn. Liza only asked students to make that connection a couple of times. In both classrooms, student self assessment of summative test results was more elaborate. However, each teacher had her students analyze their summative test results differently. Liza's students determined whether each incorrect response represented a small mistake or one that required additional help from her or a classmate. This allowed Liza's students to identify on which specific tasks they needed additional assistance. Kari's students grouped

responses by the learning target on which they focused and created a score by learning target. This allowed Kari's students to determine which learning targets represented strengths and which represented remaining challenges. This level of detail regarding how students analyze and interpret their data as part of self assessment is a potentially important attribute of student self assessment that has not been the focus of other researchers (Andrade, Du, & Wang, 2008; McDonald & Boud, 2003; Ross et al., 1999, 2002). Other researchers have focused on student inferences about their learning only and not on student inferences about their effort (Andrade, Du, & Wang, 2008; McDonald & Boud, 2003; Ross et al., 1999, 2002). Evidence from these classrooms suggests that what use students make of their self assessment is another important attribute of this practice. Both teachers expected students to use self assessment to correct tasks on their summative tests. Both expected students to use their self assessment of their learning based on other work products throughout the unit in different ways. This confirms findings from other studies that asserted that how students use self assessment matters (Andrade, Du, & Wang, 2008; McDonald & Boud, 2003; Ross et al., 1999, 2002). If and how the teacher uses students' self assessment information is an important attribute of this practice that has not been the focus of other researchers (Andrade & Valtcheva, 2009). Liza used students' self assessment of their learning to add to her interpretation of students' pre test results. Kari did not use students' self assessment information herself. Combining types of data use. The evidence from Liza's classroom suggests that teachers' layering of different types of data use could be an important attribute of

formative assessment practice, which has not been an explicit focus of other researchers. Liza frequently combined different formative uses of student learning data in a single interaction with individuals or small groups of students. More than four times per class session, she combined oral feedback with formative questioning. She also used formative questioning of students working in a small group to activate students as resources for one another. In these instances, students received two types of instructional assistance during a single interaction. Scale of use. Consistent with Moss (2008), I described the scale of teachers' formative use of student learning data as the levels of decisions teachers made about student learning. This concept served to temporally connect data collection and data use. My third proposition related to teachers' use of information to determine what to do next occurring at different scales. A variety of scales of uses of student learning data were evident in both classrooms, providing supportive evidence for the proposition and supporting my consideration of the scale of data use as a key attribute of formative assessment practice. The scale of teacher decisions regarding what to do next is an important attribute of formative assessment practice when that practice mediates instructional planning. In both classrooms, most decisions that the teachers made about what to do next occurred in the moment, as part of the same learning activity during which data collection occurred. Liza frequently used data to determine what to do during a subsequent class session during the current unit. Kari did this rarely. This scale of use

was consistent with Liza's instructional approach, which included planning and making adjustments based on student learning data. This is consistent with the Frohbieter et al. (2011) characterization of adjustments made immediately, or in planning for the next day or within the week, as "true" formative assessment. Evidence from Kari's classroom suggests the scale of use also depends on whether the user of the data is the teacher or the students. In Kari's classroom, almost a quarter of the formative assessment episodes involved students as users of their own learning data. In a little over half of these, students used their learning data during a class session subsequent to when they collected the data. My third proposition didn't consider the possibility that the ones determining what to do next could be students. Summarizing attributes of formative assessment episodes. Table 7.2 provides a summary of the attributes of each of the actions included in formative assessment episodes evident across both classrooms. In the table, I phrased each attribute as a question.

Table 7.2 Attributes of Formative Assessment Episodes

Identifying learning target(s): To what do the targets that focus formative assessment practices relate (major learning goal(s) of the current unit, content from a prior or subsequent unit, or prerequisites for the current unit)? How do the learning targets within a unit relate to the major learning goal(s) of a unit? Does that relationship reflect the structure of the unit learning content? Are learning targets within the unit subparts of the major unit learning goal(s)? Is some other relationship evident? Are the learning targets that focus formative assessment episodes clear to the teacher?

Clarifying learning target(s) with students: Are the major unit learning goal(s) and learning targets associated with formative assessment episodes posted in the classroom where students can see them? Does the teacher interact with students regarding the vocabulary associated with the learning targets? Are students required to write down or somehow interact with learning targets? To what degree does the teacher explicitly associate learning activity with learning targets? Does the learning target come from the current unit content or not? If not, does the teacher clarify the learning target with his/her students?

Collecting Data: What data collection method did the teacher use (observation, questioning, or products)? How frequently did the teacher use informal data collection methods to collect data during the learning process? To what degree did the teacher plan in advance about what data would be collected? On what did planning focus: from which students to collect data or about which tasks to collect data? Did the tasks used in data collection align with the learning targets in terms of coverage and cognitive complexity? Did the students about whose learning data were collected (individuals, small groups, or the full class) align with the students whose learning experiences the data use affected? Are the data collection method and type of data use consistent? What is the data source (separate from the data collection method)?

Analyzing Data: Were data analyzed on the fly? Did analysis include identification of correct and incorrect responses (in math or other content where a right answer is identifiable)? Did analysis include assigning an overall rating?

Interpreting Data: Were data interpreted on the fly? Did interpretation focus only on inferences regarding individual students' mastery, or did it identify the aspects of the learning target(s) with which individual students needed additional assistance? Did interpretation include identification of tasks or parts of tasks with which multiple students struggled? Did interpretation include identification of groups of students with common learning needs (patterns of performance)? To what degree did the teacher's interpretation expand or restrict her use of student learning data?

Using Data: Who used the data (teacher or students)? If the teacher used the data, with which students (individuals, small groups, or the full class) did the teacher use it? To what degree did the scale of teacher data use (across multiple

formative assessment episodes) include one or more of the following: using data collected during one unit about learning content outside of the unit, using data collected during the current class session to determine what to do in a subsequent activity during the current class session, or using data collected during one class session to decide what to do during a subsequent class session during the current unit? Did teacher data use include using data collected during one class session to decide what to do during a subsequent class session during the current unit (necessary for formative assessment practice to mediate instructional planning)? What was the scale of student data use when students were the ones using the data? Did the data use fit into one of the following categories: activating students as resources for one another (formerly known as peer assessment), adjusting instruction (including selecting students for intervention and grouping students), formative feedback (oral to individuals or small groups, oral to the class as a whole, and written), formative questioning (individual or small group and full class), or students self assessing and adjusting their learning tactics? Was more than one type of data use combined in a single interaction between the teacher and his/her students?

Discussion of How Social Context Influences Formative Assessment Practice

From a socio cultural perspective, learning is individual internalization of social action (Vygotsky, 1978). This perspective assumes that the social context within which learning takes place shapes learning, and that the different elements of the social context influence one another (Moss, 2008). Anchored in this perspective, I described the role of formative assessment as a cultural tool that mediates learning. In other words, I defined formative assessment as one element of the social context that influences learning and that influences and is influenced by the other elements of the social context. A few other researchers have similarly considered the relationship between the classroom social context and formative assessment practices occurring within the

context of math instruction (Moss, 2008; Oxenford O'Brian, Nocon & Sands, 2010; Torrance & Pryor, 1998). However, each of these studies focused somewhat differently from the study reported here. Moss (2008) reanalyzed the teacher practice detailed in Magdalene Lampert's book, Teaching Problems and Problems of Teaching (2001, as cited in Moss, 2008), from a socio cultural perspective, with a focus on how Lampert used evidence of student learning to inform adjustments to her instruction and to provide opportunities to learn. Moss concluded with a call for assessment theory that "take(s) into account the way in which assessment functions as part of – shaping and shaped by – the local learning environment . . ." (p. 254). Oxenford O'Brian, Nocon and Sands (2010) focused on the relationship between the social context of one classroom and student participation in making meaning of summative assessment results (self assessment). Torrance and Pryor (1998) drew upon socio cultural theories of learning to study the 'microsociology of classroom assessment and classroom learning' to interrogate the relationship between 'assessment incidents' and student learning. Their assessment incidents were similar to what I have defined as formative assessment episodes. However, Torrance and Pryor more narrowly analyzed the discourse structure, and while they grounded their research in socio cultural theories, they didn't explicitly foreground the components of the classroom social context. My analysis adds to this literature but also represents an application of a much broader base of literature regarding a socio cultural perspective of learning (reviewed in Chapter 2) to the specific context of formative assessment practice in mathematics classrooms.

Consistent with this perspective, my second research question focused on the interaction between the classroom social context and formative assessment episodes. My first proposition associated with this research question defined the components of the social context of the classroom. I followed Moss (2008) and Oxenford O'Brian, Nocon and Sands (2010) in using the components of an activity system proposed by Engeström (1987, 1993, 2001) to define the social context. These components included the following: the object of activity, the tools used, who participated and the nature of that participation (including student and teacher roles), and the rules and norms that guided participation. I used these components to describe the social context of both classrooms at the instructional unit and learning activity levels because some aspects of the social context changed for each activity that made up the instructional units I observed. Using those components, I was able to describe the social context of both Liza's and Kari's classrooms at the unit and activity level. This supported my first proposition. However, I also discovered that the physical layout and use of space influenced social interaction and learning in Kari's classroom. Because it was different from typical classrooms, these features stood out. Although I didn't explicitly identify physical layout as a type of tool and thus part of the social context of the classroom, evidence from Kari's classroom suggests physical layout is important to explicitly consider.

similarities represented common conditions in both classrooms that established a context within which formative assessment practices could and did occur. The differences corresponded with the differences in the types of formative uses of data that occurred during different learning activities and what formative assessment practice mediated in each classroom. In what follows, I first discuss my findings regarding the relationship between the social context of each classroom overall, at the unit level, and the formative assessment episodes that occurred within them. Following that, I discuss the patterns evident across both classrooms regarding how each component of the social context influenced formative assessment practice at the learning activity level. I organize that section by the components of the social context. I include conclusions related to the remaining propositions in the tools section. The unit level social context and formative assessment episodes. The findings from these classrooms suggest that deeply understanding formative assessment in the context of actual K-12 classrooms means critically and simultaneously considering the social context of those classrooms. All of the components of the social context of Liza's classroom came together to create conditions within which Liza was able to constantly provide assistance to her students within each student's zone of proximal development. Liza and her students consistently shared the object of learning activity, and that object focused on student learning. She articulated and enforced behavioral norms that reinforced that focus. Supported by tools and behavioral norms, the participation structures of a majority of the learning activities in Liza's classroom

involved students actively engaging in doing math or talking about math in small groups. This established a context during which Liza could provide assistance to students within their zone of proximal development. Some similarities in the social context of Kari's classroom were evident; however, during typical class sessions, many components of the social context of Kari's classroom were more limiting. Kari did not share the object of activity with all of her students during the most frequently occurring learning activity: full class questioning. Her behavioral norms corresponded with a number of Kari's students not participating in full class discussions, and with Kari only giving a few students active roles during full class discussions. While Kari used a variety of tools to mediate learning activity in ways that supported formative assessment episodes, her tools also limited her practice in ways Liza's tools did not. For example, Kari's use of student work as a social object during class discussions did not provide opportunities for students to compare and contrast different student responses, a characteristic that Ruiz-Primo and Furtak (2004, 2007) identified as associated with better student results. The social context of Kari's classroom limited the degree to which formative assessment practice mediated her instructional strategies. On the day before, and the day after, the end of unit test, Kari changed the social context of her classroom; she created a different context within which formative assessment episodes could and did occur. The contrast in the social context and associated formative assessment episodes between Liza's and Kari's classrooms, and between typical days and the two exceptional days in Kari's classroom, reinforces the importance of the social context in influencing formative assessment practice. This is consistent with the findings from others who have also considered this

relationship (Moss, 2008; Oxenford O'Brian, Nocon & Sands, 2010; Torrance & Pryor, 1998). This contrast also illustrates that the social context can mediate formative assessment to different degrees. The similarities and differences between Kari's and Liza's formative assessment practices became clearer as I considered the relationship between their formative assessment practice and the social context of their classrooms. This also has implications for interventions aimed at increasing or improving teachers' formative assessment practice – those interventions must also explicitly address the various aspects of the social context. The activity level social context and formative assessment episodes. The patterns evident across both classrooms also revealed how each separate component of the social context of classrooms can influence formative assessment practice. My findings regarding how the various components of classrooms influence and are influenced by the formative assessment practices occurring within them largely confirm many of the related findings and assertions of other researchers. However, my foregrounding of each component of the classroom social context helped me to lift up aspects of the social context others had either assumed or not explicitly considered. Next, I discuss how the object(s), tools, participation structures (who participated and how), and norms/rules influenced the formative assessment episodes that occurred in these two classrooms. Object. How does the object of learning activity influence formative assessment practice? Evidence from these classrooms suggests that the nature of the object of activity

and the alignment between the teacher and her students with regard to the object of their joint activity determine whether the assistance the teacher provides is within the students' zone of proximal development. Both Kari and Liza identified learning objectives as the object of almost all learning activity. No formative assessment episodes occurred during learning activities with objects that weren't student learning objectives (e.g., a cookie and a book activity in Kari's classroom and class preferred activity time in Liza's classroom). The teacher establishing the object for learning activity as a student learning objective(s) was a necessary condition for formative assessment episodes to occur. Liza and her students shared the object of all activity occurring in her classroom, which contributed to the learning data collected during these learning activities being accurate. Kari and her students shared the object for many learning activities; however, approximately one quarter of her students did not share the object for learning activity that included full class discussions. As a result, the formative assessment episodes occurring during full class discussions used inaccurate data, making it likely that the associated assistance Kari provided was outside of the zone of proximal development of some of her students. Whether or not and how many of the students shared the learning object with the teacher influenced formative assessment episodes in both classrooms. This relationship between the alignment of the teacher's and her students' object for learning activity, and the formative assessment episodes occurring during different learning activities, has not been the focus of related studies. This study suggests it should be the focus of future studies.

Tools. How did the tools used as part of learning activities influence the types of formative data use in both classrooms? What did the tools used in both classrooms illuminate about the formative assessment practices occurring within them? To what degree did student learning data become a social object in classrooms where formative assessment practice occurred? These questions correspond with three propositions related to the second research question for this study and focus my discussion of findings related to cultural tools. How tools influenced data use and what they illuminated about formative assessment practices. I found three types of tools influenced formative data use in both classrooms, including teacher created materials, math tasks students completed, and student work used as a social object. Teacher created materials illuminated the features of student self assessment in both classrooms and structured learning activities within which a number of formative assessment episodes occurred. This finding is similar to the results reported by Oxenford O'Brian, Nocon and Sands (2010), who found that tools played a key role in structuring student self assessment of summative test results. Consistent with Moss's (2008) findings, I also found that the learning targets on which tasks focused in both classrooms influenced and illuminated the scale of the teachers' data use. In both classrooms the tools used as part of student self assessment illuminated features of this type of formative data use. The tools explicitly connected the learning targets to the tasks in which students engaged and provided a scaffold for student

analysis and interpretation of the resulting learning data. These tools illuminated that students connecting learning targets to tasks is part of student self assessment. This is consistent with a number of authors who have considered attributes of self assessment associated with improvements in student learning (Andrade, Du, & Wang, 2008; McDonald & Boud, 2003; Ross et al., 1999, 2002) and with Oxenford O'Brian, Nocon and Sands (2010), who also found the teacher they studied used tools to help students connect learning targets to tasks. The tools Kari and Liza used to scaffold self assessment for most assignments supported students' rating their performance. However, the tools used to scaffold student interpretation of learning data were different across the two classrooms. In Liza's classroom the tools students used to self assess on their tickets out prompted them to rate their effort in relationship to the learning target. The questions Liza asked of students while they interpreted their learning data as part of completing the unit pre test prompted students to describe their difficulty completing the tasks on the test. In Kari's classroom, the tools that students used to interpret data from their daily homework assignments and that they used to interpret their summative test results prompted students to explain their performance based on their learning tactics, and to identify how they would change their learning tactics moving forward. Kari expected her students to use their self assessment to change their learning tactics. Other researchers have considered the degree to which students revise their work based on self assessment (Andrade & Valtcheva, 2009), but have not focused on the tools students use in self assessment informing the teacher's interpretation of performance

data, or students' interpretation of the relationship between their performance data and their effort or their performance data and their learning tactics. The tools teachers used to structure small group interactions also influenced the type of formative data use that occurred in both classrooms. When tools were used to structure small group interactions, the teachers' formative use of data included frequent oral feedback and formative questioning of individuals and small groups of students. Torrance and Pryor (1998) considered assessment incidents occurring within the context of small group interactions, but not the tools the teacher used to structure those interactions. The learning targets on which tasks focused influenced and illuminated the scale of the teachers' data use in both classrooms. Some of the tasks used in both classrooms focused on content associated with the current instructional unit, and some focused on content outside of the current instructional unit. The teachers used data collected based on tasks that focused on learning targets associated with the current instructional unit to make decisions about what to do next during the current unit, either within the same class session or during a subsequent class session. Teachers used data collected based on tasks that focused on learning targets outside of the current instructional unit to make decisions about what to do next in learning activity that remained outside of the current unit. In other words, the tools used (tasks) determined the scale of teacher data use. Moss (2008) considered the scale of a teacher's data use, but not how the tasks on which student learning activity focused influenced that scale.

Finally, evidence from both classrooms supported my fourth proposition, that where formative assessment practices are evident, information about learning (including mistakes or misunderstandings) becomes a "social object," a tool that is valued in terms of how it is described and how it is used. Both teachers frequently (more than once per class session) used student work as a social object to mediate formative assessment practice and student learning. Moss (2008) and Jordan and Putz (2004) asserted that teachers who thought about assessment as a way of engaging in instruction used student work as a social object. The evidence from my case study classrooms also supports the following extension to my fourth proposition: how teachers use student learning data as a social object influences the associated formative assessment practices. The formative practices mediated by the teachers' use of student work as a social object varied. Kari used student work as a social object to mediate full class questioning; Liza used student work as a social object to collect data about students' agreement with different approaches to completing a task and as a basis for providing oral feedback to the whole class. Liza required her students to write their own work on the board and used both correct and incorrect student responses as a social object. Kari asked her students to share their work orally, and only wrote correct work on the board for others to see. Liza's use of student work as a social object included student mistakes or misunderstandings and Kari's did not. The features of the teachers' use of student work as a social object influenced their formative assessment practice. My findings extend the findings of Ruiz-Primo and Furtak (2004, 2007) that teachers' use of a strategy of comparing and contrasting student responses is associated with

improved student outcomes. My findings also suggest that considering how teachers use student work as a social object can deepen understanding of teacher formative assessment practices. Participation structures. How do the participation structures of different learning activities influence formative assessment episodes occurring during them? Evidence from both classrooms supports the proposition that participation structures influence the type of data use that occurs during formative assessment episodes. Student roles that included doing math tasks, talking about already completed math tasks, or self assessing regarding math tasks may be a prerequisite for formative assessment episodes to occur. Whether or not the teacher organizes students into small groups for learning activity influences the type of data use and the frequency of formative assessment episodes. Finally, student seating assignments, if based on student learning data, create conditions within which teachers can frequently activate students as resources for one another. In both classrooms, all formative assessment episodes occurred during learning activities for which student roles included doing math tasks, talking about already completed math tasks, or self assessing regarding math tasks they had already completed. For example, both teachers engaged students in a learning activity after summative tests that involved students analyzing and interpreting the results of their tests and correcting items that they missed. As a result, formative assessment episodes could and did occur based on the summative test results.

How each teacher grouped or did not group students for learning activity influenced the types of data use that occurred. Liza frequently had students work in small groups during learning activity. Kari only had students work in small groups during the class sessions the day before and after the end of unit test. Liza daily and constantly provided oral feedback and/or initiated formative questioning with individual students or small groups of students, focused at their zone of proximal development. Kari only did the same the day before and after the end of unit test. Other researchers have focused on whether or not teachers used student learning data to group students for differentiated learning experiences (Frohbieter et al., 2011), and on assessment incidents that occurred while students were working in small groups (Torrance & Pryor, 1998). However, the influence of grouping on the types of data use in which teachers engage has not been the focus of prior formative assessment studies. My findings suggest that how teachers group students for learning activity may determine the types of formative data use that can occur. How each teacher seated her students became the default student grouping for learning activity during all class sessions. The seating arrangements in Liza's classroom supported her activating students as resources for one another (or peer assessment). She seated students based on their pre test results, with each table including one student who did well on the test, one who did poorly, and one or two in the middle. Kari also seated her students in groups of three or four, and they also took a pre test at the beginning of the unit. However, Kari did not base her students' seating assignments on their performance on the pre test. It may have been difficult for her to do so since she

taught more than math to the same group of students. The seating arrangements in Kari's classroom did not establish a context within which she could easily activate students as resources for one another and know that students who struggled worked with students who did not. Other studies that focused on the attributes of peer assessment (or activating students as resources for one another) that lead to improvements in student performance have not focused on the role of student seating (Cho & MacArthur, 2010; Gielen et al., 2011; Orsmond, Merry, & Callaghan, 2004; Topping, 1998, 2010; Zundert et al., 2010). In fact, much of the peer assessment research has been done in post secondary settings where seating is less frequently the prerogative of the teacher. This study suggests that how K-12 teachers assign student seats may be a catalyst for activating students as resources for one another. Norms and rules. How do norms and rules influence formative assessment practice? Evidence from Liza's and Kari's classrooms suggests that the influence of rules and norms on formative assessment practice is indirect. Differences in the behavioral norms and rules, and how they were enforced, corresponded with differences in how these two teachers grouped students for classroom activity during which formative assessment episodes occurred. Liza's formative uses of data most frequently included interactions with groups of students. Kari's more frequently involved students working independently or Kari interacting with the full class. Rules related to homework completion in both classrooms determined which students could participate in activities that included a number of formative assessment episodes. Liza had strict rules, and her students

experienced significant consequences for not bringing completed homework assignments to class on the day they were due. Kari's students did not appear to experience any consequences for coming to class without completing their homework assignments on the days they were due. Liza's students came to class with their homework completed. It was difficult to estimate how many of Kari's students started math class without completed homework, but several did each day. As a result, fewer students participated in formative assessment episodes that used homework as a data source in Kari's classroom than in Liza's. The evidence from both classrooms supports the proposition that rules and norms influence data use within classrooms. Within these classrooms, that influence was indirect, through the roles that students and the teacher played in each classroom. Other studies have considered the conditions that influence teacher use of student learning data; for example, Herman et al. (2010, 2011) considered the influence of teacher pedagogical knowledge, and Frohbieter et al. (2011) considered "the kind of information teachers gleaned from assessment" (p. 2). However, these studies did not consider the influence of classroom rules and norms. The evidence from these classrooms suggests the influence of norms and rules on classroom data use, and in particular the indirect influence, should be the focus of future research.

Limitations of Findings

Several features of the design of this study limit the use and generalizability of my findings. This study was a cross-case analysis of formative assessment practices occurring during a single mathematics instructional unit in one 5th and one 6th grade

classroom. Case study as an analytical approach is appropriate when the questions guiding the study relate to how something works or how things relate, especially when it isn't possible to manipulate behaviors in ways that are required by an experimental design (Yin, 2009). My research questions related to how formative assessment practice works in context and how the social context of the classroom and formative assessment practice relate to one another. My context was K-12 classrooms, an environment within which it is always difficult to use experimental design since parents hesitate to allow their children to be a part of an experiment. In addition, when an area of research is relatively new or theory is still developing, case study as an analytical approach can be a key strategy for establishing theories worth studying further (Yin, 2009). These conditions applied to the current state of research on formative assessment practice. Most of my findings related to establishing theories worth studying further. The same conditions which made a cross-case analysis an appropriate analytical approach, however, limit the appropriate uses of the results of this study. I described the critical attributes of formative assessment practice as it occurred in two K-12 math classrooms. Because I based my analysis on multiple replications of formative assessment episodes occurring within these classrooms, I could generalize my findings to formative assessment practice occurring within these classrooms. My research was empirical, but met no criteria for generalizing my findings to other classrooms. I didn't describe the critical attributes of formative assessment for all 5th or 6th grade mathematics classrooms. Rather, I provided evidence that confirms what other empirical researchers have found regarding critical attributes of formative assessment

practice, and I identified attributes for future researchers to investigate. I also explained the relationship between the social context of these two classrooms and the formative assessment episodes that occurred during a single instructional unit. Again, these findings included evidence affirming what other researchers have found and pointing to important areas for future research; however, they do not generalize to all 5th and 6th grade math classrooms. Two additional limitations of this study relate to the design of the study. First, while I indicated that some of my findings may be specific to mathematics, I did not collect data during instruction in any other content area. Second, while several researchers have identified pedagogical content knowledge as a mediator of formative assessment practice (e.g., Herman, Osmundson, & Silver, 2010; Herman et al., 2011), I did not collect or analyze the pedagogical content knowledge of Liza or Kari. As a result, I could not consider how these teachers' pedagogical content knowledge may have interacted with the attributes of their formative assessment practice.

Conclusions and Implications for Future Research

This study responds to a critical gap in the current formative assessment literature. It more fully describes a theoretical foundation for the role of formative assessment in teaching and learning from a socio cultural perspective and identifies that role as a mediator of instructional planning, instructional practice, and student learning tactics. This study provides an empirically tested definition of a formative assessment episode that distinguishes it from other similar practices occurring in classrooms. Also, it provides a methodology for identifying formative assessment episodes as they occur

within a K-12 classroom context. This study paves the way for future research to test the role of formative assessment in learning and my definition and methodology for identifying formative assessment episodes in additional classroom contexts. My identification of attributes of each element of a formative assessment episode across the many occurrences in my case study classrooms confirms prior empirical research, but also points to potentially important attributes others have not considered. These attributes represent an extension of the definition of formative assessment that increases the utility of that definition for both researchers and practitioners. They expand the list of attributes of formative assessment practice that potentially determine the effectiveness of the practice in improving student learning outcomes and, again, pave the way for additional empirical research. This study also affirms the importance of the classroom social context, including the following components: object, tools, participants, roles, and rules or norms. Social context matters not just in a general sense, but also specifically in determining whether or not and how formative assessment practice occurs. Whether or not teachers and students share the object of learning activity determines the accuracy of data collected about student learning during that activity. How teachers structure student participation – individually, in small groups, or as a full class – determines the types of formative data use that can occur. The tools that teachers and students use to structure and support participation structures also structure formative assessment episodes. The tools teachers and students use during classroom activity illuminate critical features of formative assessment practices – features that are not evident without considering the

tools. Whether or not and how teachers use student work as a social object serves as an indicator that formative assessment practice is occurring. Classroom rules and norms, and how they are enforced, either extend or limit the potential for formative assessment practice. This study underscores the importance of researchers explicitly considering the classroom social context in any study of formative assessment practice. This study also supports practitioners who aim to expand and improve teacher implementation of formative assessment within K-12 classrooms. First, it provides a definition of and approach to identifying formative assessment episodes, making it easier to recognize formative assessment as it occurs in situ. Second, it gives practitioners a starting point for considering the attributes of formative assessment practice and determining the degree to which those attributes are present within any classroom. Finally, it establishes the importance of the social context of the classroom, paving the way for practitioners to consider the degree to which the different components of the social context support or fail to support formative assessment practice. Finally, this study has implications for policy. Teacher evaluation in Colorado and across a number of states now uses observation of professional practice to determine teacher ratings and ongoing employment. My experience observing two teachers and characterizing their practice is relevant to policies related to teacher evaluation, although it wasn't the focus of this study. I observed both classrooms for multiple, subsequent days. It took me over a week of subsequent observations to notice that Liza started most days with a sticky note in her hand listing the names of the students with whom she was planning to check in during the first part of the class, based on data

It took several consecutive days of observation, post-observation interviews, and ultimately a review of the audio recording of each class session for me to come close to catching the formative assessment episodes occurring in both classrooms. These experiences are inconsistent with current teacher evaluation policies, which recommend that school leaders use two or three non-sequential observations across a school year as the basis for half of teachers' evaluations (MET Project, 2013; Reform Support Network, 2013). If school leaders are to adequately capture and assess the presence and quality of formative assessment practices occurring in classrooms, frequent and sequential observation is essential and must be paired with teacher interviews about their practice.

This study is a call for future research on formative assessment practice embedded within the K-12 classroom context, research that starts with the practice of formative assessment rather than with the assessment instruments used in data collection. The call is for research that uses a theoretically derived definition of the role of formative assessment and a clearly specified definition of the elements of formative assessment episodes. The call is for research that simultaneously considers the social context of the classroom as a whole and separately considers each element of that social context as a direct or indirect influence on formative assessment practice.

REFERENCES

Airasian, P. W. (1991). Classroom assessment. New York, NY: McGraw Hill.
Andrade, H. L., Du, Y., & Wang, X. (2008). Putting rubrics to the test: The effect of a model, criteria generation, and rubric-referenced self-assessment on elementary school students' writing. Educational Measurement: Issues and Practice, 27(2), 3–13.
Andrade, H., & Valtcheva, A. (2009). Promoting learning and achievement through self-assessment. Theory into Practice, 48(1), 12–19.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215.
Bangert-Drowns, R. L., Kulik, C. L. C., Kulik, J. A., & Morgan, M. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61(2), 213–238.
Bell, B., & Cowie, B. (2001). Formative assessment and science education. Dordrecht, Netherlands: Kluwer Academic Publishers.
Bell, B., & Cowie, B. (2001). The characteristics of formative assessment in science education. Science Education, 85(5), 536–553.
Bennett, R. E. (2011). Formative assessment: A critical review. Assessment in Education: Principles, Policy & Practice, 18(1), 5–25. doi:10.1080/0969594X.2010.513678
Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in Education, 5(1), 7–68.
Black, P., & Wiliam, D. (1998b). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80(2), 139–148.
Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21, 5–21. doi:10.1007/s11092-008-9068-5
Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience and school. Washington, DC: National Academy Press.
Brookhart, S. M. (2004). Classroom assessment: Tensions and intersections in theory and practice. Teachers College Record, 106(3), 429–458.
Brookhart, S. M. (2008). How to give effective feedback to your students. Alexandria, VA: ASCD.

Brookhart, S. M. (2009). Editorial. Educational Measurement: Issues and Practice, 28(3), 1–4.
Butler, R. (1987). Task-involving and ego-involving properties of evaluation: Effects of different feedback conditions on motivational perceptions, interest and performance. Journal of Educational Psychology, 79, 474–482.
Butler, R. (1988). Enhancing and undermining intrinsic motivation: The effects of task-involving and ego-involving evaluation on interest and performance. British Journal of Educational Psychology, 58, 1–14.
Butler, R., & Nisan, M. (1986). Effects of no feedback, task-related comments, and grades on intrinsic motivation and performance. Journal of Educational Psychology, 78(3), 210.
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245–281.
Chin, C. (2006). Classroom interaction in science: Teacher questioning and feedback to students' responses. International Journal of Science Education, 28(11), 1315–1346.
Cho, K., & MacArthur, C. (2010). Student revision with peer and expert reviewing. Learning and Instruction, 20, 328–338. doi:10.1016/j.learninstruc.2009.08.006
Cizek, G. (2010). An introduction to formative assessment: History, characteristics, and challenges. In H. Andrade & G. J. Cizek (Eds.), Handbook of formative assessment (pp. 3–17). New York: Routledge.
Cole, M. (1990). Cultural psychology: A once and future discipline? In J. J. Berman (Ed.), Nebraska Symposium on Motivation: Cross-cultural perspectives. Lincoln: University of Nebraska Press.
Cole, M. (1999). Cultural psychology: Some general principles and a concrete example. In Y. Engeström, R. Miettinen, & R.-L. Punamäki (Eds.), Perspectives on activity theory (pp. 87–106). Cambridge: Cambridge University Press.
Cole, M. (2002). Sociocultural perspectives on assessment learning. In G. Wells & G. Claxton (Eds.), Learning for life in the 21st century: Sociocultural perspectives on the future of education. United Kingdom: Blackwell Publishing.
Cole, M., & Engeström, Y. (2007). Cultural-historical approaches to designing for development. The Cambridge handbook of sociocultural psychology, 484–507.
Council of Chief State School Officers (CCSSO). (2008). Attributes of effective formative assessment. Washington, DC.

Crain, W. (1992). Theories of development: Concepts and applications (5th ed.). Englewood Cliffs, NJ: Prentice Hall.
Dweck, C. S. (2000). Kinds of praise and criticism: The origins of vulnerability. In Self-theories: Their role in motivation, personality, and development (pp. 107–115). New York: Psychology Press.
Elwar, M., & Corno, L. (1995). A factorial experiment in teachers' written feedback on student homework: Changing teacher behavior a little rather than a lot. Journal of Educational Psychology.
Engeström, Y. (1987). Learning by expanding: An activity-theoretical approach to developmental research (Doctoral dissertation).
Engeström, Y. (1993). Developmental studies of work as a test bench of activity theory: The case of primary care medical practice. Understanding practice: Perspectives on activity and context, 64–103.
Engeström, Y. (2001). Expansive learning at work: Toward an activity-theoretical reconceptualization. Journal of Education and Work, 14(1), 133–156.
Frohbieter, G., Greenwald, E., Stecher, B., & Schwartz, H. (2011). Knowing and doing: What teachers learn from formative assessment and how they use the information (CRESST Report 802). Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
Gallimore, R., & Tharp, R. (1990). Teaching mind in society: Teaching, schooling, and literate discourse. In L. C. Moll (Ed.), Vygotsky and education: Instructional implications and applications of sociohistorical psychology (pp. 11–126). New York: Cambridge University Press.
Gee, J. (2008). Introduction. In P. A. Moss, D. C. Pullin, J. P. Gee, E. H. Haertel, & L. J. Young (Eds.), Assessment, equity, and opportunity to learn (pp. 1–16). New York: Cambridge University Press.
Gielen, S., Dochy, F., Onghena, P., Struyven, K., & Smeets, S. (2011). Goals of peer assessment and their associated quality concepts. Studies in Higher Education, 36(6), 719–735.
Giest, H., & Lompscher, J. (2003). Formation of learning activity and theoretical thinking in science teaching. Vygotsky's educational theory in cultural context, 267–288.
Gipps, C. (1996). Assessment for learning. Assessment in transition: Learning, monitoring and selection in international perspective. Oxford: Pergamon.

Gipps, C. (1999). Socio-cultural aspects of assessment. Review of Research in Education, 24, 355–392.
Gipps, C. (2002). Sociocultural perspectives on assessment learning. In G. Wells & G. Claxton (Eds.), Learning for life in the 21st century: Sociocultural perspectives on the future of education. United Kingdom: Blackwell Publishing.
Guskey, T. R. (2010). Formative assessment: The contributions of Benjamin S. Bloom. In H. Andrade & G. J. Cizek (Eds.), Handbook of formative assessment (pp. 106–124). New York: Routledge.
Guskey, T. R., & Bailey, J. M. (2001). Developing grading and reporting systems for student learning. Thousand Oaks, CA: Corwin Press.
Harris, L., Irving, S. E., & Petersen, E. (2008). Secondary teachers' conceptions of the purpose of assessment and feedback. Paper presented at the annual conference of the Australian Association for Research in Education, Brisbane, Australia.
Hattie, J. (2009). The black box of tertiary assessment: An impending revolution. Tertiary assessment & higher education student outcomes: Policy, practice & research, 259–275.
Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
Heritage, M., Jones, B., & White, E. S. (2010). Supporting teachers' use of formative assessment evidence to plan the next instructional steps. Paper presented at the annual meeting of the American Educational Research Association (AERA), Denver, CO.
Herman, J. L., Osmundson, E., Dai, Y., Ringstaff, C., & Timms, M. (2011). Relationships between teacher knowledge, assessment practice, and learning: Chicken, egg, or omelet? (CRESST Report 809).
Herman, J. L., Osmundson, E., & Silver, D. (2010). Capturing quality in formative assessment practice: Measurement challenges (CRESST Report 770).
James, M. (2008). Assessment and learning. In S. Swaffield (Ed.), Unlocking assessment: Understanding for reflection and application. London: Routledge.
James, M., McCormick, R., Black, P., Carmichael, P., Drummond, M., Fox, P., & Wiliam, D. (2007). Improving learning how to learn. United Kingdom: Routledge.
Jordan, B., & Putz, P. (2004). Assessment as practice: Notes on measures, tests, and targets. Human Organization, 63(3), 346–358.

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119(2), 254–284.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation. Cambridge, UK: Cambridge University Press.
LeCompte, M. D., & Schensul, J. J. (Eds.). (1999). Designing and conducting ethnographic research (Vol. 1). Rowman Altamira.
Lee, D., & Gavine, D. (2003). Goal setting and self-assessment in year 7 students. Educational Research, 45(1), 49–59.
Li, M., Yin, Y., Ruiz-Primo, M. A., & Morozov, A. (2011). Identifying effective feedback practices on student mathematics learning: A literature synthesis. Paper presented at the annual meeting of the American Educational Research Association (AERA), Denver, CO.
Lipnevich, A. A., & Smith, J. K. (2008). Response to assessment feedback: The effects of grades, praise, and source of information. Princeton, NJ: ETS.
Marzano, R. J. (2006). Classroom assessment and grading that work. Alexandria, VA: ASCD.
Marzano, R. J. (2001). Designing a new taxonomy of educational objectives. Experts in Assessment Series. Thousand Oaks, CA: Corwin Press.
Mavrommatis, Y. (1997). Understanding assessment in the classroom: Phases of the assessment process – the assessment episode. Assessment in Education: Principles, Policy & Practice, 4(3), 381–400. doi:10.1080/0969594970040305
McDonald, B., & Boud, D. (2003). The impact of self-assessment on achievement: The effects of self-assessment training on performance in external examinations. Assessment in Education, 10(2), 209–220. doi:10.1080/0969594032000121289
Mory, E. H. (2004). Feedback research revisited. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology. New York, NY: Simon & Schuster Macmillan.
Moss, P. (2008). Socio-cultural implications for assessment I: Classroom assessment.