
Using Summative Assessment Data – a Heretical Pragmatist’s Opinion Part 1: Evolution of a Position


Assessment data are important in education at all levels. In our blog, we often look at how students, teachers, and parents can better use and communicate about assessment to support student learning. School leaders, district administrators, state departments of education, and policymakers all need data to make informed decisions, but the specific data they need to reach those shared goals can differ. In this two-part series, Beata Thorstensen shares the perspective of a state educational leader working to help districts and schools understand and use the results of summative assessments appropriately for continuous improvement and advocacy.

Since 2003, I have been engaged in state policy and local implementation discussions on the use of assessment data for school improvement. I have worked at the state level, helping to plan and implement large-scale standards-based assessment systems, and at the district and school levels, helping schools and teacher teams use data to plan, implement, and monitor improvement processes. Along the way, I have developed what I view as a pragmatic position on using summative data, a position that can, at times, run against the conventional wisdom of assessment designers and psychometricians.

The evolution of my thinking began in 2005. New Mexico had just received a large grant from The Wallace Foundation to help school leaders use data effectively, and my state organization, the Office of Education Accountability, was the project lead. We discovered that most school district leaders were unable to access our state assessment data effectively. The data arrived in a flat file on a CD, and most of our school districts lacked personnel with the expertise to access and use these types of files. For many, the CDs did little but collect dust. We started small, creating district-level files using simple pivot tables to help district leaders answer basic questions like, “Which of our subgroups are struggling most in mathematics?” I drove around the state in a little white Corolla delivering professional development to school leaders on how to use these pivot tables. Cool fact: New Mexico is the fifth-largest state in the Union, and we have school districts that are geographically larger than the state of Rhode Island. Over the course of several years, I put more than 200,000 miles on that car.
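For readers who want a concrete picture, here is a minimal sketch of the kind of pivot-table summary described above. The file layout and column names (subgroup, grade, subject, proficient) are hypothetical stand-ins, not the actual New Mexico flat-file format.

```python
import pandas as pd

# Stand-in for the student-level flat file that arrived on CD.
# Each row represents one student's result on the state assessment.
records = pd.DataFrame({
    "subgroup":   ["ELL", "ELL", "SWD", "SWD", "All", "All", "ELL", "SWD"],
    "grade":      [3, 4, 3, 4, 3, 4, 3, 4],
    "subject":    ["Math"] * 8,
    "proficient": [0, 1, 0, 0, 1, 1, 0, 1],   # 1 = met the proficiency cut
})

# Percent proficient in math by subgroup and grade -- the kind of simple
# table a district leader could use to ask "which subgroups are struggling
# most in mathematics?"
math_summary = (
    records[records["subject"] == "Math"]
    .pivot_table(index="subgroup", columns="grade",
                 values="proficient", aggfunc="mean")
    .round(2)
)
print(math_summary)
```

The same table can be built in a spreadsheet's pivot-table tool; the point is simply that a composite score becomes far more useful once it can be sliced by grade and subgroup.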

At the time, our files contained composite scores in ELA and Math, which, due to reliability and validity concerns, were considered the most granular data we could provide. Superintendents and school principals, who had hitherto had little to no data to examine, happily sliced and diced these data by grade and subgroup. But they inevitably said something like this: “Beata, this is great, but I need more. This tells me we’re struggling in math; I need to know where in math we’re struggling.” My response, a mini-lecture on issues of technical quality in assessment, echoed an argument espoused in various academic publications on assessment. For example, in Principles and Standards for School Mathematics (2000), the National Council of Teachers of Mathematics asserts:

“Mathematics is not a collection of separate strands or standards, though it is often partitioned and presented in this manner. Rather, mathematics is an integrated field of study. Viewing mathematics as a whole highlights the need for studying and thinking about the connections within the discipline, as reflected both within the curriculum of a particular grade and between grade levels.” (p. 64)

But there are pragmatic problems with this statement. We don’t teach that way, and we don’t use assessments that way. Education accountability carries increasingly serious consequences for schools that fail to meet summative targets. School leaders don’t have the time or money to replace their curricula wholesale and hope that a new curriculum solves an achievement problem. They need targeted information from us. They need us to help them be precise.

The consequences for failing to meet targets under ESSA have, for the most part, only become more serious. Some states now use summative scores not only to rate schools but also to evaluate teachers and to serve as exit exams for high school students, often regardless of whether these assessments were designed for those purposes. Indeed, our systematic re-purposing of summative assessment has become frankly endemic. Some states use student scores to assess teacher effectiveness through regression models that label the residuals as “teacher effects.” Others use norm-referenced college placement exams as proxies for demonstrations of high school competency. We have placed more stakes on these exams than they were ever intended to carry. But the barn door is open, and we aren’t going back any time soon.
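To make the “residuals as teacher effects” point concrete, here is a minimal sketch of that logic: predict this year’s score from last year’s, then average each teacher’s residuals. The data and column names are invented for illustration; real value-added models are far more elaborate, which is precisely part of the concern about re-purposing summative scores this way.

```python
import numpy as np
import pandas as pd

# Hypothetical student records: teacher assignment plus prior- and
# current-year scale scores.
students = pd.DataFrame({
    "teacher":     ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "prior_score": [610, 640, 655, 700, 720, 690, 660, 705, 675],
    "score":       [625, 660, 665, 705, 740, 700, 655, 715, 690],
})

# Fit a simple linear prediction of this year's score from last year's.
slope, intercept = np.polyfit(students["prior_score"], students["score"], 1)
students["predicted"] = intercept + slope * students["prior_score"]

# The residual is how far each student landed above or below prediction;
# averaging it by teacher is what gets labeled a "teacher effect."
students["residual"] = students["score"] - students["predicted"]
print(students.groupby("teacher")["residual"].mean().round(1))
```

Everything the model cannot explain, including measurement error and factors outside the teacher’s control, lands in those residuals, which is why attaching stakes to them is contested.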

In next week’s blog, Beata shares recommendations on how to use summative assessment data appropriately for continuous improvement and to advocate for resources to improve student learning.

Beata I. Thorstensen

Beata I. Thorstensen has worked in the fields of large-scale assessment and continuous improvement for the better part of 15 years. Her main area of focus has been on how to develop assessments that provide educators with actionable data for improving instruction. She has worked as a consultant for the US Department of Education as well as at the state and district levels. Her current passion is helping educators in Rio Rancho Public Schools use assessments to examine and improve the art and science of teaching.