In last week’s blog, Beata Thorstensen shared the story of how her view of summative assessment data and its use evolved as she worked with districts and schools across her state to understand and use these data. This week, Beata shares some recommendations on how to use summative assessment data appropriately for continuous improvement and to advocate for resources to improve student learning.
Given the uses of assessment by our accountability systems, it is incumbent on state and local education agencies to provide school leaders with data that are granular enough to give a summative picture of instructional strengths and weaknesses within content domains. Schools need this level of granularity in order to connect these data meaningfully to instruction. We must explain, in plain terms, what these data can and cannot say about student performance. Without this level of detail and explanation, we are effectively hamstringing improvement efforts. Without this detail we are essentially telling people, “Improve student achievement in math,” and when they ask, “How?” we are responding with silence, or worse, purveying platitudes like, “Teach to the standards and the test will take care of itself.” We need to do better than that. We need to ensure that our assessments are technically rigorous enough to allow for detailed analysis down to the subdomain. If our tests can’t do that, we need to improve our tests.
What level of detail is enough? The exact answer to that question will indubitably be argued for years to come, but here’s my perspective. It has to be detailed enough to get teachers on the path to examining their instruction in targeted ways. “Math” is clearly too broad. “Numbers and Operations” is better, but for my money, I’m looking for things like “place value.” Give me that, and I can start asking questions like, “How are we teaching place value? What are some examples of student work that show us where student understanding falls apart? And how will we amend our instruction to solve the problem?”
In New Mexico, we are making progress in providing as much detailed data as possible—PARCC evidence statements, for example, help tremendously—although we still have miles to go. Attempts at a state data warehouse system in New Mexico, complete with dashboards and real-time data, have been, to date, less successful, but are still part of the futurescape. In the meantime, in districts we build our own data displays, purchase commercial off-the-shelf (COTS) dashboard systems, and use an awful lot of Excel to slice and dice data effectively. It’s not flashy, there’s a lot of manual work, and frankly we need to do better. We need real-time data. We need connections with school climate and early warning variables. We need connections to post-secondary outcomes data. The list could go on ad nauseam.
Using Data for Improvement and Advocacy
Nevertheless, while these data are used by the state as a platform for accountability, we now use them at a district level as a platform for continuous improvement and advocacy.
As we work with school teams examining data, we begin with the big picture of our summative data. We ask questions like:
- Where are we struggling most—ELA, Math? Where is it most important to focus our improvement efforts?
- Within ELA and Math—what sub domains are we struggling with the most?
- Do all grades and subgroups struggle equally? Or are there gaps in performance? If so, where?
- Do we see the same areas of struggle in our interim assessments? In student work?
- How are we teaching those things that students are struggling with? How do we improve that instruction?
- How will we measure our progress? What are our goals?
- What supports and resources do we need to improve our instruction?
We ask questions of the data to understand our strengths and weaknesses and build improvement plans, and we use the data to advocate for resources on behalf of our schools, teachers, students and families. For example: in my current district, we have identified through PARCC data that students across all grades and subgroups are struggling to meet Common Core expectations in writing. These data have allowed schools to advocate for better supports in writing. So, the district is building comprehensive professional development and coaching in writing that provides opportunities for teachers to improve their understanding of grade-level expectations and how to teach writing in ways that (we hope) will facilitate long-term improvement.
We recognize the need for and legitimate uses of summative assessment data in our continuous improvement plans. In many ways, it’s a compass for a lot of our forward momentum. That is also something that won’t, and probably shouldn’t, change any time soon. More and more, we’re aligning our resources and efforts to support quality standards-aligned instruction. Those alignment and resource decisions are based on summative data. We are working on creating better and better systems for data delivery to help school principals and teachers use those data to advocate for targeted curriculum resources and other supports to improve outcomes for students.
Summative data, ultimately, can and should be used to analyze where we are and where we need to go with curriculum and standards-based instruction. We should use them to advocate for resources for schools, teachers and students, and to align our supports for schools. But in order to do that, we need to ensure that our assessments are technically rigorous enough to produce data detailed enough to inform those things.