Given the important role that assessment plays in the classroom, at the school and district level, and at the state and federal level, it is remarkable how little many educators really know about assessments. In a recent ASCD article – All About Accountability / Needed: A Dose of Assessment Literacy – W. James Popham stated, “What most of today’s educators know about education assessment would fit comfortably inside a kindergartner’s half-filled milk carton.”
Let’s first back up and review the role that assessments play in the day-to-day lives of all educators. Educators generally agree that there are three broad categories of educational assessment – formative, interim, and summative. Formative assessment guides learning and includes giving clear, actionable feedback to students, sharing learning goals, and modeling what success looks like.
Summative assessment certifies learning. Generally, educators administer a summative assessment near the end of an instructional unit to help them answer the question, “What did students learn?” Summative assessments take many forms, but state accountability tests most often come to mind. High stakes can be associated with summative assessment, such as selection, promotion, and graduation. Policymakers also use state assessment data to communicate the state of education to the public.
Interim assessment guides and tracks learning. A wide middle ground exists between teachers’ day-to-day formative assessment of student learning and the formal protocols of state summative assessment. This middle ground offers opportunities—captured under the umbrella term interim assessment—to gather information about many things that are relevant to the teaching and learning process.
Understanding the differences between these three types of assessments is important in determining how best to use the data and insight each one provides. As Popham mentions in his article:
This situation is analogous to asking doctors and nurses to do their jobs without knowing how to interpret patient charts. Because health professionals are evaluated according to the longevity and physical well-being of their patients, you can be certain that those professionals thoroughly understand how to ascertain a patient’s vital signs. They’re called vital signs because they’re vital!
We’ve discussed in the past what we call the four pillars of assessment literacy, and they are worth repeating.
1. Assessment-literate educators demonstrate data literacy:
- They know the different kinds of data that exist and which kind of data to use for which decision.
- They evaluate the accuracy and sufficiency of each kind of data they will use.
- They transform data from a variety of sources (classroom, school, district, state) into actionable information to guide decisions.
- They hold themselves accountable for ethical generation, interpretation, and application of assessment data.
2. Assessment-literate educators create and/or select quality assessments:
- They create and/or select assessments that balance formative and summative purposes to meet the information needs of all stakeholders, including students.
- They establish clear learning targets that form the basis of instruction and assessment.
- They ensure that their assignments and assessments match the learning targets that have been or will be taught.
- They select assessment methods to match types of learning targets.
- They create and/or select assessment items, tasks, and scoring guides that meet standards of quality.
- They sample learning appropriately.
- They control for factors that can bias results.
3. Assessment-literate educators know how to integrate assessment practices and assessment results into instructional decisions:
- They use formal and informal assessment processes and tools for the purpose of diagnosing instructional needs.
- They interpret results accurately and act on them appropriately to increase student learning.
- They involve students in the assessment process, developing students’ ability to self-assess, set goals for further learning, and self-regulate.
4. Assessment-literate educators know how to communicate accurately about student learning:
- They adhere to sound grading practices.
- They keep students informed of their learning progress and the intended learning targets, using both qualitative and quantitative data.
- They keep parents informed of their children’s learning progress.
- They are able to explain accurately to all stakeholders the meaning and appropriate use of results from all assessments given.
As Popham concludes, and we most certainly agree:
This is a sincere plea. We are members of a profession currently buffeted by a test-based tornado that too few of our members understand. We must get a sufficient number of our colleagues to learn more about the rudiments of education assessment and, having done so, communicate what they’ve learned. What we need, in short, is to propagate the promulgators. And we need to do it sans delay.