22 Oct
The future(?) of state assessment (Part I): A conversation with NWEA’s Abby Javurek
By Dale Chu
Abby Javurek is the Senior Director of Large Scale Assessment Solutions at the Northwest Evaluation Association (NWEA), an education services organization probably best known for its computerized adaptive “MAP” tests. NWEA recently announced the development of an adaptive, “through-year” assessment, a new solution that it says eliminates the need for states to administer an end-of-year summative test. The organization is currently partnering with Nebraska and a consortium of districts in Georgia as early adopters. (Readers might recall that the effort in Georgia is part of ESSA’s demonstration pilot.) I recently talked with Abby about the announcement and what it might mean for the future of state assessment. In part one of this two-part interview, Abby discusses through-year assessment and how it compares with previous efforts to improve testing systems. Here’s what she said.
Dale Chu: What is through-year assessment? What makes it important?
Abby Javurek: Through-year assessment is an adaptive assessment for grades 3-8 that measures both growth and proficiency. It is administered three times a year – in the fall, winter, and spring – and delivers timely insights on where students are in their learning and how much they’ve learned over time, even if they are above or below grade level. It also assesses student performance relative to grade-level expectations and produces summative proficiency scores at year’s end – eliminating the need for a separate annual summative test.
Dale: Wasn’t through-year assessment promised once upon a time as part of the two federal assessment consortia (i.e., PARCC and Smarter Balanced)? What’s the difference this time around, if any?
Abby: PARCC and Smarter Balanced did good things for testing: they helped expand conversations about rigor and the measurement of high-quality college- and career-ready standards, raised the bar on conversations about accessibility, and reopened conversations about the power of adaptive assessments. However, both resulted in systems that, at their core, still require big end-of-year summative tests and, while they are better versions of what we already had, did not change the way students experience assessments as disruptive events.
This solution is different: it adapts and builds on what we already know about students to produce valuable information on both student growth and proficiency throughout the year, and it leverages regular check-ins that are already part of how we do school rather than requiring a separate end-of-year test that often feels disconnected from teaching and learning.
Additionally, PARCC and Smarter Balanced were consortia efforts, requiring lots of states to use the same blueprints, while this through-year solution can be configured individually to meet the standards and blueprint of each state.
Dale: What are the key differences between through-year assessment and what states and districts are typically accustomed to?
Abby: We’re looking to address multiple pain points for multiple stakeholders whose needs for the most part haven’t been met with the same solutions. State departments of education are increasingly looking for ways to reduce overall testing and support teachers while still challenging students to meet or exceed grade-level expectations. District leaders and teachers have been saying for years that they want summative test results to be timelier and more meaningful to inform their policy, program, and instructional decisions. Right now, these different needs are often satisfied by using separate, sometimes very disconnected assessments. Through-year assessment brings state and district assessments together to increase efficiency and coherence in assessment systems, while providing better data more quickly to both educators and state leaders – ultimately to support learning for students.
The data insights from through-year assessment will be a major differentiator for our users. Unlike traditional summative assessments, which are taken once a year and don’t yield results until months after the school year ends, through-year assessment will make those insights available in time for educators to actually use them in the classroom, while they can still impact students’ attainment of proficiency, and can help highlight where systems are doing a great job of helping students grow.
In some cases, current “aligned” interim assessments are designed only to predict the end-of-year tests and don’t go far enough to help teachers understand where students are, particularly if they are tracking above or below grade level. In other cases, interim assessments are used to measure where students are and to track their growth in more general terms, but don’t provide full insight into how students are performing against grade-level expectations. Through-year assessment provides both the growth and proficiency information that districts, teachers, administrators, and policymakers need to understand a more holistic picture of school and student progress.
And it’s important to note that we’re not asking states and districts to do more testing; we’re making the assessments that are already embedded in our education system more efficient. Right now, districts are already using interim and benchmark assessments to measure student progress throughout the year, and many places have embedded these into their school improvement discussions. The through-year solution streamlines this existing process to provide more information about growth and proficiency at each administration.
Dale: Some states have struggled to administer end-of-year summative assessments and have repeatedly had to forgo releasing test results (e.g., Tennessee). What is the danger in asking states and districts to do more test administration throughout the year?
Abby: Using the same system throughout the year, instead of asking students and teachers to switch between interim and summative systems, allows those users to become familiar with the technology and its challenges, rather than creating systems that rely on a single big, disconnected, make-or-break event. It will also help ensure any kinks in the system can be worked out early on so they don’t continue to cause disruptions. Our organization has a long history of using online adaptive tests to measure student academic achievement, and we currently support over 70 million student test events each year. So while we can’t promise our partners will never encounter technology issues, we aren’t going into this blind. We understand the challenges and have the team to help districts work through them as they arise.
Stay tuned for part two of this interview, which will be released on October 24th.
This interview has been lightly edited for clarity.