
The future(?) of state assessment (Part II): A conversation with NWEA's Abby Javurek
By Dale Chu
Abby Javurek is the Senior Director of Large Scale Assessment Solutions at Northwest Evaluation Association (NWEA), an education services organization probably best known for their computerized adaptive "MAP" tests. They recently announced the development of an adaptive, "through-year" assessment, a new solution that NWEA says eliminates the need for states to administer an end-of-year summative test. They are currently partnering with Nebraska and a consortium of districts in Georgia as early adopters. (Readers might recall that the effort in Georgia is part of ESSA's demonstration pilot.) I recently talked with Abby about their announcement and what it might mean for the future of state assessment. In part two of this interview, Abby discusses how teachers view NWEA's new effort, what it might mean for equity, and the next steps to look out for.
Dale: Could multiple assessments throughout the year have an unintended consequence of more “teaching to the test”? In other words, as time passes, teachers might get a better sense of the content of each exam. As a former teacher, it seems to me the incentives here could get a little wacky. What am I missing?
Abby: We know that there’s no substitute for the knowledge teachers and school leaders have of the kids in their schools. We’re designing through-year assessment to better support teaching and learning and retain local control of curriculum and pacing. Because through-year assessment is adaptive, it does not require districts to use the same pacing guides or curricula. Each assessment will draw from the full summative blueprint and adjust based on the learner’s performance on the previous assessment. This means students will have multiple opportunities to demonstrate proficiency throughout the year, and teachers can focus on teaching what students need now instead of cramming to get a specific scope covered just for a test event.
You asked about PARCC earlier – some of the challenges to the initial PARCC vision actually stemmed from the fact that their approach required a specific scope and sequence to be taught in each window, which raised concerns about loss of local control over curriculum and pacing. While we can certainly collect curriculum and pacing information from districts in states that implement through-year assessment in order to optimize our test engine for efficiency and opportunity to learn, students are not penalized if they haven't yet learned a concept at the time it is tested. They simply get another chance to demonstrate proficiency on the next test.
For example, if a concept is introduced on the fall assessment before it has been taught or learned, the student will have another chance to show mastery on the winter assessment (and another chance on the spring assessment, if needed). The goal is simply to increase testing efficiency; districts will retain local control of curricula and pacing, eliminating some of the pressure to “teach to the test.”
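To make the carry-forward idea concrete, here is a minimal, illustrative sketch of a through-year adaptive loop. This is not NWEA's engine: it is a toy simulation under a simple Rasch (one-parameter IRT) model, and the item bank, step-size update, and three-window structure are all assumptions chosen for illustration.

```python
import math
import random

def rasch_prob(ability, difficulty):
    """P(correct response) under the one-parameter (Rasch) IRT model."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def pick_item(bank, estimate):
    """Choose the remaining item whose difficulty best matches the current estimate."""
    return min(bank, key=lambda d: abs(d - estimate))

def run_window(bank, estimate, true_ability, n_items=10, step=0.4):
    """One testing window: administer items adaptively, nudging the estimate after each response."""
    for _ in range(n_items):
        difficulty = pick_item(bank, estimate)
        bank.remove(difficulty)  # no item reuse across windows
        correct = random.random() < rasch_prob(true_ability, difficulty)  # simulated response
        estimate += step if correct else -step  # crude update; real engines use likelihood-based estimation
    return estimate

random.seed(1)
bank = [random.uniform(-3.0, 3.0) for _ in range(300)]  # assumed item difficulties
estimate, true_ability = 0.0, 1.2                       # start at the population mean
for window in ("fall", "winter", "spring"):
    # Each window starts from the estimate carried over from the previous one.
    estimate = run_window(bank, estimate, true_ability)
    print(f"{window}: ability estimate = {estimate:+.2f}")
```

Run as-is, the estimate tends to climb toward the simulated student's true ability across the three windows, which is the basic intuition behind adjusting each assessment based on the learner's performance on the previous one.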
Dale: Have you had a chance to preview this system with educators? What’s been the reaction so far?
Abby: Teachers and administrators have been, and continue to be, key partners in our development of the solution. Teacher advisory boards in our pilot states are engaged in the design and development of much of this work, and have helped us to refine the vision and development of the through-year model. We’re working closely with our early adopter partners in Georgia and Nebraska and have had many conversations across other states with leaders at the district and state level.
We are also conducting focus groups to help us develop innovative reporting approaches that make this information more accessible for teachers so that they can spend less time deciphering data and more time putting insights into action for their students.
Dale: Talk a little bit about the innovative assessment "engine" under the hood. What does it entail?
Abby: Our test engine is unlike anything the market has used before. We call it our "constraint engine," but it should really be called an "unconstrained engine" because of all we can do with it. It allows us to ensure that students receive test questions that are aligned not only to a state's standards, but also to a state's summative blueprint. We also work closely with educators in the state to define range achievement levels, which articulate how skills become more sophisticated as learning within each standard deepens. In configuring the solution, we use items aligned to these skill levels to produce assessment results that are meaningful and actionable for educators. Down the road, we even expect to be able to input information on curriculum and pacing to help optimize the tests for students and further enhance the value of the assessment data for teaching and learning.
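As a rough illustration of what blueprint-constrained selection might look like, here is a hypothetical sketch. The blueprint, standard codes, and item bank are all invented, and the greedy quota logic merely stands in for whatever optimization the actual constraint engine performs.

```python
from collections import Counter

# Hypothetical blueprint: required item count per standard on each form.
BLUEPRINT = {"NBT.1": 3, "NBT.2": 2, "OA.3": 3}

# Hypothetical item bank: (item_id, standard, difficulty).
BANK = [
    ("i01", "NBT.1", -1.0), ("i02", "NBT.1", 0.2), ("i03", "NBT.1", 1.1),
    ("i04", "NBT.1", 2.0),  ("i05", "NBT.2", -0.5), ("i06", "NBT.2", 0.8),
    ("i07", "NBT.2", 1.5),  ("i08", "OA.3", -1.2),  ("i09", "OA.3", 0.0),
    ("i10", "OA.3", 0.9),   ("i11", "OA.3", 1.8),
]

def build_form(bank, blueprint, ability):
    """Greedy constrained selection: fill each standard's blueprint quota,
    preferring items whose difficulty is closest to the student's ability."""
    counts = Counter()
    form = []
    # Walk the bank from best difficulty match to worst, taking an item
    # only while its standard still has unfilled quota.
    for item_id, standard, difficulty in sorted(bank, key=lambda it: abs(it[2] - ability)):
        if counts[standard] < blueprint.get(standard, 0):
            form.append(item_id)
            counts[standard] += 1
    return form

# A mid-ability student gets difficulty-matched items while still
# covering every standard the blueprint requires.
print(build_form(BANK, BLUEPRINT, ability=0.3))
```

A production engine presumably solves a much richer constrained-optimization problem (range achievement levels, item exposure control, and so on), but the quota idea captures the core of aligning an adaptive test to a fixed summative blueprint.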
Dale: EdSurge recently ran an article on the convergence of formative and summative assessment. What are your thoughts on the author’s thesis?
Abby: Obviously, I share much of David's excitement! There is a lot of promise in these approaches to innovation that seek to bridge the gap that has grown between states and districts in the world of assessment. Much of the work going on in the innovative assessment space right now is really focused on how we bring pieces of the system together to tell the whole story. The consortia assessments were powerful in that they helped open up discussion about why the traditional way of assessing students isn't enough in a modern world; but change can be frightening, especially in education, and I think that's largely why the consortia assessments didn't go as far in the innovative space as many had hoped.
Everyone working in this space is focused on maximizing support for our kiddos and recognizes that, even with the advancements we have, we can still do better to meet our students where they are and lift them up. This doesn't mean there aren't challenges to overcome, and certainly in some places policy needs to shift to allow some of these more flexible models to take hold, but the right conversations are happening across the country to think through how we support these systems.
Dale: Some have expressed equity concerns regarding the broader effort to move beyond the current testing regime. As states consider new options like NWEA’s, is there a risk that we might inadvertently sacrifice some of the features (e.g., disaggregated data by student group) that have been important to advocates?
Abby: In practically every conversation throughout our development process, we look at equity through two lenses: equity in outcome and equity in opportunity.
Equity in outcome is what we usually talk about when we think about traditional summative tests. Obviously, this matters tremendously: we need to set the bar high for all students, measure against it to make sure every student and system is reaching for it, and never allow ourselves to inadvertently lower it for some students.
However, equity in opportunity is just as important. All students deserve the opportunity to learn, and to do that, they need to be met where they are so they can be appropriately challenged to grow throughout the year. We're intentionally building through-year assessment to provide nuanced information showing how students progress from beginning to sophisticated levels of understanding within each standard, so results will help teachers find the point of productive struggle for every student.
Equity must be a core tenet of any innovative system if it is to succeed. Certainly, this solution still provides reporting at the school, district, and subgroup levels to attend to gaps in our educational system. But it also unlocks the potential to shift conversations about equity beyond a simple "proficient or not" determination.
Dale: Last month, I did an interview with FutureEd’s Lynn Olson and asked her about through-year assessments. Some of the concerns she raised revolved around whether these tests cover the depth and breadth of state standards over the course of the school year. Do you share these concerns?
Abby: When systems are designed thoughtfully, measuring against the fullness of the blueprints can certainly be accomplished. I think it's healthy for folks to be skeptical, and important for innovative solutions to be transparent about how they align to, and ensure coverage of, the appropriate grade-level standards.
The system that we’re building with through-year assessment will yield deeper insights than are currently available on students’ growth toward proficiency in state standards. This information can help teachers meet struggling students where they are to scaffold them toward standards-based learning targets and help advanced students extend into above-grade content. The solution meets state-level requirements by ensuring that the full summative blueprint – required by federal law – is covered over the course of the three assessments, producing summative proficiency scores for accountability at year’s end.
Dale: Now that NWEA has formally announced through-year assessment, what's the next step?
Abby: The state of Nebraska, for which we currently provide interim and summative assessments, will be transitioning to through-year assessment over the next couple of years. In 2020-21, they'll engage in research studies to inform their transition, which is planned to begin in 2021-22. A consortium of districts in Georgia will also engage in a research pilot of through-year assessment in 2020-21, with plans to move to field testing in 2021-22. This work is part of Georgia's participation in the Innovative Assessment Demonstration Authority (IADA); Georgia's application was approved in July 2019.
We are building an assessment that takes an approach unlike any used before, and we are committed to fully testing it with our partners before making it available more broadly. We will make it generally available to all interested state partners after we've completed research studies and field testing.
This interview has been lightly edited for clarity.