Game-based assessments: hollow promise or promising frontier?

Author: Dale Chu

When it comes to the future of assessments, gaming is often enthusiastically cited as a potential solution. After all, if standardized tests are boring and gaming is fun, the race to replace the former with the latter is entirely comprehensible. The aura of innovation and excitement surrounding video games, in particular, makes this an alluring approach, as does the burgeoning market for augmented and virtual solutions to today’s most pressing dilemmas—including assessments that are less static and more individualized.

Last month, an Education Week blog post attempted to make the case for more game-based assessments. Among the reasons cited by the author was a litany of familiar, if oversimplified, complaints against the current testing regime: tests cause too much stress, they reward compliance over independent thinking, and they sap the joy out of learning. In contrast, game-based assessments—as exemplified by a new initiative in Georgia—are ostensibly more authentic, healthier, and enjoyable for students, teachers, and families.

I have written previously about the need for more assessment R&D, and I have no problem with exploring the potential of leveraging digital games to improve our assessment of student performance. While PARCC and Smarter Balanced weren’t aimed at games, an intriguing startup effort highlighted earlier this year does have standardized tests like the SAT squarely in its sights. But when it comes to K-12, there are at least three factors to consider before we can say game-based assessment is the wave of the future.

First, the higher engagement these tests offer comes at a cost: they can be more time-consuming to administer. As such, game-based assessment might not be the most efficient way to measure student learning. Considering the blowback state tests have received despite their relatively nominal time requirements, it's worth asking whether educators, policymakers, and parents would be open to state tests that take even longer to administer.

Second, while game-based assessment often provides the opportunity to go deeper, it’s suboptimal for covering a broad range of topics. For example, while a traditional test often assesses a specific set of standards (e.g. numbers and operations, algebra, geometry), a well-designed game-based approach is more focused and would avoid trying to capture all of these in a single test. Greater depth can be useful for classroom formative assessments, but there are many unanswered questions as to whether this would work for state summative exams.

It’s worth taking a moment here to draw out the differences between formative and summative evaluations. Formative assessments are diagnostic in nature and usually do not have stakes attached to them; they are better suited for providing teachers with real-time feedback. Summative assessments are focused on outcomes and are designed to render a verdict as to whether mastery was achieved. They are more difficult to “gamify” because they must cover the breadth of all content taught during the academic year. That said, the two purposes are complementary: an effective assessment approach incorporates both formative and summative measures.

Third, game-based assessment doesn’t fit many of the traditional methods used to evaluate the validity and reliability of assessments. To deploy them at a larger scale, validation studies are required to ensure they measure what is intended. Research is underway, but it’s still relatively early in the process—and again, there are more questions than answers.

These challenges, which don’t include cost considerations (e.g. developing something in-house vs. off-the-shelf products, licensing fees, etc.), surface the tension between state testing systems and the next generation of assessments. The question is how to balance the need for common and reliable measures that hold all students to the same bar while encouraging new and innovative testing models. It’s a question I plan to continue exploring in this blog, as it’s central to moving beyond state testing systems as we know them today.

In the meantime, a couple of states (Louisiana and New Hampshire) have already stepped forward to push the boundaries on assessments—albeit not through digital games per se—under the aegis of ESSA’s innovative assessment provision. Two other states (Georgia and North Carolina) have submitted applications that are currently under review. Although this pilot doesn’t include additional funding or technical assistance, these efforts will be worth watching in the months and years ahead.
