By Diana Nunnaley, Director, TERC’s Using Data

March Madness annually takes over the country, or at least the media and the minds of U.S. college basketball fans who give it their frenzied attention each spring. At the same time, another March Madness is going on that does not garner the same enthusiasm and does not make national news in quite the same way. It’s the March Madness going on in schools across the country as teachers and administrators ready for spring state-initiated student accountability assessments. These tests are considered by some to definitively provide feedback on how much students have learned this year, and correspondingly – how effective their teachers are. (That second-tier “madness” could fill volumes, and I choose to let the pundits continue to hash out that one.)

There are multiple reasons to call this time during the school year “madness.” For one, the stakes are a lot higher compared with other types of student assessment cycles. Additionally, the outcomes actually drive a critical part of our national dialogue about how to help schools increase student learning and close persistent gaps between groups of students. This single spring data point, unfortunately, influences the public’s understanding (or misunderstanding) of schools and their work. And just as in the sports world, there is no shortage of armchair coaches, fueled by the spring test results, who feel qualified to tell schools what they should be doing to perform better.

Cultivating a Dream

Every basketball team in the National Collegiate Athletic Association begins the season with one dream: winning the national championship. Not all succeed. In fact, not all even get to try. Prior to March Madness, an intricate system takes into account regions of the country, local sports program guidelines, and game wins to build 16-team “brackets” that define four regional, rank-ordered lists of noteworthy teams eligible to compete for that dream.

In my education analogy, rather than selecting 16 noteworthy teams, I’ve identified 16 noteworthy factors to consider as a single “bracket” of must-have elements for making meaningful data-informed decisions. Students always rank at the top of the list. The rest can be ordered depending on your context and how successfully you have already integrated each factor into a winning school improvement scenario. They are all important. What areas immediately need more attention in your school or district?

  1. STUDENTS – who are excited about learning & reaching their potential
  2. Teachers – actively engaged in a culture of meaningful data-driven dialogue
  3. Principals – who provide resources to support effective assessment practices
  4. Data Coaches – with skills to facilitate teacher-level collaborative inquiry
  5. Professional Development to help teachers and leaders engage collaboratively with data
  6. The right data reports – disaggregated in multiple ways
  7. Item level test data – including state and district test items
  8. Formative assessments embedded into lessons
  9. Quick turnaround of assessments into results
  10. Open-ended questions – and the ability to analyze for understanding
  11. Time for teachers to analyze data every week
  12. Time to share findings, reflect, and act on implications of the findings
  13. Time to revise curriculum
  14. Time to create more rigorous lessons
  15. District support and leadership
  16. A shared vision for improvement and how to monitor success toward reaching the vision

Celebrating the Winners

In the NCAA March Madness, it comes down to one team who takes home the trophy. In my dream, there are no losers. In the world of teaching and learning, it takes all 16 factors to produce winners. The ones we want to see come out on top are all our kids and their teachers. They are the pre-chosen champions.

I’m happy to report that well-coached, school-level Data Teams can help create these winners—high-achieving schools where students reap the benefits. They increase achievement through careful and varied data analysis that reaches well beyond one spring test. They compare state test results with district benchmark assessments, student work samples, informal classroom assessments, student observations, and more. They use the accumulated evidence to focus on students’ strengths and challenges in ways that produce more rigor, better curriculum alignment, vertical grade and cross-course alignments, deeper content and pedagogical knowledge, and higher scores. And, scores aside, this work ultimately leads to more meaningful learning and more engaged students—the real measures of success.

Here’s to eliminating March Madness in education and supporting an effective culture of teacher-led improvement that helps every student achieve the dream.