Data Isn’t All It’s Cracked Up To Be

Data: It seemed like a good idea at the time.

Over at ShankerBlog, Matthew Di Carlo digs into a new study finding that teachers don’t use district-wide online data systems to do much data analysis. As he summarizes:

Teachers need time and training, not only to use the system itself, but also to act on the recommendations (e.g. “reteach”). The assessments must be “in sync” with the curriculum. And, of course, teachers need to believe that the information is going to be useful to their practice, or they are unlikely to use it.

Yup, that’s about right. I once had to administer a district-wide interim assessment that failed to take into account that the district had shifted the school year by three weeks that year. Some teachers gave the test during the official “window”, and others waited until they’d covered the test’s content. The result was tests across the district administered at very different points in the curriculum, making the results almost impossible to compare across classrooms or years.

That’s an extreme case, but it’s not surprising that teachers in general don’t use “data” as much as we might hope, and even become cynical about it.

Still, it’s a bit of a dodge to blame the failed promise of data entirely on the sorts of logistical constraints Matthew highlights. The fact is that there are real limits to the utility of the sort of data we’re talking about here. Yes, “teachers need to believe that the information is going to be useful to their practice”, but what would it really take for that condition to be met?

Consider the always-popular-in-theory notion of “reteaching”. Depending on the population in your classroom and the assessments you’re giving, it may be the case that you never – never – have 100% of your students demonstrate “proficiency” on any particular assessment or aspect of an assessment.1

That means that teachers almost always have to make decisions about whether to “move on” to different content or to “reteach” the previous content so that more students get it.

For sufficiently strong and homogeneous classrooms it may be possible to do both: to spend a little extra time reteaching with the handful of students who didn’t quite clear the bar – maybe at lunch? after school? – while moving on with the class as a whole.

In most cases, however, the choice is starker. How many students have to fail to reach proficiency before it makes sense to halt the entire class’ progression through the content? 20%? 40%?

Even for very low proficiency rates – say, 30% – the choice is not an obvious one, especially in courses where later content doesn’t presuppose mastery of previous content.2 After all, the fact that students haven’t passed a somewhat-arbitrary proficiency threshold doesn’t mean they haven’t still learned a fair amount. And however little of the previous content they’ve mastered, they’ll master even less of the subsequent content if postponing it means you spend less time on it or don’t cover it at all.3

Of course, teachers do sometimes decide – justifiably – to reteach content. But if you think for a while about how teachers actually spend their time, it becomes clear why they don’t do so more often. Yes, it’s partially about data quality and adequate time for analysis. More than that, though, it’s about the fact that the data often don’t matter.

These sorts of issues are technically captured in the “teachers need to believe that the information is going to be useful to their practice” formulation of the problem, but that phrasing makes it sound a little like the issue is that teachers’ beliefs are just in need of correction. There might be some truth to that, but it’s by no means the whole story of why teachers aren’t “using data” the way data advocates sometimes envision.

The whole story does probably involve improving how teachers, schools, and districts collect and use data. It also, however, probably involves some basic truths about the practical limits of data.

  1. In principle, you can avoid that problem by administering easier assessments, but then what’s the point? The goal is to figure out what your students know and can do, not to generate better-looking numbers. Right?
  2. Yes, in many courses it’s not strictly necessary – or even all that helpful – to master the earlier content before grappling with the later content. I think there’s sometimes a naive intuition to the effect that good courses should be organized to require linear progression through the material, but classes typically don’t work that way and that’s mostly fine.
  3. Pacing guides and assessment calendars sometimes make these decisions for teachers, but many of the trade-offs would be real even in the absence of external accountability.