My Five Most Popular Posts of 2014 (And a Couple of Other Favorites)

My writing slowed down quite a bit this year, but I still had (just) enough posts on this site to justify producing a ‘top 5’ list:

1. There is Probably No Crisis in American Education. This post was particularly popular among “reform critics”, but my view is that it really cuts both ways in the education reform debates.

2. Why Education Reform is Probably Not the Best Way to Fight Poverty. My personal favorite from my writing this year, and one that provoked strong reactions. I think it has held up very well to criticism, I stand by it, and it remains highly relevant to education reform debates and politics.

3. More Evidence of the Trouble with ‘Student-Centered’ Teaching. This is also one of my favorites from 2014 as it centered on one study that allowed me to tie together a few of my favorite education hobby-horses.

4. Reform Math Went Poorly in Quebec. This post was about a fascinating and important study that reflected very poorly on reform math and progressive education more generally. We should still be talking about this study, both because its findings are important and because it’s the sort of large-scale real-world implementation study that education needs more of.

5. For Reformers: An Important Paper on Worker Compensation and Incentives. This post nicely illustrates why, as sympathetic as I am to a lot of reformy positions in education, I don’t identify with the reform movement in general. The problem is myopia. When reformers talk about American education, they generally think only about 1) America and 2) education. This is unfortunate because other countries and other sectors deal with many very similar problems and neglecting them leads to some very confused – or at least highly simplistic – thinking about American education.

And here, as a little bonus, are my favorites among the pieces I wrote in other venues this year:

What Do We Really Know About Eva Moskowitz’s Success? Written for the Fordham Institute. I don’t do anything especially complicated here, but I think I do lay out reasonably clearly the questions about Success Academy charter schools that we don’t have answers to and often don’t even seem interested in asking.

My Goodbye & Retrospective at This Week in Education. If you look at all my writing, it turns out that at some level I’m just saying a few things over and over again. But I really like those things!

Posted in Education, Education Reform, Teaching & Learning | 1 Response

Reform Math Went Poorly in Quebec

Starting in 1999, Quebec implemented an ambitious curricular and instructional program at all schools in the province. Broadly speaking, this program can be considered “constructivist”, and the math program in particular seems to have been of the “reform math” variety. To get a sense for what the reformers had in mind, they described wanting students to increasingly

find answers to questions arising out of everyday experience, to develop a personal and social value system, and to adopt responsible and increasingly autonomous behaviors


Instead of passively listening to teachers, students will take part in active, hands-on learning. They will spend more time working on projects, doing research and solving problems based on their areas of interest and their concerns. They will more often take part in workshops or team learning to develop a broad range of competencies.

A little over a decade later, a team of economists went in to see how these reforms were going. (An older, ungated version of the paper can be found here.)

Apparently it did not go well.

Catherine Johnson has a good rundown, but I wanted to highlight a few things in particular.

First and foremost, the overall results in terms of student learning appear to have been quite bad:

Our data set allows us to differentiate impacts according to the number of years of treatment and the timing of treatment. Using the changes-in-changes model, we find that the reform had negative effects on students’ scores at all points on the skills distribution and that the effects were larger the longer the exposure to the reform.

This study provides support for my pedagogy of privilege hypothesis, namely that “progressive” teaching may be acceptable for the strongest students, while most students, and especially the weakest students, are likely to flounder:

In grade 2, only students in the 75th percentile appear to be significantly impacted by the reform. However as we move from grade 2 to grades 9–10, the effect also becomes significant for lower and average performing students. In grades 9–10, the magnitude of the coefficients is the largest for students in the 25th percentile, and slowly decreases as one moves toward the upper tail of the distribution. Looking at the top of the distribution (90th percentile), we also find negative effects across all grades, but the estimates are generally not significant. It is possible that the reform did not harm top performers. It is also possible that the reform did impact top performers, but that the number of observations at this mass point is too small to obtain precise estimates…

Lower performing students were impacted more severely, and the effects grew larger as students progressed from primary to secondary school. These large negative effects are worrying, and suggest that the reform may have harmed those most in need.

Notably – and ominously – the reforms in Quebec seem to be aligned with reforms that have been advocated in other places, including the United States:

Evidence…suggests that most OECD countries are moving away (or have long moved away) from the traditional (more academic) teaching approach. More specifically, the teaching approach promoted by the Quebec reform is comparable to the reform-oriented teaching approach in the United States. As of 2006, this approach was widely spread across the United States (although more traditional approaches remained dominant) and it was supported by leading organizations such as the National Council of Teachers of Mathematics, the National Research Council, and the American Association for the Advancement of Science.

Katharine Beals predicted – correctly, in my experience – that advocates of reform math would respond by claiming that there must have been “implementation” problems. This is not an inherently unreasonable argument to make, but it has a whiff of wishful thinking about it and is not clearly supported by the evidence.

Consider, for example, that these changes in Quebec were rolled out in an extraordinarily cautious fashion by the standards of education reform. (The paper includes figures detailing the year-by-year implementation schedule.)

The implementation spanned more than a decade, reaching only one or two grades per year, and each stage was preceded by a year of planning, training, and professional development:

Extensive training was provided to support the new program. The year prior to the implementation in Elementary Cycle 1, teachers, principals and government officials began the task of preparing the implementation of the reform. Sixteen pilot schools along with several other Lead schools in the English sector experimented with the key concepts of the program of study, as well as school organizational approaches that could be best suited to the strategies required to maximize the effectiveness of the learning environment.

In June 2000, principals in conjunction with teachers began developing their implementation plans for September 2000. Each school was allowed to develop its own approach to deal with the implementation since no single approach was believed to meet the needs of each school across Quebec…In 2000, all schools, both elementary and secondary, participated in some way to the development of the implementation of the reform despite the fact that it did not affect all levels of schooling at the time. Guides for teachers were produced. The implementation was staggered over many years (grades), giving time for teachers to adapt to the new programs.

It may be that the reforms could have been implemented more effectively, but this is nevertheless a lot of preparation. If a decade of gradual phase-in with elaborate supports is inadequate for effective reform math implementation, arguably the problem is not with “implementation” per se.

The strongest support for the “poor implementation” hypothesis is the fact that the researchers found that as time went on, younger students seemed to experience less of a negative effect from the instructional reform. This could suggest that implementation was getting better over time, but as the authors note this finding is neither unambiguous nor encouraging:

We find that grade 2 students, 8 years after the implementation of the reform, no longer seem to experience a significant negative effect…The reform being ambitious, it is possible that it took a fair number of years for teachers to develop the necessary skills to fully deploy all aspects of the reform. It may also be the case that, observing the decline in students’ academic performance, teachers informally decided to reintroduce some of their pre-reform teaching approaches, and set aside in part or in totality the reform approach…[W]e are unable to identify which of these two explanations is dominant. In any case, this finding implies that at best the provincial reform had no long run effects on the development of procedural mathematics skills.

In other words, teachers may have gotten better at teaching math using constructivist techniques or they may have given up on trying. In either case, the authors were unable to find any significant positive effect of the reform for young students even after their schools were 8 years into implementation.

To see if their findings are limited by the particular math test taken by students in Quebec, the authors also look at TIMSS and PISA results. Those international tests assess a broader range of skills and allow them to compare trends in Quebec to trends in neighboring Ontario. The patterns are similar. For TIMSS:

Grade 8 students’ performance shows a similar pattern when results from 2007 and 2011 are compared with results from all previous years: Quebec’s performance in both mathematics and sciences is trending downwards, while the performance in Ontario is increasing or stable…

Estimated effects are large and negative in all cases. They are significant in mathematics in both grades, but only in grade 8 in science.


The ERES project has recently produced two reports comparing the math knowledge and the French proficiency of grade 11 students exposed to the reform to that of pre-reform ERES. The math test uses 25 questions from exercises administered during the 2003 and 2006 PISA assessments…In sum, students in the reform group scored slightly lower on average, with a larger difference in geometry and algebra. As for the French proficiency test, they do not find any significant differences, but almost 30% of grade 11 students in the two groups did not complete the assessment.

Overall, the evidence from TIMSS and PISA suggests a worsening of Quebec’s students’ performance post reform in mathematics and at best a standstill in science and French.

The authors did not limit themselves to academic indicators; they also looked at the effect of the reform on (self-reported) indicators of student behavior. Again, they found some negative effects and – at best – some null effects:

We find that the vast majority of coefficients across all grades and outcomes suggest a negative impact of the reform on students’ behavior. With a few exceptions, these effects are rarely significant. In grades 5–6 and 7–8, we find a significant worsening of the situation for the following measures: hyperactivity, anxiety, physical aggression, interpersonal competencies and emotional quotient. In grades 5–6 and 7–8, the strongest evidence that the policy had an impact on behavior is for hyperactivity and anxiety (more than 50% of a std. dev.). The effects are rather strong, positive (more hyperactivity and anxiety) and robust to sample and method. In grades 9–10, the estimated effects are significant only for prosocial behavior, physical aggression and property offense.

Our results are in line with those reported by ERES, which found no effect on social adjustment, personal and emotional adjustment and intrinsic motivation. They also found that post-reform students felt less well-adapted to secondary school, male students were found to have lower self-esteem, and at risk students were less engaged in school work. We therefore conclude that the reform did not improve the behavior of students measured using the self-reported NLSCY behavioral indicators.

Needless to say – and as the authors themselves acknowledge – this study is not really “definitive” in any meaningful sense. It was only able to measure the short- and medium-term effects of this particular reform on some academic abilities and, to a lesser extent, on some behavioral indicators. And rigorous studies of large-scale curricular reforms are few and far between, so we don’t have a huge body of research to pull from, and one study should never be relied on too heavily.

Nevertheless, there is considerable zeal in some quarters for widespread adoption of constructivist, progressive, or “student-centered” teaching approaches. Advocates of such approaches should find these results concerning.

Posted in Education Reform, Teaching & Learning | Tagged , , | 6 Responses

The Common Core Will Not Double The Dropout Rate

John Thompson, citing a report from the Carnegie Corporation and doubling down here, claims that the Common Core standards are going to cause the high school dropout rate to double.

So, it is doubly important that Carnegie commissioned McKinsey to use the reformers’ data “to test whether or not it might be possible to avoid large drops in graduation rates using human capital strategies alone.”

A year ago, Carnegie and McKinsey concluded, “The short answer is no: even coordinated, rapid, and highly effective efforts to improve high school teaching would leave millions of students achieving below the level needed for graduation and college success as defined by the Common Core.”

They determined that the six-year dropout rate would double from 15% to 30%. If, as Carnegie projects, the four-year graduation rate drops from 75% to 53%, that would be a blow that Common Core probably couldn’t survive.

If the dropout rate were to double that would indeed be horrific, which is precisely why it won’t happen.

It helps first to understand how the Carnegie Corporation arrived at its conclusions. Essentially, the authors assume that under the CCSS, roughly twice as many students (67% vs. 34% now) will be considered “below grade level” when they enter high school. If, like today, half of such students drop out, we can expect the dropout rate to double right along with the number of students “below grade level”.
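Spelled out as a quick sketch, the logic looks like this (the percentages are the report’s figures as summarized above; the function and its simplifying assumptions are mine):

```python
# The report's implicit model: assume (crudely) that half of students
# who enter high school "below grade level" drop out, and that
# essentially all dropouts come from that group.
def projected_dropout(below_grade_share, dropout_rate_if_below=0.5):
    return below_grade_share * dropout_rate_if_below

print(f"today (34% below grade level): {projected_dropout(0.34):.0%}")      # 17%
print(f"under CCSS (67% below grade level): {projected_dropout(0.67):.1%}") # 33.5%
```

The first number lands close to the observed 15% rate, and doubling the “below grade level” share mechanically doubles the projection.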

This is crude, to say the least.

The core of the confusion is this: Even if we assume that the Common Core standards are considerably tougher than the state standards they are replacing, it does not follow that students will be held to substantially higher requirements in practice.

For one thing, whether students are identified as “below grade level” depends at least as much on the Common Core tests – and associated cut scores – as it does on the standards themselves. While there is pressure from some quarters to raise cut scores on these tests and thereby identify fewer students as “proficient”, there will also be downward pressure on cut scores if officials are reluctant to tell many more families and communities that their kids are not as “smart” as they thought they were.  The authors of the Carnegie Corporation report assume that Common Core cut scores will end up matching NAEP’s, but this is basically a guess on their part.

More importantly, whether a student is identified as “below grade level” in this way doesn’t tell us much about the extent of her academic challenges in high school. The biggest academic challenges she is likely to face are course expectations and – depending on where she goes to school – exit exam requirements, and those are largely independent of Common Core tests.

Consider course requirements. While many CCSS supporters would like teachers to make their classes harder in proportion to the new standards and tests, many teachers, especially of struggling students, will know or quickly determine that drastically raising their course expectations will cause kids to flounder. As a result, they will “soften” the expectations for students, lowering the difficulty of the work and giving passing grades for work that is, in some sense, “below grade level”.

Indeed, this is how it works now: despite the fact that all classrooms in a state are theoretically held to the same content standards, not all classes are equally rigorous, nor are students within a single class all held to the same absolute expectations. This is partly because teachers vary in their interpretations of the standards and their perceptions of students, but it is also because teachers differentiate on the basis of their students’ needs and abilities. Teachers do not, as a rule, want to see their students struggle and fail.

What this means is that in practice many students – and academically vulnerable students in particular – are not likely to see dramatic changes in the difficulty of their courses. They might engage in different sorts of activities or cover different content, but their teachers will adjust the difficulty of the course so that students are not completely overwhelmed.

A similar logic applies to exit exams. While such exams should be aligned to the Common Core, nothing requires that they be as challenging as the CCSS tests. Whether or not a “below grade level” student can “pass” an exit exam is a choice made by adults, most of whom are disinclined to subject children to unnecessary – or politically awkward – failure.

In other words, to argue that the Common Core will double the dropout rate, you have to assume – implausibly and uncharitably – that teachers and policy-makers are heartless, oblivious automata who will not respond to the effects of standards on students.

Now, it’s fair to say that Common Core supporters have sometimes tried to have it both ways here, claiming that the new standards will “raise the bar” for students and, simultaneously, that kids will not suffer as a result of greater challenges in school.

And it’s also likely that, Common Core notwithstanding, our dropout rates will increase in the coming years since they are currently at an all-time low and an improving economy will give marginal students better alternatives outside of school.

For the record, it is entirely possible that the CCSS will contribute modestly to future increases in the dropout rate. The Common Core will – by design – make some courses more difficult for many students, and for marginal students that may be enough to nudge them out of school altogether.

There is, however, no reason to think that the Common Core will “double” the dropout rate.

Posted in Education, Education Reform, Teaching & Learning | Tagged , | 6 Responses

We Need MANY More Teachers than Doctors or Lawyers

It was a little surprising to see the AFT take a stand against the edTPA teacher licensing test given President Randi Weingarten’s support for similar “bar exams” for teachers, and it got me thinking about “professionalizing” teaching in general.

That teaching needs to be “professionalized” is a mostly-platitudinous claim, but one you often hear from both sides in the education reform debates.

People often reason about professionalization by analogy: that we need to change the way teachers are certified to make the profession more similar to law or medicine.

This is probably a bad way of thinking about teaching.

It’s easy to forget, but the United States actually needs a lot of teachers. In 2012, public and private K-12 schools employed roughly 3.7 million teachers.

For comparison, in 2012 the United States employed approximately 835,000 doctors and 1,268,000 lawyers.


In other words, we need nearly three times as many teachers as we have lawyers and more than four times as many teachers as doctors.

Another way to think of it is this: 3.7 million teachers represents nearly 2.8% of the civilian labor force and 8% of all college graduates in the labor force in 2010.
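The ratios are easy to check from the employment figures above:

```python
# Rough 2012 employment figures cited above.
teachers = 3_700_000
doctors = 835_000
lawyers = 1_268_000

print(f"teachers per lawyer: {teachers / lawyers:.1f}")  # ~2.9
print(f"teachers per doctor: {teachers / doctors:.1f}")  # ~4.4
```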

And teachers, of course, make substantially lower salaries than doctors or lawyers, which will complicate any effort that reduces the profession’s attractiveness or throws up additional barriers to entry.

So it’s really not obvious that it’s possible to make teaching much like medicine or law even if we wanted to.

Posted in Education Reform, Teacher Training | 2 Responses

How Much Do Reformers Think Job Security Is Worth To Teachers?

A couple of weeks back at This Week in Education I tried to explain why the Vergara decision in California doesn’t have easily-predictable major consequences, even if you hand-wave away all of the inevitable legal wrangling and assume tenure and seniority rules for teachers do end up changing significantly. Partisans really don’t like thinking – or at least talking – about trade-offs1 but they almost always matter in the long-term.

The upshot is that, even if you operate with an extremely naive model in which only student achievement outcomes matter, it’s not obvious that tenure reform2 will have large net benefits:

There will probably be some good effects and some bad effects of tenure reform, and much depends on exactly how the tenure rules are changed and how the state, districts, and schools respond. For example, will districts raise salaries in response to limitations on tenure? Will administrators find work-arounds to reduce energy spent on evaluations?

As a result, it’s hard to know which effects, if any, will dominate in the long-term. They may largely cancel each other out.

One of the central tensions for reformers when it comes to improving teacher quality is that, on the one hand, they believe teachers are fighting desperately for excessive job security but, on the other hand, that you can substantially reduce that job security without making teaching significantly less attractive.

In theory this is not impossible. Making it work, however, requires admitting that job security is a benefit for teachers and that taking it away will – all else equal – make being a teacher less appealing.

How much less appealing? I think it’s very hard to say, but via Tyler Cowen, Lee Ohanian makes a (very rough) estimate [emphasis mine]:

I use historical data from the Bureau of Labor Statistics to identify the probability that a worker in the model is involuntarily separated from their job, which is about a 4 percent chance per month, average duration of unemployment, which is about 3.5 months, and the probability of continuing employment, which is about 96 percent.

For the case of the public sector, the probability of involuntary separation is just 1.3 percent, which is one-third as high as the probability in the private sector case. I then calculate the difference in compensation between the public sector (low unemployment case) and the private sector, such that a worker would be indifferent between working in either sector. I find that workers would be willing to work for about 10 percent less compensation in the public sector, given the additional benefit of much higher job security. This estimate is conservative in terms of considering today’s labor market, as average unemployment duration today is much higher than its historical average.

In other words, Ohanian thinks you could use job security as a means of attracting employees into the public sector even if you offered salaries roughly 10% lower than in the private sector because the job security itself has some value.
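You can get into the same ballpark with a much simpler, risk-neutral sketch. Ohanian’s actual model is richer (risk aversion, unemployment benefits, search dynamics), so the parameters below – taken from the quote – only roughly reproduce his number; the function is my own simplification:

```python
# A stripped-down, risk-neutral version of the indifference calculation
# Ohanian describes. Parameters are the ones quoted above.
def unemployed_share(monthly_sep_prob, avg_unemp_months):
    """Long-run share of time spent unemployed, given how often jobs
    end and how long it takes to find a new one."""
    job_finding_rate = 1.0 / avg_unemp_months
    return monthly_sep_prob / (monthly_sep_prob + job_finding_rate)

u_private = unemployed_share(0.04, 3.5)   # ~12% of the time
u_public = unemployed_share(0.013, 3.5)   # ~4% of the time

# Indifference for a risk-neutral worker: equal expected earnings.
#   w_public * (1 - u_public) = w_private * (1 - u_private)
wage_ratio = (1 - u_private) / (1 - u_public)
print(f"public/private wage at indifference: {wage_ratio:.2f}")  # ~0.92
```

Even this bare-bones version implies workers would accept roughly 8% less pay in exchange for public-sector job security; accounting for risk aversion, as Ohanian does, plausibly pushes the figure toward his 10%.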

Now, Ohanian is a conservative economist writing for a conservative think tank, so he unsurprisingly concludes that you can do away with these job security benefits because public sector workers are so wildly overcompensated to begin with that the marginal value of the “excess” compensation is very small.

My sense, however, is that many education reformers – who are often left-leaning – don’t want to say that at all. On the contrary, they will often say that we want the “best and brightest” – i.e., significantly above-average workers – to go into teaching and that teachers should receive more compensation (contingent on performance).

The trouble is that, as Rick Hess puts it, courts are good at “access, not quality”. The Vergara lawsuit is ultimately a very crude way of enacting policy change; the decision does not require, for example, any measures to compensate teachers for reductions in job security.

Reformers may want salary increases but since they weren’t judicially mandated – and do not otherwise appear to be forthcoming – we have to consider a world without them.

I’m not opposed, in principle, to carefully paring back tenure protections for teachers in exchange for higher salaries or other benefits. I could therefore be made more sympathetic to many reformers’ projects if they seemed to be taking these trade-offs more seriously.

  1. At this StudentsFirst forum I submitted to the panelists a question about trade-offs, and was assured all questions would eventually be answered in person or online, but never got a response. []
  2. You can perform a similar exercise for changes to seniority rules. []
Posted in Education Reform, Teacher Compensation | Tagged , | Leave a comment

More Evidence of the Trouble with ‘Student-Centered’ Teaching

I’ve long had many related-but-separate complaints about ‘student-centered’ teaching practices. A new study in Educational Evaluation and Policy Analysis lends new evidence to several of them. (You can also check out good write-ups from Sarah Sparks and Bettina Chang.)

The authors used data on a large number of first grade students to see what strategies their teachers used to teach them math. They then looked to see whether teachers tended to use different strategies when students had stronger or weaker math skills, and then grouped these strategies together based on whether they would normally be considered ‘teacher-directed’ or ‘student-centered’.

Their methods – including an elaborate set of statistical controls for variables like student SES and prior achievement – also allowed them to make tentative causal inferences about which teaching strategies seem to be more effective for students who were stronger or weaker in math to begin with.

The results are mostly unflattering to student-centered approaches.

Student-centered Teaching Can be a Pedagogy of Privilege

A couple of years ago I claimed that teaching methods typically considered ‘student-centered’ together represent a ‘pedagogy of privilege‘; such methods might be good – or at least good enough – for relatively strong students, but they often do not meet the needs of students with weaker skills.

The authors of this new study reach a basically similar conclusion, at least in regards to first grade math instruction:

Controlling for many potential confounds, we also found that only more frequent use of teacher-directed instructional practices was consistently and significantly associated with residualized (value added) gains in the mathematics achievement of first-grade students with prior histories of MD [i.e., mathematics difficulties]. For students without MD, more frequent use of either teacher-directed or student-centered instructional practices was associated with achievement gains. In contrast, more frequent use of manipulatives/calculator or movement/music activities was not associated with significant gains for any of the groups.

An important contribution of our work is that we find that teacher-directed instructional practices are associated with achievement by both students with a prior history of persistent MD, as well as those with a prior history of transitory MD. In contrast, other, more student-centered activities (i.e., manipulatives/calculators, movement/music) were not associated with achievement gains by students with MD.

In other words, the most fortunate students will manage one way or the other but the less fortunate kids are not well-served by student-centered approaches.1

Student-centered Teaching is Attractive in Low-Skill Settings

Despite their inappropriateness for struggling students, I’ve also hypothesized that student-centered approaches may – paradoxically – be more favored when students have fewer or weaker skills.

My guess was that student-centered approaches can obscure skill gaps, which tend to be more salient in low-skill classrooms. When students are mostly proficient or advanced, teachers, administrators, and parents tend to have plenty of independent verification that students are skilled; ambiguous, student-centered activities are not relied on for demonstrations of mastery. With lower-skilled students, adults are more likely to be worried about their students’ skills, because much of the available evidence (e.g., test scores, independent classwork) suggests those skills are absent or weak. When students engage in student-centered activities, they can easily give the illusion of proficiency – talking to one another, handling materials, and so on –  especially if you don’t examine their work too closely or don’t know what you’re looking for.  And it’s easy to interpret ambiguous evidence of learning favorably if you really want to see proficiency (as most educators do).

This new study finds evidence consistent with my theory, at least for some student-centered teaching strategies:

We found no significant relation between the percentage of MD students in the classroom and the frequency of teacher-directed or student-centered instructional activities. However, we did find that…classes of students with higher percentages of MD students were more likely to be taught these skills and with instructional practices emphasizing using manipulatives/calculators and movement/music. As reported below, these instructional activities…were not associated with mathematics achievement gains by students with MD.

Regardless of the reason, however, it seems that teachers are choosing to use less effective methods especially with those students who need the most help.

Student-centered Teaching is not Obviously Research-based

Of course, if you ask adults who favor student-centered methods, they will very often say that those methods are ‘research-based’. There is some sense in which this is true, at least to the extent that you can find seemingly-reputable education studies to support almost any instructional decision.

The trouble is that a great deal of education research is ideologically-motivated, and well-controlled studies of instructional effectiveness are difficult to perform in any case. So how strong, really, is the research base for student-centered teaching?

This new study suggests it is probably not as strong as it is often made out to be:

Some types of instructional practices are commonly considered “evidence-based,” and so presumably their use by teachers should result in increased mathematics achievement. For example, Baker, Gersten, and Lee’s (2002) synthesis of researcher-directed intervention studies yielded a weighted ES of .66 for the use of structured peer tutoring on low-skilled children’s mathematics achievement. Additional syntheses also support peer tutoring as an evidence-based practice (Elbaum, Vaughn, Tejero, & Watson, 2000; Mathes & Fuchs, 1994). Yet our estimate of student-centered instruction, which includes peer tutoring, was statistically non-significant when used with students with prior histories of MD (Guarino et al. [2013] also reported a statistically non-significant finding for peer tutoring).

The authors suggest this might be related to implementation fidelity problems with student-centered approaches, and I suspect that’s a factor.2 It’s also possible, though, that much of the underlying research is just not as strong as we’d like to begin with.

To be clear, nothing here demonstrates that any particular ‘student-centered’ approach doesn’t have its place, even potentially in classrooms with large numbers of struggling students.

This study is, however, more evidence that many traditional, ‘teacher-centered’ approaches are often unfairly maligned and underutilized.

  1. In fact, calling those approaches ‘student-centered’ at all seems presumptuous.
  2. This is not exactly a ringing defense of student-centered approaches; if they are harder to implement, so much the worse for them.
Posted in Teaching & Learning

Explained Variation Is Not A Measure of Importance

Back in early April the American Statistical Association put out a “Statement on Using Value-Added Models for Educational Assessment”.

Last month, Raj Chetty, John Friedman, and Jonah Rockoff issued a response, in part because so many commentators seemed to misunderstand the ASA statement and in part because the ASA seemed not to have incorporated some of Chetty et al.’s most recent research.

Diane Ravitch’s unimpressed follow-up involves a few all-too-common misconceptions:

What do Chetty, Friedman, and Rockoff say about the ASA statement? Do they modify their conclusions? No. Did it weaken their arguments in favor of VAM? Apparently not. They agree with all of the ASA cautions but remain stubbornly attached to their original conclusion that one “high-value added (top 5%) rather than an average teacher for a single grade raises a student’s lifetime earnings by more than $50,000.” How is that teacher identified? By the ability to raise test scores. So, again, we are offered the speculation that one tippy-top fourth-grade teacher boosts a student’s lifetime earnings, even though the ASA says that teachers account for “about 1% to 14% of the variability in test scores…”

The argument is that if teachers account for only a small fraction of the variation in student test scores, teacher quality is probably not a useful lever by which we can improve education outcomes.

This is wrong for at least three reasons.

First, to know whether 1%-14% is a lot of variation to account for we have to compare teachers to something else. It’s not entirely clear from her post, but Ravitch1 seems to want to compare teachers to all other factors put together, but that comparison tells us very little. The 86%-99% of the variation in student test scores not explained by teachers is not explained by a single other factor for us to focus all of our policy energy on; it’s an aggregate of a large number of factors, each likely accounting for a much smaller fraction of the variation.

Second, even if some factors explain more variation in test scores, that doesn’t mean we have to pick just one factor to care about. We may want to prioritize, say, poverty reduction over teacher quality improvements, but that doesn’t mean only the former matters.

Third, and most fundamentally, variation accounted for by a factor is not a measure of that factor’s importance. The ASA statement actually points this out somewhat obscurely:

Research on VAMs has been fairly consistent that aspects of educational effectiveness that are measurable and within teacher control represent a small part of the total variation in student test scores or growth; most estimates in the literature attribute between 1% and 14% of the total variability to teachers. This is not saying that teachers have little effect on students, but that variation among teachers accounts for a small part of the variation in scores.

The fact that this was included in the ASA statement has not prevented considerable confusion; VAM critics have latched on to the first sentence, but seem not to understand the significance of the second.

Let’s unpack that second sentence.

If a factor doesn’t explain much of the variation in student test scores, that could mean that the factor is relatively unimportant and that even large changes in that factor would not have significant effects on scores.

Another possibility, however, is that the factor simply doesn’t vary systematically between students.

Consider “access to breathable oxygen”. If you crunched the numbers, you would likely find that access to breathable oxygen accounts for very little – if any – of the variation in students’ test scores. This is because all students have roughly similar access to breathable oxygen. If all students have the same access to breathable oxygen, then access to breathable oxygen cannot “explain” or “account for” the differences in their test scores.

Does this mean that access to breathable oxygen is unimportant for test scores? Obviously not. On the contrary: access to breathable oxygen is very important for kids’ test scores, and this is true even though access to breathable oxygen explains ≈0% of their variation.

Now let’s return to the importance of teachers. If teachers account for only a small fraction of the variation in student test scores, that may mean that teacher quality is largely unimportant. But it may also mean that teacher quality simply does not vary very much between students.

Another way to think of it is this: if every teacher were exactly as effective as every other teacher, teachers would account for exactly 0% of the variation in student test scores. This would be true regardless of whether those imaginary teachers spent the entire school day reading the newspaper or successfully taught advanced calculus to 3rd graders.
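This thought experiment is easy to check numerically. In the toy simulation below (all numbers invented, students randomly assigned to teachers), the share of score variance attributable to teachers is essentially the variance of the teacher effects divided by the total score variance, so identical teachers explain exactly 0% no matter how large their common effect:

```python
# Toy simulation of "explained variation vs. importance". All numbers are
# invented; each student's score is a teacher effect plus idiosyncratic noise.
import random

random.seed(1)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def frac_explained_by_teacher(teacher_effects, n=20_000, noise_sd=5.0):
    """Approximate share of score variance explained by teacher assignment:
    between-teacher variance over total variance."""
    assigned = [random.choice(teacher_effects) for _ in range(n)]
    scores = [e + random.gauss(0, noise_sd) for e in assigned]
    return variance(assigned) / variance(scores)

# Teachers who differ in effectiveness explain a visible share of variance.
print(frac_explained_by_teacher([0.0, 2.0, 4.0]))     # ≈ 0.10

# Identical but hugely effective teachers (+50 points for every student)
# explain exactly 0% of the variance, despite mattering enormously.
print(frac_explained_by_teacher([50.0, 50.0, 50.0]))  # 0.0
```

Raising the common effect changes nothing: explained variance only picks up differences between teachers, not how much teaching matters.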

In other words, determining statistically how much variation is “explained” by teachers will not, by itself, tell you how important teacher quality is.

This is precisely where research like that of Chetty et al. comes in. It attempts to go beyond simple measures of “explained variation” to quantify teachers’ actual importance and impact.

We can still reasonably disagree about what that research tells us about “the importance of teachers”. What you can’t reasonably do is dismiss that research out of hand using measures of explained variation, as those are not direct measures of importance.

  1. Ravitch is by no means the only one who makes these mistakes, but she’s usefully illustrative here.
Posted in Education Reform

My Twitter Reactions to the Vergara Ruling

We are already being inundated with analyses of what yesterday’s Vergara ruling “means”, so rather than write up yet another one I just compiled my thoughts from Twitter:

Posted in Education Reform

There Is Probably No “Crisis” In American Education

Here is a chart of educational attainment in the United States since 1940:


When you look at that chart, do you see a crisis?

No? Me neither.

How about in these charts of reading and math achievement on the NAEP for 17-year-olds, broken down by race?


Still hard to see a crisis, at least to my eyes.

Certainly, it’s fair to say that American education has many problems related to effectiveness, efficiency, and equity.

And it’s probably reasonable to say that our country and the world face a number of bona fide crises – war, climate change, poverty, or criminal justice, for example – each with some relationship to education, however complex or indirect.

But next time you hear someone claim – or are yourself tempted to claim – that American education is in a state of “crisis” or is being “destroyed” by education reform, remember these charts. Then ask yourself whether those terms are being used productively or whether they are being defined down in a way that obscures as much as it illuminates.

Posted in Education Reform

Are the Common Core Standards Voluntary?

Did states adopt the Common Core standards voluntarily, or were they forced to do so by the federal government?

I’m not sure why we would think the answer to that question matters very much. If adopting the CCSS would be good for a state, then their adoption by that state would be good. If adopting the new standards would be bad for a state, then the state shouldn’t adopt them. Information about how the standards are (or are not) adopted seems neither necessary nor sufficient to determine whether adoption should proceed.1

Still, suppose we’re really interested in whether or not the standards were “voluntary” for states. And let’s suppose, for the sake of argument, that it makes sense to talk about states doing things “voluntarily”, a substantial assumption given that “freedom” is a difficult concept to apply even to individual people.

So, were the standards voluntarily adopted by states? There is a prima facie case that they were, even if the Obama administration ended up supporting them. As Michael Petrilli says:

These standards started out as a state effort, with support from private entities like the Gates Foundation. It was the governors and state superintendents who came together, voluntarily, to draft higher common standards, because they acknowledged that their own state standards were set too low. There was already momentum behind the standards when the Obama administration intervened.

Along with the fact that not every state has adopted the CCSS, this is basically dispositive. If states mostly supported the standards prior to the Obama administration’s endorsement, it hardly makes sense to talk about the federal government “coercing” the states into adoption. If Arne Duncan hadn’t gotten involved it’s possible that some states may ultimately have decided not to adopt the standards they had helped to create, but “a few states were marginally more likely to adopt” hardly seems sufficiently coercive to justify the resulting libertarian outrage.2

Libertarians, however, have been outraged. Why? Here’s Neal McCluskey:

From the outset of the Obama administration, officials talked about a need for national standards, and under the mammoth 2009 “stimulus” they got a lever by which to push that: the $4.35 billion Race to the Top program. To fully compete for Race to the Top money states had to adopt standards common to multiple states, and only one set of standards fully met the definition: the Common Core.

This argument will baffle most people. How is being offered money to do something “coercive”? Presumably McCluskey is offered monetary compensation of some kind for his work with the Cato Institute; has he therefore been “coerced” into working there?

Most people, however, are not libertarians. McCluskey sneaks his premises in a little later in the argument:

Adopting the Common Core was, in principle, no more voluntary than having a mugger take your money — [state] taxpayer money — then let you “voluntarily” hand him the keys to your car to get the dough back.

Wait, what? McCluskey thinks paying your taxes to the federal government is morally equivalent to being mugged? You could be forgiven for thinking you misunderstood, but here he is replying to me last week on Twitter:

This is the classic anarcho-libertarian line on government: that its activity necessarily involves forcing people to do things they would prefer not to do and is ipso facto illegitimately coercive.3 Because this is such a fringe view many CCSS supporters have had a hard time understanding why anybody would consider, e.g., Race to the Top to be “coercive”. The answer is that some people think taxation is essentially theft.

Without getting too far into the weeds here, it’s worth mentioning why most people think being anti-coercion doesn’t mean that taxation is theft. First, “taxation is theft” strikes most people as a reductio ad absurdum of naive definitions of “coercion”. If you are using “coercion” in such a way that you have to conclude that taxation is the moral equivalent of armed robbery, something is probably wrong with your use of the word.

Second, most people believe that taxation is justified in one way or another. This may be because they view taxes as “the price we pay” for the various (typically larger) benefits of membership in society. Or it may be because they realize that your possessions and what they are worth are determined to a large extent by our social context, including our governmental institutions. If, e.g., the nominal and real value of your wages is a function of our social (and governmental) institutions, it becomes very difficult to discern how much of them (if any) is “yours” in a morally fundamental sense.4

So it is safe to say that the federal government did not coerce states into adopting the Common Core standards by offering money to do so. But if carrots aren’t coercive, what about sticks? It is probably fair to say that sticks were in play:

We’ve dispensed already with the “Fed $ from taxpayers under force of law” issue, but NCLB waivers appear to be somewhat different.

No Child Left Behind, you will recall, requires that states set learning standards for basic skills – reading and math, mostly – and get ever-increasing proportions of students over related proficiency thresholds. Over the years the requirements of NCLB have become more difficult for states to meet as proficiency targets have risen and Congress has failed to modify the legislation. As a result, the Obama administration offered states “waivers” to avoid many of NCLB’s accountability provisions in exchange for adopting various other reforms including, in many cases, the CCSS.

A somewhat more plausible case for federal coercion is this: States that did not adopt the CCSS would have a much harder time getting a waiver from NCLB, and without a waiver they would be subject to increasingly harsh penalties for failing to satisfy the law’s requirements.

Here, too, the libertarian arguments mostly don’t work.

First, it was possible to get a waiver without adopting the CCSS (as Virginia did), or to skip the standards altogether (as several states have).

Second, the provisions of No Child Left Behind are substantially voluntary. This has been largely forgotten, but states have several options for avoiding NCLB’s accountability penalties. NCLB allows states to set their own standards and proficiency levels, for example, and there are various ways of making sure your schools are not actually held to the highest proficiency standards. NCLB, moreover, is just another carrot: required only for states that wish to receive federal education funding.

In other words, when confronted with the possibility of adopting the Common Core standards, states actually had a substantial amount of flexibility.

There is arguably a naive sense in which states were “coerced” into adopting the CCSS in that some states may not have been completely satisfied with any of the options available to them. But by this standard virtually everything is coercive to one degree or another and the term is rendered meaningless for normative purposes. That conventional, lay-person use of “coercion” may be practically useful on a day-to-day basis, but it has no philosophical precision, so it can’t do any philosophical work.

And this, I think, is why the debate over whether the CCSS are “voluntary” has been so intractable. The question is a philosophical (and possibly incoherent) one, but the parties involved – especially libertarian opponents – insist on using notions of coercion that are so imprecise (or loaded) that they can’t possibly settle it one way or the other.

  1. Of course, we should prefer to have a set of background institutions that effectively distributes power between different levels of government. Those concerns, however, are mostly neither here nor there when evaluating the merits of CCSS adoption since the standards have only minor – if any – implications for our background institutions.
  2. Note also that the Obama administration’s belated involvement in the adoption process has been so polarizing that it’s not obvious the net effect was pro-CCSS in each individual state. There was a point at which CCSS supporters started asking them to back off the advocacy precisely because it seemed to be backfiring.
  3. McCluskey claims not to endorse the anarcho-libertarian view, but it’s hard to know how else to interpret an analogy between taxation and armed robbery.
  4. We have socially and economically useful private property institutions, but they depend heavily on coercion, especially government coercion, themselves.
Posted in Education