More Evidence of the Trouble with ‘Student-Centered’ Teaching

I’ve long had many related-but-separate complaints about ‘student-centered’ teaching practices. A new study in Educational Evaluation and Policy Analysis lends new evidence to several of them. (You can also check out good write-ups from Sarah Sparks and Bettina Chang.)

The authors used data on a large number of first-grade students to see which strategies their teachers used to teach them math. They then looked at whether teachers tended to use different strategies when students had stronger or weaker math skills, and grouped these strategies based on whether they would normally be considered ‘teacher-directed’ or ‘student-centered’.

Their methods – including an elaborate set of statistical controls for variables like student SES and prior achievement – also allowed them to make tentative causal inferences about which teaching strategies seem to be more effective for students who were stronger or weaker in math to begin with.

The results are mostly unflattering to student-centered approaches.

Student-centered Teaching Can be a Pedagogy of Privilege

A couple of years ago I claimed that teaching methods typically considered ‘student-centered’ together represent a ‘pedagogy of privilege’; such methods might be good – or at least good enough – for relatively strong students, but they often do not meet the needs of students with weaker skills.

The authors of this new study reach a basically similar conclusion, at least in regards to first grade math instruction:

Controlling for many potential confounds, we also found that only more frequent use of teacher-directed instructional practices was consistently and significantly associated with residualized (value added) gains in the mathematics achievement of first-grade students with prior histories of MD [i.e., mathematics difficulties]. For students without MD, more frequent use of either teacher-directed or student-centered instructional practices was associated with achievement gains. In contrast, more frequent use of manipulatives/calculator or movement/music activities was not associated with significant gains for any of the groups.

An important contribution of our work is that we find that teacher-directed instructional practices are associated with achievement by both students with a prior history of persistent MD, as well as those with a prior history of transitory MD. In contrast, other, more student-centered activities (i.e., manipulatives/calculators, movement/music) were not associated with achievement gains by students with MD.

In other words, the most fortunate students will manage one way or the other, but the less fortunate kids are not well-served by student-centered approaches.1

Student-centered Teaching is Attractive in Low-Skill Settings

Despite their inappropriateness for struggling students, I’ve also hypothesized that student-centered approaches may – paradoxically – be more favored when students have fewer or weaker skills.

My guess was that student-centered approaches can obscure skill gaps, which tend to be more salient in low-skill classrooms. When students are mostly proficient or advanced, teachers, administrators, and parents tend to have plenty of independent verification that students are skilled; ambiguous, student-centered activities are not relied on for demonstrations of mastery. With lower-skilled students, adults are more likely to be worried about their students’ skills, because much of the available evidence (e.g., test scores, independent classwork) suggests those skills are absent or weak. When students engage in student-centered activities, they can easily give the illusion of proficiency – talking to one another, handling materials, and so on – especially if you don’t examine their work too closely or don’t know what you’re looking for. And it’s easy to interpret ambiguous evidence of learning favorably if you really want to see proficiency (as most educators do).

This new study finds evidence consistent with my theory, at least for some student-centered teaching strategies:

We found no significant relation between the percentage of MD students in the classroom and the frequency of teacher-directed or student-centered instructional activities. However, we did find that…classes of students with higher percentages of MD students were more likely to be taught these skills and with instructional practices emphasizing using manipulatives/calculators and movement/music. As reported below, these instructional activities…were not associated with mathematics achievement gains by students with MD.

Regardless of the reason, however, it seems that teachers are choosing to use less effective methods with precisely those students who need the most help.

Student-centered Teaching is not Obviously Research-based

Of course, if you ask adults who favor student-centered methods, they will very often say that those methods are ‘research-based’. There is some sense in which this is true, at least to the extent that you can find seemingly reputable education studies to support almost any instructional decision.

The trouble is that a great deal of education research is ideologically-motivated, and well-controlled studies of instructional effectiveness are difficult to perform in any case. So how strong, really, is the research base for student-centered teaching?

This new study suggests it is probably not as strong as it is often made out to be:

Some types of instructional practices are commonly considered “evidence-based,” and so presumably their use by teachers should result in increased mathematics achievement. For example, Baker, Gersten, and Lee’s (2002) synthesis of researcher-directed intervention studies yielded a weighted ES of .66 for the use of structured peer tutoring on low-skilled children’s mathematics achievement. Additional syntheses also support peer tutoring as an evidence-based practice (Elbaum, Vaughn, Tejero, & Watson, 2000; Mathes & Fuchs, 1994). Yet our estimate of student-centered instruction, which includes peer tutoring, was statistically non-significant when used with students with prior histories of MD (Guarino et al. [2013] also reported a statistically non-significant finding for peer tutoring).

The authors suggest this might be related to implementation fidelity problems with student-centered approaches, and I suspect that’s a factor.2 It’s also possible, though, that much of the underlying research is just not as strong as we’d like to begin with.

To be clear, nothing here demonstrates that any particular ‘student-centered’ approach doesn’t have its place, even potentially in classrooms with large numbers of struggling students.

This study is, however, more evidence that many traditional, ‘teacher-centered’ approaches are often unfairly maligned and underutilized.

  1. In fact, calling those approaches ‘student-centered’ at all seems presumptuous.
  2. This is not exactly a ringing defense of student-centered approaches; if they are harder to implement, so much the worse for them.
This entry was posted in Teaching & Learning.

6 Comments

  1. Posted June 27, 2014 at 7:26 AM

    I’ve read a bunch of your posts on the teacher-centered/student-centered divide, and I find myself agreeing with everything but the rhetoric and some of your conclusions. Pardon me for thinking out loud here about a topic that you have thought more deeply about. It’s a chance for me to get smarter on this, and I’ll selfishly take it.

    I don’t have the research knowledge that you or your other readers have, but the idea that more teacher-centered techniques help our weakest students fits with what I know about the research and with my experience teaching. And it’s not hard to see how activities where students determine the objectives, or are merely asked to explore and decide their own goals or purposes, would lead to very little learning at all.

    So, fine. Now we face both a third-personal question (What should we ask teachers to do?) and a first-personal question (What should I do?). As best I can tell, you’re interested in the third-personal question: how should we expect teachers to run their classes? Through the third-personal lens, the difficulty of correctly implementing student-centered techniques counts against them, since recommending them will surely lead to poor execution.

    What about the first-personal question, though? What should I do in my math class in September?

    There is solid research backing certain programs, such as Cognitively-Guided Instruction, right? This is one of the most well-investigated ideas in math education research. There’s also the report in this post — along with others like it — showing that teacher-centered instruction often gets better results than activities that are labeled “student-centered.” There’s a sort of tension here — how do I navigate it?

    Part of the answer is in living in productive tension between these two poles. For example, maybe CGI is hard to implement, but I can work hard to try to implement it with fidelity. While it’s interesting that in large studies “problem-solving” isn’t shown to help much, there’s good theory supporting the idea that strong, organized, flexible and transferable knowledge is built in situations that don’t initially show their cards as to which skills need to be used. The problem is, maybe I’m one of those teachers who will struggle to implement the program well, and maybe I need to protect my students against that possibility.

    As a matter of policy, it sounds like we shouldn’t be pushing teachers to use more student-centered techniques. As a matter of personal policy, though, I’m mostly interested in what I can make work. To say “student-centered techniques don’t really work” is to elide a lot of what is possible on the personal level, though it also offers us a lot of humility.

    One last note: we often pretend that we all agree on what the preferred outcomes of a math education should be, but we don’t. Many advocates of discovery-based approaches support them because they want kids to experience discovery, or because they want kids to get better at inquiry. (Of course, developing robust inquiry skills is very difficult and requires a basis of knowledge, but hold that for now.) Many advocates of student-centered approaches would shrug at a report like this and say something like, “Shocker: focusing on basic skills and practicing basic knowledge in class is the best way to get kids to do well on this sort of test that assesses basic knowledge.”

    I’m not saying that line of response is correct, but there’s a lot of unspoken disagreement in the purposes of a math education hanging out right below the surface of these discussions.

    • Posted June 27, 2014 at 8:51 AM

      I am not familiar with CGI, beyond what you just provided in that link, so I can’t speak to its merits. It also claims not to be a curriculum, so I’m not sure it would fall on one side of the TC/SC divide in any case.

      It’s certainly true that individual teachers can be better at implementing ‘student-centered’ methods. It’s also true that some students are better-served than others by student-centered methods.

      So while I don’t want to elide the personal issues, I think teachers are much more prone to overestimate student-centered methods than to underestimate them, so if every teacher is making these decisions personally we’re going to end up with too many student-centered practices on average. (This may not be a big problem for stronger students but, as we agree, it could be very risky for weaker students.)

      And if student-centered methods can be defended on the grounds that they can be made to work when carefully designed and implemented by thoughtful teachers, it seems that teacher-centered methods can be further bolstered by a similar argument.

      As with, for example, medical treatments, it’s rarely going to be the case that a single guiding principle will be best for every teacher with every group of kids. But the medical community still relies on rules of thumb because they’re *generally* right and because they help to avoid systematic biases. General rules can help reduce the total number of errors even if they directly result in some new ones.

      Teachers, in other words, should definitely see what they can make work on their own. But they should also keep in mind what the research says and that they are likely to be personally biased about themselves and their students.

      Keeping in mind that I teach science, not math, what you say about the lack of consensus about goals rings true to me. It will probably not surprise you to hear that I think most of those goal-related objections are also not compelling. I think, for example, that there is nothing about ‘teacher-directed’ instruction that is incompatible with conceptual learning or higher-order skills, and I think the research suggests very strongly that critical thinking/inquiry skills are mostly non-transferable (and if they are transferable, we don’t know reliable ways of teaching them as such).

      But in any case, a disagreement about goals is not adequate to settle the argument about methods. Whether student-centered approaches lead to better inquiry skills, for example, is an empirical question, not one that can be deduced from the definition of “inquiry skills”. (In fact, re: scientific inquiry skills, there’s a bunch of research showing that direct instruction can effectively promote control-of-variable strategies with kids.)

      So I think you’re right that goal disagreements underlie methods disagreements, although not to the degree people sometimes think when caricaturing the ‘other side’. But part of the problem, as I see it, is that people seem to believe, a priori, that your goals somehow straightforwardly imply the best methods to use.

      • Posted June 27, 2014 at 9:20 AM

        I think it would help me to better understand how teacher-centered tactics could help foster better higher-order skills.

        The definition of teacher-centered from the Chang piece had: first comes an explanation of X, then comes practice of X.

        For example, you would give kids a worked example of how to solve a linear equation, then tell them how to solve any linear equation, and then ask them to practice a bunch of linear equations.

        Where’s the room for explaining, hypothesizing, estimating, making connections or justifying in this sequence?

        To get specific about what alternative I’m imagining, with a group of low-performing students I would probably start by giving out a series of problems that connect what we’re about to study to problems they can already solve. (“Inventing To Prepare For Future Learning”). Ideally, I’d have decided before the lesson what student work would lead to a productive conversation, and I’d look for that thinking while circulating. Then I’d ask questions and try to create some controversy based on their thinking. Then I (or a kid) would wrap up the disagreement, and if necessary (it often is) I’d give an explanation that settles any remaining controversy and re-explains the idea, and then we’d move on to practice.

    • Posted December 28, 2014 at 12:05 PM

      Hi Michael. I also have no direct experience with CGI. I read the general information on the link you gave. First, the experiment they describe has a serious procedural error at its core. As in many proprietary research studies it is easy to identify a biased experimenter’s finger on the scale in how the control (“comparison”) group is selected. Here you see that the control group of teachers was trained in the use of problem-based learning (but without CGI). Thus you are not comparing CGI against a “status quo” school, but one in which teachers are freshly taught a “modern” technique that has not lived up to its promise; generic “problem-based learning” or discovery learning is known to perform quite poorly in controlled studies. To use this as the control group, especially with teachers only recently instructed in its use, is to handicap the control, which will work in favour of the test group. There is also nothing said about that training itself — but if it were carried out by the same instructors then it is quite likely that the control teachers were trained by persons ideologically biased against generic problem-based learning. In other words those training the teachers did not, themselves, believe in the effectiveness of the methods they were teaching. Think this might affect the outcomes a wee bit?

      The descriptions I read of CGI itself remind me so much of an earlier system called the Cognitively Oriented Curriculum that I suspect it is simply a descendant of COC. It certainly has the same philosophical perspective, whether or not it involves identical techniques. So it is of note to observe how COC performed in a certain large-scale longitudinal study.

      I am referring to the largest and most expensive comparative educational study in history, Project Follow Through. PFT took place over most of a decade, initially examining some 20 proposed methodologies, though only 9 stayed with it until the end. COC was one of these. As for your point about basic skills instruction preparing students to perform on tests of basic skills, this study is very relevant. Pay attention now …

      PFT examined the students’ progress on three scales: Basic Skills, Cognitive, and Affective Domain. So we are able to parse the results as to the effectiveness of different methodologies in each of these three domains.

      The two independent organizations that performed the primary analysis on the results of PFT grouped the methodologies according to which domain they emphasized. Thus, COC fell into the class of “Cognitive” curricula (I guess this is pretty obvious). Now comes the interesting part.

      The three kinds of intervention were compared to each other and to the “control” — which in this case simply referred to students in the regular school system, with no particular kind of intervention.

      What you might find interesting, even surprising, is that the Affective Domain methodologies and the Cognitive Domain methodologies, generally UNDER-performed the control group — IN THE AFFECTIVE AND COGNITIVE DOMAINS. That is, they did more poorly than the status quo in the very domains they claimed to emphasize! Students would have had better cognitive and affective domain outcomes without these interventions.

      They also under-performed in the basic skills domain. Perhaps no surprise there.

      In contrast, the methodologies that emphasized basic skills did as well or better than the other two groups and better than the control, in ALL THREE domains. Even so, a couple of the basic skills systems were somewhat weak tea. However, the one called Direct Instruction — a system based on a particular, scripted, form of (you guessed it) direct instruction, a teacher-centered system that had teachers directly instructing students rather than expecting students to discover, inquire, engage in metacognition, or otherwise self-direct — outperformed them all.

      And it wasn’t even close. This result was the firm conclusion of BOTH independent analyses. Then (predictably) when some members of the Educational Establishment wrote a screed criticizing the analyses as flawed, the analysts re-did their analysis to take into account the criticisms — and the conclusions remained the same, and just as clearly. Nevertheless, the Educational Establishment held to their line and insisted that PFT was too problematic for its results to inform policy and they laboured for the last 3 decades to memory-hole the results. That approach has led us to where well-meaning teachers like you are still weighing the benefits of poorly designed systems based on the same approaches that failed so miserably back in the 1970s.

      Athabasca University has an easily absorbable write-up of the PFT affair here:
      http://psych.athabascau.ca/html/387/OpenModules/Engelmann/evidence.shtml
      You might also read Doug Carnine’s excellent and scholarly piece here
      http://www.wrightslaw.com/info/teach.profession.carnine.pdf
      The Harvard Educational Review devoted a whole volume to the affair
      http://www.metapress.com.proxy2.lib.umanitoba.ca/content/u0547p667586/
      Here is another excellent collection of articles on the issue:
      http://darkwing.uoregon.edu/~adiep/ft/151toc.htm
      I particularly recommend Dr. Cathy Walker’s piece there, whose title asks the obvious question: “Follow Through: Why Didn’t We?”

      • Posted December 28, 2014 at 12:17 PM

        As for what you should do in the classroom? You have many options. I have seen award-winning master teachers in the U.S. who simply cast their net for large caches of old texts from the 60s and 70s and teach from them. Some of the high school geometry texts from the very late “New Math” period were, for example, very good, though not all of “New Math” was ideal; much of it belongs on a museum shelf. If you’re looking at classical texts you’d best know what’s out there, what to look for and what to avoid. There’s always been good and bad to pick from. Numerous contemporary resources have produced excellent results and perform consistently very well in comparative trials. Of these I particularly recommend Saxon Math, JUMP Math (which only goes to Grade 8, unfortunately) and Singapore Math (which is truly world class, but it does rely heavily on the teacher being proficient in the subject, whereas the preparatory materials for JUMP are known to help teachers remediate their own weaknesses — also, beware Singapore Math pretenders; get the real thing!).

  2. EB
    Posted July 8, 2014 at 5:58 AM

    I think Michael Pershan is assuming that the “explain X, then practice X” format is the only method used in teacher-directed instruction. In my experience, this is not the case. “Explain, then practice” is the anchor, but not nearly the only instructional method used in that type of classroom. I used both exploratory work and manipulatives (though not music/dancing nor calculators) in my first grade classroom. The curriculum I used assumed that there would be ancillary methods, but was based on explanation, demonstration, and practice of various sorts. These are not only the most efficient methods of helping children learn in terms of time spent; they are also the most satisfying for many students, and the only way to structure a curriculum so that you cover the topics with sufficient depth while giving the children a chance to develop automaticity.
