Justin Reich of EdWeek (not to be confused with Justin Baeder of EdWeek) wrote enthusiastically of a new study, asking "What If Your Word Problems Knew What You Liked?":

Last week, Education Week ran an article about a recent study from Southern Methodist University showing that students performed better on algebra word problems when the problems tapped into their interests. …The researchers surveyed a group of students, identified some general categories of students’ interests (sports, music, art, video games, etc.), and then modified the word problems to align with those categories. So a problem about costs of new home construction ($46.50/square foot) could be modified to be about a football game ($46.50/ticket) or the arts ($46.50/new yearbook). Researchers then randomly divided students into two groups, and they gave one group the regular problems while the other group of students received problems aligned to their interests.

The math was exactly the same, but the results weren’t. Students with personalized problems solved them faster and **more accurately** (emphasis mine), with the biggest gains going to the students with the most difficulty with the mathematics. The gains from the treatment group of students (those who got the personalized problems) persisted even after the personalization treatment ended, suggesting that students didn’t just do better solving the personalized problems, but they actually learned the math better.

Reich has it wrong. From the study:

Students in the experimental group who received personalization for Unit 6 had significantly higher performance within Unit 6, particularly on the most difficult concept in the unit, writing algebraic expressions (10% performance difference, p<.001). The effect of the treatment on expression-writing was significantly larger (p<.05) for students identified as struggling within the tutoring environment (22% performance difference).

Performance differences favoring the experimental group for solving result and start unknowns did not reach significance (p=.089). In terms of overall efficiency, students in the experimental group obtained 1.88 correct answers per minute in Unit 6, while students in the control group obtained 1.56 correct answers per minute. Students in the experimental group also spent significantly less time (p<.01) writing algebraic expressions (8.6 second reduction). However, just because personalization made problems in Unit 6 **easier** for students to solve does not necessarily mean that students **learned more** from solving the personalized problems.

(bold emphasis mine)

and in the Significance section:

As a perceptual scaffold (Goldstone & Son, 2005), personalization allowed students to grasp the deeper, structural characteristics of story situations and then represent them symbolically, and retain this understanding with the support removed. This was evidenced by the transfer, performance, and efficiency effects being strongest for, or even limited to, algebraic expression-writing (even though other concepts, like solving start unknowns, were not near ceiling).

So the students who got personalized problems did not demonstrate improved accuracy in solving, at least not to the same standard as they demonstrated an improved ability to model.

I tweeted this as an observation and got into a mild debate with Michael Pershan, who runs a neat blog on math mistakes. Here’s the result:

I’m like oooh, I got snarked at! My own private definition of math!

But I hate having conversations on Twitter, and I probably should have just written a blog entry anyway.

Here’s my point:

Yes, personalizing the context enabled a greater degree of translation. But when did “translating word problems” become, as Michael Pershan puts it, “math”? Probably about 30 years ago, back when we began trying to figure out why some kids weren’t doing as well in math as others. We noticed that word problems gave kids more difficulty than straight equations, so we started focusing a lot of time and energy on helping students translate word problems into equations—and once the problems are in equation form, the kids can solve them, no sweat!

Except, in this study, that didn’t happen. The kids did better at translating, but no better at solving. That strikes me as interesting, and clearly, the paper’s author also found it relevant.
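To make the translating-versus-solving distinction concrete, here’s a hypothetical problem in the study’s style (the $46.50 figure is theirs; the rest is mine, not an item from the paper): “Tickets cost $46.50 each. You spend $325.50. How many tickets did you buy?” Translating the problem means writing 46.50x = 325.50. Solving it means getting x = 7. The personalization effect showed up in the first step, not the second.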

Pershan chastised me, a tad snootily, for saying the kids “didn’t do better at math”. Translating math IS math. He cited the Common Core standards showing the importance of data modeling. Well, yeah. Go find a grandma and teach her eggsucking. I teach modeling as a fundamental in my algebra classes. It makes sense that Pershan would do this; he’s very much about the why and the how of math, and not as much about the what. Nothing wrong with this in a math teacher, and lord knows I do it as well.

But we shouldn’t blur the distinction between *teaching* math and *doing* it. So I asked the following hypothetical: Suppose you have two groups of kids given a test on word problems. Group 1 translates each problem impeccably into an equation that is then solved incorrectly. Group 2 doesn’t bother with the equations but gives the correct answer to each problem.

Which group would you say was “better at math”?

I mean, really. Think like a real person, instead of a math teacher.

Many math teachers have forgotten that for most people, the point of math is to *get the answer*. Getting the answer used to be enough for math teachers, too, until kids stopped getting the answer with any reliability. Then we started pretending that the *process* was more important than the *product*. Progressives do this all the time: if you can’t explain how you did it, kid, you didn’t really do it. I know a number of math teachers who will give a higher grade to a student who shows his work and “thinking”, even if the answer is completely inaccurate, and give zero credit to a correct answer by a student who did the work in his head.

Not that any of this matters, really. Reich got it wrong. No big deal. The author of the study did not. She understood the difference between translating a word problem into an equation and getting the correct answer.

But Pershan’s objection—and, for that matter, the Common Core standards themselves—shows how far we’ve gone down the path of explaining failure over the past 30-40 years. We’ve moved from not caring how they defined the problem to grading them on how they defined the problem to creating standards so that now they are **evaluated solely on how they define the problem.** It’s crazy.

End rant.

Remember, though, we’re talking about the lowest ability kids here. Do they need models, or do they need to know how to find the right answer?