I feel a rant coming on.
Why on earth would we care how math teachers feel about their abilities? They either are prepared or they aren’t, and that can be demonstrated through a test. As long as they are surveying teachers (in just two states, mind you, and about 20 counties), why not give them a test to capture their actual abilities?
This sort of research typifies that offered by both sides of the education policy debate: soft data, complete ignorance of any existing hard data, and an intense determination to avoid collecting anything approaching meaningful data.
Just a few of the more annoying parts:
- “By relying on teachers’ reports of their own feelings of adequate preparation, we only get at their knowledge indirectly.”
“Fortunately, this approach is sufficient to demonstrate how much variation there is in teachers’ content-specific knowledge, or at least in their feelings of adequate preparation.”
How, exactly, is this approach sufficient to demonstrate variation? Any evidence that “feelings” about preparation are equivalent to, you know, actual preparation? It would have been useful to collect some knowledge data (say, a short multiple-choice quiz) to see whether any correlation between feelings and reality existed.
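The feelings-versus-reality check I’m describing isn’t exotic. A minimal sketch of what it would take, with entirely invented numbers (the ratings, quiz scores, and scales below are hypothetical, not from the study):

```python
# Hypothetical sketch: pair each teacher's self-rated preparation
# (say, a 1-5 Likert item from the survey) with a score on a short
# multiple-choice quiz, then compute Pearson's r. All data invented.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented example: self-ratings (1-5) and quiz scores (0-10)
# for ten imaginary teachers.
self_ratings = [2, 4, 3, 5, 1, 4, 3, 2, 5, 3]
quiz_scores  = [3, 7, 6, 9, 2, 5, 6, 4, 8, 5]

r = pearson_r(self_ratings, quiz_scores)
print(f"correlation between feelings and measured knowledge: r = {r:.2f}")
```

If r came back near zero, “feelings of adequate preparation” would be telling us nothing about knowledge; a strong positive r would at least partly justify the proxy. Either way, the survey would finally have something to stand on.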
- Commenting on the finding that many first to third grade teachers feel academically prepared to teach only their grade level, and no higher: “Is it reasonable for teachers to focus only on the topics that they will teach?”
Great question. Except the author doesn’t answer it: “However reasonable such a position may appear, many of the more advanced topics for which teachers did not feel well prepared provide the mathematics background necessary to be truly well prepared to teach the more elementary topics at their grade level.”
I think he’s saying that teachers can’t really be well-prepared to teach elementary topics unless they know advanced topics as well. Gosh. Wouldn’t it be nice if there were data to support that opinion? So far, the data on the relationship between teachers’ subject proficiency and teaching outcomes is, to say the least, a bit squishy. More importantly, there’s a huge difference between knowing a topic and being prepared to teach it. A teacher could know how to find the area of a triangle but not feel comfortable teaching the derivation of the formula. This is a huge distinction that the author completely ignores.
- “Teachers’ self-perceptions of their preparedness seem likely, if anything, to overestimate what they know and how well prepared they are rather than to underestimate it.”
No data at all to support this bold assertion. I would intuitively argue (hey, the author can do it, so why can’t I?) that teachers, who are cautious critters of habit, are more likely to underestimate their knowledge and preparation, particularly in high school. I know this because I’m quite different from the average teacher in this regard. Tell me I’m teaching the history of technology or, god forbid, AP Calculus the next year, and I’d say, “Sure.” History of tech I wouldn’t start preparing for until three weeks before school started. AP Calc I’d spend the entire summer working on: increasing my mastery, getting lesson plans from friends, and finding the critical teaching points in the subject. I know that I’d learn a huge amount the first year, that I’d do a decent but not great job that first year, and that I’d be much better in the second and subsequent years. Plus, I’d find it a fun intellectual challenge. Most teachers with my level of calc knowledge would, instead, say “Hell, no” to the offer. But then, I’m (literally) smarter and more flexible than the average teacher. This does not make me a better teacher, but I’m going to have an easier time stepping out of my comfort zone. From my perspective as an ex-techie, it’s hard to overstate how hidebound most teachers are (I’m not saying this is a bad thing). So the author’s assertion that teachers would overstate their ability strikes me as not credible without further data.
- “In this section, we summarize what teachers have told us about their preparation in mathematics at the college level and as graduate students.”
I have a degree in English and took no math in college a hundred years ago. All the math I’ve learned, I’ve learned on the job as a tutor–some of it as a teacher, but most while tutoring students in their coursework and on the Math 2c. (I’ve spent little time tutoring calculus, which is why I don’t know it well.)
GRE Math score: 800. I am relatively certain I scored in the top 10% on two of the three required math credentialing tests, well higher than many math majors. My demonstrated math ability is probably in the top 10% of the entire college graduate population–certainly in the top 15%. Yet I would show up as one of these lamentably unqualified high school teachers who didn’t get a degree in math.
Unless the author has evidence that, controlling for demonstrated ability, teachers who took math coursework in college do better than teachers who did not (that is, that two teachers who both got 800 on the GRE, only one of whom majored in math, have distinctly different outcomes in algebra instruction), then he should stop pretending that coursework is relevant. The author cites no such research, and I’m aware of none. Coursework is a proxy. It’s not the real thing.
Of course, the author could have included a brief test with the survey so he could correlate, to some small degree, the performance of non-math majors against math majors on their chosen subject.
Why would anyone take this data seriously? I genuinely don’t understand this. Why not survey math ability with a test and get hard data? But then, the hard data might be too discouraging for his side.
Lest anyone take my ire amiss: I think most elementary school teachers close their eyes and think of England during their math segments. I’m not convinced their distaste makes a huge difference in outcomes. The country has dramatically increased teacher content knowledge requirements over the years, particularly in the last decade, particularly in elementary school. It hasn’t mattered a whit.