Tag Archives: math

Math teachers’ feelings and other irrelevancies

Why Math Teachers Feel They’re Poorly Prepared.

I feel a rant coming on.

Why on earth would we care how math teachers feel about their abilities? They either are prepared or they aren’t, and that can be demonstrated through a test. As long as they are surveying teachers (in just two states, mind you, and about 20 counties), why not give them a test to capture their actual abilities?

This sort of research typifies that offered by both sides of the education policy debate: soft data, complete ignorance of any existing hard data, and intent determination to avoid collecting anything approaching meaningful data.

Just a few of the more annoying parts:

  1. “By relying on teachers’ reports of their own feelings of adequate preparation, we only get at their knowledge indirectly.”

    Indeed.

    “Fortunately, this approach is sufficient to demonstrate how much variation there is in teachers’ content-specific knowledge, or at least in their feelings of adequate preparation.”

    How, exactly, is this approach sufficient to demonstrate variation? Any evidence that “feelings” about preparation are equivalent to, you know, actual preparation? It would have been useful to collect some knowledge data (say, a short multiple-choice quiz) to see whether any correlation between feelings and reality existed, for example.

  2. Commenting on the finding that many first to third grade teachers feel academically prepared to teach only their grade level, and no higher: “Is it reasonable for teachers to focus only on the topics that they will teach?”

    Great question. Except the author doesn’t answer it: “However reasonable such a position may appear, many of the more advanced topics for which teachers did not feel well prepared provide the mathematics background necessary to be truly well prepared to teach the more elementary topics at their grade level.”

    I think he’s saying that teachers can’t really be well-prepared to teach elementary topics unless they know advanced topics as well. Gosh. Wouldn’t it be nice if there were data to support that opinion? So far, the data on the relationship between teachers’ subject proficiency and teaching outcomes is, to say the least, a bit squishy. More importantly, there’s a huge difference between knowing a topic and being prepared to teach it. So a teacher could know how to find the area of a triangle but not feel comfortable teaching the derivation of the formula. This is a huge difference that the author completely ignores.

  3. “Teachers’ self-perceptions of their preparedness seem likely, if anything, to overestimate what they know and how well prepared they are rather than to underestimate it.”

    No data at all to support this bold assertion. I would intuitively argue (hey, the author can do it, so why can’t I?) that teachers, who are cautious critters of habit, are more likely to underestimate their knowledge and preparation, particularly in high school. I know this because I’m quite different from the average teacher in this regard. Tell me I’m teaching the history of technology or, god forbid, AP Calculus the next year, and I’d say, “Sure.” History of tech I wouldn’t start preparing for until three weeks before school started. AP Calc I’d spend the entire summer working on increasing my mastery, getting lesson plans from friends, and finding the critical teaching points in the subject. I know that I’d learn a huge amount the first year, that I’d do a decent but not great job my first year, be much better the second and subsequent years. Plus, I’d find it a fun intellectual challenge. Most teachers with my level of calc knowledge would, instead, say “Hell, no” to the offer. But then, I’m (literally) smarter and more flexible than the average teacher. This does not make me a better teacher, but I’m going to have an easier time stepping out of my comfort zone. From my perspective as an ex-techie, it’s hard to overstate the relative hide-boundedness of most teachers (I’m not saying this is a bad thing). So the author’s assertion that teachers would overstate their ability strikes me as not credible without further data.

  4. “In this section, we summarize what teachers have told us about their preparation in mathematics at the college level and as graduate students.”

    I have a degree in English and took no math in college a hundred years ago. All the math I’ve learned, I’ve learned on the job as a tutor–some of it as a teacher, but most while tutoring students in their coursework and on the Math 2c. (I’ve spent little time tutoring calculus, which is why I don’t know it well.)

    GRE Math score: 800. I am relatively certain I scored in the top 10% on two of the three required math credentialing tests, well higher than many math majors. My demonstrated math ability is probably in the top 10% of the entire college graduate population–certainly in the top 15%. Yet I would show up as one of these lamentably unqualified high school teachers who didn’t get a degree in math.

    Unless the author has evidence that, controlling for demonstrated ability, teachers who took math coursework in college do better than teachers who did not–that is, that two teachers who got 800 on the GRE but only one of whom majored in math have distinctly different outcomes in algebra instruction—then he should stop pretending that coursework is relevant. The author cites no such research, and I’m aware of none. Coursework is a proxy. It’s not the real thing.

    Of course, the author could have included a brief test with the survey so he could correlate, to some small degree, the performance of non math-majors to math majors on their chosen subject.

    Why would anyone take this data seriously? I genuinely don’t understand it. Why not survey math ability with a test and get hard data? But then, the hard data might prove discouraging for his side.

    Lest anyone take my ire amiss: I think most elementary school teachers close their eyes and think of England during their math segments. I’m not convinced their distaste makes a huge difference in outcomes. The country has dramatically increased teacher content knowledge requirements over the years, particularly in the last decade, particularly in elementary school. It hasn’t mattered a whit.


The myth of “they weren’t ever taught….”

A year or so ago, our math department met with one of the feeder middle schools to engage in a required exercise. The course-alike teachers had to put together a list of “needed skills” for each subject, to inform the teachers of the feeder course of the subjects they should cover.

One of the pre-algebra teachers looked at the algebra “needed skills” list and said, “Integer operations and fractions! Damn. Why didn’t I think of that?” and we all cracked up. End of Potemkin drill.

All teachers working in low-ability populations go through a discovery process.

Stage One: I will describe this stage for algebra I teachers, but plug in reading, geometry, writing, science, any subject you choose, with the relevant details. This stage begins when teachers realize that easily half the class adds the numerators and denominators when adding fractions, doesn’t see the difference between 3-5 and 5-3, counts on fingers to add 8 and 6, and looks blank when asked what 7 times 3 is.

Ah, they think. The kids weren’t ever taught fractions and basic math facts! What the hell are these other teachers doing, then, taking a salary for showing the kids movies and playing Math Bingo? Insanity on the public penny. But hey, helping these kids, teaching them properly, is the reason they became teachers in the first place. So they push their schedule back, what, two weeks? Three? And go through fraction operations, reciprocals, negative numbers, the meaning of subtraction, a few properties of equality, and just wallow in the glories of basic arithmetic. Some use manipulatives, others use drills and games to increase engagement, but whatever the method, they’re basking in the glow of knowledge that they are Closing the Gap, that their kids are finally getting the attention that privileged suburban students get by virtue of their summer enrichment and more expensive teachers.

At first, it seems to work. The kids beam and say, “You explain it so much better than my last teacher did!” and the quizzes seem to show real progress. Phew! Now it’s possible to get on to teaching algebra, rather than the material the kids just hadn’t been taught.

But then, a few weeks later, the kids go back to ignoring the difference between 3-5 and 5-3. Furthermore, despite hours of explanation and practice, half the class seems to do no better than toss a coin to make the call on positive or negative slopes. Many students who demonstrated mastery of distributing multiplication over addition are now making a complete hash of the process in multi-step equations. And many students are still counting on their fingers.

It’s as if they weren’t taught at all.

But teachers are resilient. They redouble their efforts. They spend additional time on “warm-up” questions, they “activate prior knowledge” to reteach even the simple subjects that have apparently been forgotten, and they pull down all the kaleidoscopic, mathy posters and psychology-boosting epigrams they’d hung up in their optimistic naivete and paper the walls with colorful images of formulas and algorithms.

They see progress in the areas they review—until they realize that the kids now have lost knowledge in the areas that weren’t being taught for the first time or in review, much as if the new activity caused them to overwrite the original files with the new information.

At some point, all teachers realize they are playing Whack-a-Mole in reverse, that the moles are never all up. Any new learning seems to overwrite or at best confuse the old learning, like an insufficient hard drive.

That’s when they get it: the kids were taught. They just forgot it all, just as they’re going to forget what they were taught this year.

All over America, teachers reach this moment of epiphany. Think of a double mirror shot, a look of shocked comprehension on an infinity of teachers who come to the awful truth.

End of Stage One, and of the algebra-specific details.

Stage Two: At this point, some teachers quit. But for the rest, their reaction to Stage One takes one of two paths.

Blame the students: The transformation from “these poor kids have just never been taught anything” to “These kids just don’t value education” is on display throughout the idealistic Teach for America blogs. It’s pretty funny to watch, since on many sites you have the naive newbies excoriating their kids’ previous teachers for taking money and doing nothing, while on other sites the cynical second-years are simultaneously posting about how they hadn’t understood the degree to which kids could sabotage their own destinies, or some such nonsense. Indeed, I once had a conversation with a TFAer at my school, and she said this to a word: “I’ve realized I’m a great teacher, but my students are terrible.”

Not that this reaction is unique to TFAers. Many experienced teachers who began their careers in a homogenous, high-achieving district that transformed over time into a Title I area with a majority of low income blacks or Hispanics have this response as well.

It’s easy to denounce this attitude, but teaching has taught me that easy is never a good way to go. These teachers are best served at a place like KIPP, where the kids who don’t work are booted. It’s not that the kids learn more, but at least the ones that stay work hard, and that allows the blamers to reward virtue. At comprehensive schools, the teachers who saw their student body population change over time respond by failing half or more of their classes.

These teachers please both progressives and eduformers, because they have high expectations. Their low-achiever test scores, however, are often (but not always) terrible.

Acceptance: Here, I do not refer to teachers who show movies all day, but teachers who realize that Whack-a-Mole is what it’s going to be. They adjust. Many, but not all, accept that cognitive ability is the root cause of this learning and forgetting (some blame poverty, still others can’t figure it out and don’t try). They try to find a path from the kids’ current knowledge to the demands of the course at hand, and the best ones try to find a way to craft the teaching so that the kids remember a few core ideas.

On the other hand, these teachers are clearly “lowering expectations” for their students.

Which is the best approach? Well, I’m an accepter. Not that I was ever particularly naive, but despite my realism, I was caught off-guard by just how much low ability students can forget. But as I’ve said before, that’s the challenge I see in teaching.

I could go into more on this, but this post is long enough. Besides, I don’t want to lose sight of the opening story and the pre-algebra teacher’s mockery of the entire point of the exercise. Of course they were teaching integer operations and fractions. Of course they were doing their best to impart an understanding of exponents and negatives. They didn’t need the list. They knew their job.

Teachers know something that educational policy folk of all stripes seem incapable of recognizing: it’s the students, not the teachers. They have been taught. And why they don’t remember is an issue we really should start to treat as a key piece of the puzzle.


Test Pattern

The polynomial quiz results were fantastic. Over half the class got a perfect score; no one outright failed–that is, all of the students did at least one of the four problems correctly.

Which brings up something that I’ve been bothered by for a while: around a dozen of my algebra II students are nailing the quizzes and failing the tests, or close to it.

All of the students showing this pattern are hard workers. Some have shown strong math skills; a few of them struggle but patiently work things out. I know they are discouraged by their low test scores, and I’ve been encouraging, but puzzled.

Certainly, I expect some fall-off between quizzes and tests. My quizzes are directly on point with no surprises. I give problems exactly like the ones we’ve been working in class. My tests slant off sideways and crisscross (my geometry students constantly whine about this). Some of the strugglers might get flummoxed when the problems don’t appear in exactly the same form, sure. But others of these students are well beyond that. They work the toughest problems of the day with minimal guidance; when they have questions, they are logical and structured.

I don’t know what’s going on, but since all the students in question did excellent work on the last quiz, I decided it was time to step up to the problem. Rather than try to set up time with them individually, I just made an announcement in class when I returned the quiz.

“If you are a student who is looking down at an A on this quiz, but got a C or worse on the last test, I want you to know that I’ve noticed this pattern–great on quizzes, near-disaster on tests. So here’s the deal: I will be adjusting your grade on the next progress report. It’s now clear to me that something about the test is causing you problems, not the math itself.”

“BUT. You must schedule time to come in after our next test and work quietly, either at lunch or after school, so I can watch you and see what’s going on when you take tests. I’ve invited everyone to do that after every test, but for you guys, it’s mandatory if you want a grade adjustment. You can’t keep going through life tanking important tests when it’s clear you know more.”

In all three classes, I scanned the room as I said this, and every one of those dozen students looked up in hope and relief.

Which makes me a bit sad. It’s not like they haven’t had this option all along. Why don’t they take advantage of it? Why wait until I mandate it?

But it also reminds me that no matter how many times I think I’ve made it clear that my door is open, help is here, even on tests—I haven’t said it enough.


Discovery Doesn’t Work

I had trouble in ed school because (well, at least in my view of it) I openly disdained the primary tenets of progressive education. I am pro-tracking, anti-constructivist, and pro-testing, all of which put me at odds with progressives. Here is the irony: I mention often that I am a squishy teacher (squishy=touchy feely). I am not just squishy for a math teacher, I’m the squishiest damn math teacher from my cohort at the elite, relatively progressive ed school that made my life very difficult. My supervisor, who knew me first as a student in a curriculum class, was genuinely shocked to learn that I didn’t talk at my kids in lecture form for 45 minutes or more, given my oft-expressed disagreement with discovery. Even my lectures are more classroom back and forth than me yammering for minutes on end. (In fact, my teaching style did much to save me at ed school, but that’s a different story.)

Here is what I mean by squishy: My kids sit in groups, not rows. When I set them to practicing, which is usually 20-35 minutes of class, they are allowed to work independently, in pairs, or as a group of four. I often use manipulatives to demonstrate important math facts. My explanations are, god help me, “accessible”. I don’t just identify the opposite, adjacent, and hypotenuse and then lay out the ratios. No, I’ve been mentioning opposite, adjacent and hypotenuse for weeks, whenever I talked about special rights. I introduce trig by drawing a line with a rise of 4, a run of 3, and demonstrate how every right triangle made in which one leg is 3 and the other 4 (that is, has a “slope” of 4/3) must have the same angle forming it. I spend a great deal of time trying to think of a way to help kids file away knowledge under images, concepts, pictures, anything that will help them access the right method for the problem or subject at hand. (For more info, see How I Teach and The Virtues of Last Minute Planning.)
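That slope idea can be checked numerically, for any skeptical readers. This is my own illustration, not anything from the lesson: every right triangle with legs in a 3:4 ratio produces the same angle, whatever the scale.

```python
import math

# Scale a 3-4 right triangle up and down; the base angle never changes,
# because the angle depends only on the ratio of rise to run.
angles = []
for scale in (1, 2.5, 10, 100):
    run, rise = 3 * scale, 4 * scale
    angles.append(math.degrees(math.atan2(rise, run)))

# Every scaled copy gives the same angle (about 53.13 degrees).
print(angles)
```

The same constancy-of-ratio is, of course, exactly what makes the trig ratios well-defined in the first place.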

However, I am not in any sense a constructivist as progressive educators use it. I use discovery as illustration, not learning method. I don’t let kids puzzle over a situation and see if they can “construct” meaning. I explain, give specific instructions, and by god, my classroom is teacher centered. I am the sage on stage, baby. And that’s why I got in trouble in ed school, despite my highly accessible, extremely concept-oriented teaching style; I routinely argued against constructivist philosophy, and emphasized the importance of telling kids what to do.

Anyway. I was incredibly excited to read an article that openly states the obvious: Putting Students on the Path to Learning: The Case for Guided Instruction. This article is just so dead on right. To pick one of many great excerpts–click to enlarge, but why can’t I copy text from pdf files any more?:

Yes. Low ability kids like discovery; it is less work for them, yet they feel they are doing something important—but in fact, they aren’t learning very much. High ability kids tend to be “for chrissake, give me the algorithm”, when they would be better off puzzling through the math for themselves.

The article talks about the importance of worked-out examples. I read the article this morning and had a worked out example on the board the same day—step by step factoring of a quadratic. Here’s the weird thing: the kids who need the help with factoring had to be prompted to use the example, but the kids who got factoring were clamoring for worked examples in the area they had trouble with.

This would be a great thing for notebooks. But how do you get the kids who need help to keep the notebooks?

Great article, that changed my teaching immediately. How often does that happen?


Teaching Polynomials

When I met with my new supervisor in ed school (the second one), I told her that I didn’t feel like I was introducing topics well. She was extremely supportive, and it became one of our favorite discussion points. How do you introduce a math topic? And in my two plus years of teaching, I think I’ve become good at it–particularly in first year algebra, which I’ve taught more than any other. When I taught CPM Geometry, I hated everything about it except the way it introduced tangents (as a slope). I often spend several days mulling a good intro, and have been known to toss in a few days of review just to get my story right.

The story is usually a problem the kids can understand—and understand that they don’t have the tools to solve it. Or sometimes it’s a parallel. Either way, I try to give them an image, a reference, a bucket. Maybe it will help them trigger memories, because retention is a huge issue in teaching math.

It’s weird, the quick descriptors that teachers use. When I say I’m “teaching polynomials” in Algebra II, any math teacher knows I’m teaching everything but quadratics in their binomial/trinomial form, since quadratics is its own unit. Teaching polynomials means the kids are learning polynomial multiplication, polynomial division, synthetic division, maybe some binomial expansion, and certainly some brute force factoring. In general, the polynomials unit in Algebra II doesn’t have any obvious purpose other than to prepare the kids for pre-calculus. It’s just “let’s learn how to manipulate polynomials to no immediate purpose”. And that makes the intro tough.

Over half my students will not go on to pre-calc next year. Some will be taking Algebra II again, either with or without Trig. Others will be going into remedial college classes. So even leaving aside the intro, how do I help the kids make sense of this? I don’t care if they can expound on function notation or binomial expansion, but I do want to be sure they know the difference between multiplying two trinomials vs. two binomials, and when to factor. And for god’s sakes, I want them to know that they can’t “cancel out” the x² term when presented with a rational expression.

I started the unit by explaining the preparatory nature of some of this—that they won’t really see how it’s used until pre-calc, that they just need to recognize these equations and know what to do. Multiplication, they’ve been doing for a while. Factoring, too. But then there’s division proper, which most of them won’t use again. I thought about not covering it, until I realized that I could use the lessons as a way to get them to think about division and factoring.

And so, the introduction:

I don’t present this all at once. I start with the first fraction, then ask what could we do. Someone will reliably say “reduce it”, and so we’ll reduce it. I then introduce the term “relatively prime”.

So then, I say, we do the same thing with variables and fractions, and we go through this step by step:

This isn’t the actual whiteboard example; I just wrote it up and took a picture. But it’s the idea.
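For anyone who can’t see the picture, the numeric half of that parallel can be sketched with Python’s standard library (my illustration, not the actual board work): reduce a fraction by dividing out the greatest common factor, and when numerator and denominator share no factor, they’re relatively prime.

```python
from fractions import Fraction
from math import gcd

# Reducing a numeric fraction: divide out the greatest common factor.
num, den = 24, 36
g = gcd(num, den)              # greatest common factor: 12
reduced = (num // g, den // g)

# "Relatively prime": the reduced pair shares no common factor but 1.
assert gcd(*reduced) == 1

# Fraction performs the same reduction automatically.
print(reduced, Fraction(24, 36))
```

The whiteboard version then repeats the identical move with variable expressions, which is the whole point of the parallel.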

And it worked. It gave the kids a great point of reference and most of them were able to divide a simple polynomial on a quiz a few days later.

I’m trying to build on that now. Thus far, I’ve taught them two forms of division and factoring. Ideally, they should be able to identify when to factor, when to divide and when, please, synthetic substitution is a good idea. So I went through the pros and cons of each:

  1. Factoring: the default. Pros: It’s fun to cancel out the common terms. In a test situation, you can pretty much assume that the terms will factor. Cons: Only works with first and second degree polynomials. After that, it’s brute force. Limited: if you can’t factor, you can’t. Nothing to tweak.
  2. Division: you don’t have to use it much, unless asked. Pros: It’s the easiest to relate to–works just like number division (most know this already. Most). It’s extremely flexible, works in every situation. Cons: you don’t have to use it much.
  3. Synthetic division: you never really have to use it. Pros: Incredibly useful for evaluating terms, which is what we’ve been using it for. Quickest method to find factors in higher order polynomials. Cons: It’s the most difficult to learn. Unless you use it often, it’s easy to forget. You have to know what it means in order to find it meaningful.

So they all copied it down. Did they get it? I gave them a quiz—a pop quiz, no less.

And with one exception, they did pretty well.

Question 2. It got to them. First, they saw the division sign. So they divide, right? No! Don’t they remember? “Division is…..” I prompt. “Oh, yeah, you flip it!” They flipped it. But then they multiplied, which made sense because they were being tested on that too, right?

Argggghhhh.

Still, it was a good quiz. Once I reminded them, they worked it correctly. Few misconceptions. I’ll need another week to beat in the triggers to tell them what to do when. But it’s working.

I think.


Meanwhile, back in Geometry….

As much as possible, I want the students to know a few unifying ideas about triangles. That’s why I dumped medians, orthocenters, and a lot of the relatively obscure triangle facts—not because they aren’t useful, but because most of my students will never use them again and I want them to have plenty of time working with the geometry they’ll need forever. My top students get practice thinking through challenging problems but I don’t throw a lot of random facts into this mix.

This has a lot to do with my own preferences. I like to remember the big things and look up the little, and I don’t personally see much value in making students jump through hoops to remember obscure math facts. (History is a different matter.) I do want them to remember the important math facts (Pythagorean theorem, special right ratios, area formulas, and so on), so when I say, underscored five times, remember this, my students won’t treat it as one of a random flood. That doesn’t mean they’ll all remember it, alas. A distressing number of my students still look at me perplexedly when I ask them the formula for the area of a triangle. It is to weep. But that’s all the more reason to keep the memorizing to a minimum.

I taught special right triangles as a ratio, rather than a pattern. This worked very well for most students, and it also provided for continuity when we moved into similarity. Same process, but now the ratio isn’t fixed. (And this should make the move to trigonometry, which is also based on ratios, part of the same continuity).

I gave them a test after special rights and then a test after similarity. I tested much of the same material in both. (After break, I’m going to give them a test on other things from first semester, to see how their retention is.) Here are the tests and the correct percentages for each.


I just noticed that I forgot to fix the typo in question 14 before I made this image. The kids were given corrections before they started the test. (My tests often have minor typos, as I create them from scratch each time. My kids know this and are encouraged to check with me if they think they find something wrong. They are usually incorrect—that is, there was no typo—but I’m confident that no one is sitting silently perplexed for the wrong reasons.)

Overall comments:

  • Questions 1 and 2: see what I mean about area? I was actually pleased with those numbers. They are still forgetting to take half.
  • Questions 3 and 4: very good understanding of perimeter and algebra (I don’t think my diagonals are accurate on that pentagon).
  • Question 5: A straightforward test of Triangle Inequality, something I’ve emphasized. It’s frequently on the SAT and ACT and besides, it’s a useful limit to remember. And they did!
  • Questions 6 and 7: The students had to know that triangles have 180 degrees AND that a “not acute” triangle must have one angle of 90 degrees or higher. Another 20% knew that triangles have 180 degrees, but didn’t catch the second criterion.
  • Questions 8, 9, 10–straightforward special right questions.
  • Questions 11 and 12 are a case of unintended consequences. I thought 11 would be the easy one, as we’d just reviewed congruent triangles and I’d reminded them (again) of SSA (see the bad word? Yeah. Not a congruence shortcut). Most chose the SAS answer; the students who answered correctly were all top students. While question 12 didn’t have great results, more students throughout the ability spectrum answered it correctly.
  • Questions 13-15—more special right questions. (see note above about the glitch on 14).

The last five were probably too difficult; I wanted to see how my top students would do. I ended up dropping the last five and grading students on the first 20.

Average curved score: 68% (51% uncurved)

This was a much more solid performance, I thought. I’d increased the difficulty, and the average was just a little higher (70% curved, 54% uncurved).

In both tests, the students had a terrible time dealing with radicals and the Pythagorean Theorem, which is what led to a review. But they showed increased mastery of special rights, and their understanding of similar triangles was better than I’d been hoping for.

These percentages are the actual answers given. I went through the D and F tests and reviewed questions 15, 17, and 20. In all three cases, I’d put a spin on the question. So if a D or F student saw that the angle was 60 degrees but didn’t realize I was asking about the unmarked part, did the proportion correctly in question 17 but forgot to convert to feet, or solved for x and then forgot to plug it in, I gave them credit for that work.

Any geometry teacher knows that I’m going veeeeeery slowly. I have two major topics to cover before state tests—trig and circles—and two minor ones, volume/solids and regular polygons. At the same time, I’m definitely not giving my students an easy time. I’m working them hard and I think they’re doing challenging math. In fact, that’s the most consistent beef my students have about me—they think my tests are “weird”. But they also appreciate my curving, and they also, I think, accept my assurances that this is what actual standardized tests look like, so they may as well man up.


Every so often….

I got into teaching to work with struggling kids. I’d enjoy working with an entire class of motivated and able kids, but it would also come as a shock. Most of my time I spend pushing kids up the hill and praying they don’t roll backwards.

But I do have exceptionally bright kids in my classes, too. My geometry classes in particular are a joy, since even the low ability, low incentive kids know they’ve finally escaped algebra and aren’t eager to repeat that experience by repeating geometry. So it’s a well-behaved group with decent motivation. On the other hand, 80% of my Algebra II classes scored Far Below Basic or Below Basic in Algebra two years earlier and a few of them are much harder to keep in line, even though the bulk of them are well-behaved and want to learn, not out of any love of math, but because they understand, thanks in part to my frequent rants, that they will be taking a placement test in a year or two and the only good thing about my class is that it’s free.

But I do have some students who chose an easier course because they are athletes, or because they struggled in geometry, or just because someone somewhere made an odd placement decision, and who are in fact quite strong in math—and of course, there are always a few students who finally “get it” and who actually start to grasp the material (this is extremely rare, and one major reason why value added is a problem in high school teacher evaluations).

Anyway, on Thursday, I had two interactions with bright students that stay with me because of their infrequency. I’m not complaining. It was just fun.

Incident #1

A solid geometry student had done horribly on her last test. She doesn’t often engage with me and remains a bit distant, but after I turned back the tests I checked in with her.

“I do not get special right triangles. I don’t understand how they work. I get similar triangles, but I can’t ever remember the ratios and I can’t see how it works.”

I drew a right triangle on the board with the two legs labelled “x” and the hypotenuse labelled “hyp”. “What kind of triangle is this?”

“Isosceles right.”

“Okay. Create an equation to solve for the hypotenuse using the Pythagorean Theorem. I’ll be back in a bit.”

I came back, and she was working through it, a classmate tossing in advice and argument.

“No, it’s square root of two! You took the square root of both sides!”

"Oh, that's right." And she had it solved, the hypotenuse equal to x sqrt 2.

I pointed to the class notes which were on the board. The isosceles right triangle was labelled x, x, x sqrt 2.

“Oh! I get it.”

I then sketched out a 30-60-90, asked her what it was and she correctly said it was a half of an equilateral triangle. I told her to label the sides, making sure she put the x opposite the 30. Then I told her to solve for the second leg. When I came back, she’d finished that up and was in a great mood.

Had I taught the Pythagorean method before? Yes. Several times. Sometimes they just aren’t ready to get it until they’re ready to get it. But only a strong student can grasp the algebra of the Pythagorean proof and see how that knowledge can help her remember the ratios.
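
The derivation she worked through is easy to sanity-check. Here's a quick sketch (mine, not a classroom artifact) confirming both sets of ratios with nothing but the Pythagorean Theorem:

```python
import math

# 45-45-90: legs x and x, so x^2 + x^2 = hyp^2, giving hyp = x * sqrt(2)
x = 3.0
hyp = math.sqrt(x**2 + x**2)
assert math.isclose(hyp, x * math.sqrt(2))

# 30-60-90: half an equilateral triangle of side 2x, so the hypotenuse is 2x
# and the short leg (opposite the 30) is x; the long leg is then x * sqrt(3)
long_leg = math.sqrt((2 * x)**2 - x**2)
assert math.isclose(long_leg, x * math.sqrt(3))
```

The point of having her derive it, rather than memorize x, x, x sqrt 2, is exactly what the assertions show: the ratios fall out of the theorem she already knows.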

Incident 2

One of my strongest Algebra II students was struggling with synthetic division, which I introduced as a method of testing quadratic values (more on that later). She asked me to explain it to her again—not just the how, but the why and the what.

I began from the top and ran through it all. When she grasped that the division process revealed not only the quotient but that the remainder was equivalent to evaluating the function at the divisor value, she said "Wow. You might almost think that was planned."

I laughed in unexpected pleasure. I am not a believer in God, not a mathematician, and no proponent of the "math is everywhere, math is beauty" propaganda that true math lovers preach. But the Remainder Theorem, like the Fundamental Theorems of both Algebra and Calculus, is indeed enough to make even me wonder if there's some Grand Design. That a student of mine should reach that conclusion after a largely utilitarian but comprehensive explanation from yours truly was an unlooked-for joy.
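
For anyone rusty on the mechanics: synthetic division by (x − c) produces the quotient's coefficients and a remainder, and the Remainder Theorem says that remainder equals f(c). A minimal sketch, my own illustration rather than the classroom presentation:

```python
def synthetic_division(coeffs, c):
    """Divide a polynomial (coefficients in descending order) by (x - c).
    Returns (quotient_coeffs, remainder)."""
    row = [coeffs[0]]
    for a in coeffs[1:]:
        row.append(a + c * row[-1])      # bring down, multiply, add
    return row[:-1], row[-1]

# f(x) = x^2 - 5x + 6, divided by (x - 2)
quotient, remainder = synthetic_division([1, -5, 6], 2)
f = lambda x: x**2 - 5 * x + 6
assert remainder == f(2)                 # the Remainder Theorem: remainder = f(c)
```

Here the quotient comes out as [1, -3], i.e. x − 3, and the remainder is 0, which is exactly f(2).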

I was telling this to a colleague, and he reminded me of the famous quote: “God exists because Arithmetic is consistent. The Devil exists because we can’t prove it.”


Modeling with Quadratics

After my success (I hope) with linear equations, I started a unit doing the same thing with quadratics.

Days 1-3:

“A triangle’s height is three feet longer than its base. Create a table linking the height to the area. Graph.”

“A rectangle’s length is twice as long as its width. Create a table linking the width to its area.”

“A rectangle has sides of X+3 and X-2. Create a table linking X to the rectangle’s area.”

Just as with linear equations, the students really improved at generating values. They also, I think, quickly grasped that generating data for quadratics is considerably more complicated than for linear equations. More than one pointed out to me that they couldn't just "add three each time" as they could with linear equations.

I taught them how to break it down into parts and assign each part to a column. For example, the first triangle problem:

| Base | Height | Base × Height | Half BH (Area) |
|------|--------|---------------|----------------|
| 1    | 4      | 4             | 2              |
| 2    | 5      | 10            | 5              |

The stronger students could see how this led to the equation; even the weaker students could see that each equation had steps, and they started to get suspicious if it got too easy. One struggling student called me over to tell me he must be doing something wrong because “look, it’s going up by the same amount”. He was linking length to width, rather than length to area.
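
The column-by-column breakdown is easy to mimic in code. A hypothetical sketch of the triangle table above, with the height defined as base + 3:

```python
# Build the table column by column, just like the students do on paper:
# base -> height (base + 3) -> base * height -> half of that (the area).
rows = []
for base in range(1, 6):
    height = base + 3
    bh = base * height
    area = bh / 2
    rows.append((base, height, bh, area))

print("Base  Height  B*H  Area")
for base, height, bh, area in rows:
    print(f"{base:>4}  {height:>6}  {bh:>3}  {area:>4}")
```

The first two rows reproduce the table above: (1, 4, 4, 2) and (2, 5, 10, 5). And the second differences in the area column are constant, which is the quadratic analogue of "add three each time."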

Even if they don’t get much stronger in working with quadratics generally, this exercise clearly helped them gain competence at working through a problem. Next step: generating values quickly from an equation in standard form. Hello, synthetic substitution.


Modeling Linear Equations, Part 2

In Modeling Linear Equations, I described the first weeks of my effort to give my Algebra II students a more (lord save me) organic understanding of linear equations. These students have been through algebra I twice (8th and 9th grade), and then I taught them linear equations for the better part of a month last semester. Yet before this month, none of them could quickly generate a table of values for a linear equation in any form (slope intercept, standard form, or a verbal model). They did know how to read a slope from a graph, for the most part, but weren’t able to find an equation from a table. They didn’t understand how a graph of a line was related to a verbal model—what would the slope be, a starting price or a monthly rate? What sort of situations would have a meaningful x-intercept?
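
As a sketch of what "generate a table of values in any form" means, here are the three forms from the paragraph above in a few lines of Python. The specific equations and prices are my own invented examples, not the classroom problems:

```python
# Slope-intercept form: y = 2x + 5 (say, a $5 start-up fee plus $2 per unit)
slope_intercept = [(x, 2 * x + 5) for x in range(5)]

# Standard form: 3x + 2y = 12, solved for y before tabulating
standard = [(x, (12 - 3 * x) / 2) for x in range(5)]

# Verbal model: power bars at $2, Gatorade at $3, $20 to spend:
# 2b + 3g = 20, tabulated as (bars, gatorade) pairs
verbal = [(b, (20 - 2 * b) / 3) for b in range(1, 11, 3)]
```

A meaningful x-intercept shows up in the verbal model: (10, 0) is "spend all $20 on power bars," which is the kind of interpretation the students couldn't yet make.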

The assessment confirmed my hunch that I haven't been wasting my time. I tried to focus on problems they could solve in multiple ways—up to and including plugging in the answers. I wanted them to be able to approach a problem "as if it were their money", as I kept telling them when they were figuring out how many power bars and Gatorades they could buy for $20.

Here’s the assessment, with the percentage of 83 students who answered correctly (click to enlarge).

Comments:

  • Questions 1, 3, 8, and 10 were at or just above the “random guess” percentage. Everyone screwed up the first question, flipping rise and run (answering -2). 3, 8, and 10, however, were answered correctly by students who received a C or higher.
  • Question 11 makes me want to beat my head with a student whiteboard. Only 62% of the class knew the slope of y=-10x + 7? Really? Some of the students who got it wrong then went on to accurately identify the slope from a table and pick the right equation in slope intercept form. It is to weep. But anyway, my guess is that 20% of the kids who got it wrong actually know it, leaving 20% who still aren't sure. And that's bad enough.
  • Very proud am I of the results for question 13, since we hadn’t done anything like that in class. Apart from the “read carefully” hint, I gave no assistance. Of course, I didn’t include the midpoint for AM, which would have cut the success rate in half. But it still shows the students are thinking and not giving up.
  • While the system of inequalities questions are barely at 50%, that’s a huge improvement over the semester final.
  • The students had trouble with question 19. This suggests that many of them are still just plugging the values into the equation rather than reading the slope from the table, since the slope is a fraction—and at least a third of the class still can’t multiply fractions.

I curve my multiple choice questions on a 15-point scale, so 85 and up is an A, 70-84 a B, down to 40-54 a D. A student could pass with a D- by answering 8 questions; I think getting 40% of a test that would not (for those students) include any gimmes is worth a low passing grade. On that scale, the average score was 70%—and that's a first. Every other assessment, including the two on linear equations last semester, had average scores in the D range.
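
For concreteness, that curve can be written as a small function. This is a sketch; the post only states the A, B, and D bands, so the 55-69 C band is my inference from the pattern:

```python
def curved_grade(pct):
    """The post's 15-point curve: 85+ is an A, 70-84 a B,
    55-69 a C (inferred, not stated in the post), 40-54 a D, below 40 an F."""
    for cutoff, letter in [(85, "A"), (70, "B"), (55, "C"), (40, "D")]:
        if pct >= cutoff:
            return letter
    return "F"
```

So the class average of 70% lands at the bottom of the B band, which is what makes it a first.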

Only two As and four Bs. Many students who usually nail Bs got Cs, while the students who usually failed also got Cs or solid Ds. This too makes me think I’m on the right track. It tells me that some of the B students are used to memorizing methods—and since I didn’t give them one clear-cut method, they had a bit of trouble. The D and F students, who couldn’t memorize or even really understand the methods, were really able to benefit from the instruction and had solid results. (For many students, I consider a passing grade of D a great achievement, and tell them so.)

Several of the B students who did poorly came up to talk to me about it, and that, too, was revealing. Most of them realized that this new approach was exposing a weakness on their part and, instead of complaining, talked about their difficulties and asked how they could improve.

We're doing quadratics in the same way—just spent three days learning how to create tables of values from descriptions. For some reason, the only quadratic word problems I can think of involve geometry—but then, most of my kids need four or five seconds to remember the area formula for a triangle, so the review is win-win.


Grading Tests

It kills me to say this, but any honest description of my grading would have to include the word “holistic”.

This tendency is getting worse. My normal method for a quiz: I assign points beforehand, weighting the important problems heavily, and then grade the tests. I do not curve, but if I discover all students really tanked an important problem, I go back and re-weight, with a growl and a sigh.

Today, I was grading the data modelling quizzes I described in an earlier post, and just didn’t feel like assigning points.

Here’s the quiz (Click to enlarge):

Yes, yes, some of you will say "But this is algebra I material! Pre-algebra, in fact!" Newsflash: many, many students still don't understand this. So get over it. The students had to create a table of values, a graph, and a linear equation for each of four word models, and then tables of values and graphs for four given equations. I included one more difficult equation (a difference equalling a constant).

I am usually pretty good at timing tests–I’d say 1 out of every 10 tests, I am genuinely surprised when my students don’t finish. In this case, I was certain that some students wouldn’t finish, but I was interested in fluency. How many students would be able to finish the whole thing? But even so, I would have been better off with three questions in each section.

Anyway–I wasn’t really interested in finely tuned grades here. So I created four categories before looking at the student tests:

A–finished 6 of 8 problems accurately or with minor errors. Identified the equations and came up with reasonable word models in most OR completed and graphed the difference equation.

B–did all of one side correctly and clearly didn’t finish the second half (I’d given them the option to come in at lunch or after school to finish), or did parts of both sides correctly.

C– To get a C, students had to have done 2-3 problems correctly in full (table of values, graph) OR done one part of several questions correctly (e.g. table of values done for most problems, no graph).

D/F–Very little completed, with a range of 1-2 problems done somewhat correctly to clearly had no clue.

So then I reviewed the tests and put them into those categories without any markings. I got a nice heap of Bs and Cs, more Ds/Fs than I’d like, but still within reason (only 2-3 absolutely no clue), and about 10 As. Tonight I’ll go through them and point out errors.

Points, schmoints.