Monthly Archives: January 2012

Pushing Remediation Down

Joanne Jacobs reports that more public universities are pushing remedial students down to community colleges, which in turn are pushing their functionally illiterate students to adult ed.

It’s about time.

Remediation is one of the great unreported education stories. As our educational policy pushes more and more students into academically respectable college transcripts, their actual skills are declining. I won’t say it’s all Jay Mathews’ fault, but his moronic Challenge Index didn’t help.

In high schools all across the country, students who don’t understand algebra are taking trigonometry or precalc, students who can’t read at a freshman level are assigned Antigone, and students who don’t know whether the March on Washington occurred before or after WWII are being encouraged to “think and write critically” about civil rights in America. It is to laugh.

Most students who can’t score 500 or higher on every SAT section (22 on the ACT) were not capable of college prep courses in high school and thus really aren’t ready for college. Logically, universities should impose a cutoff, with exemptions for students who demonstrate capability in other ways (AP tests, for example). But only 20% of African Americans and 27% of Hispanics score higher than 500 on any section of the SAT. You can see the problem.

More on this later.


The Virtue of Last Minute Planning–Geometry

Teacher education, whether traditional or TFA, places great emphasis on the essential importance of planning. I am not overstating when I say that many ed school instructors think that planning is teaching. Ed school instructors–and teachers as a group–are planners, the type who get nervous if they don’t know what they are teaching in two weeks, let alone three days out.

Me, I often don’t know what I’m teaching the next morning.

Like today, for instance. I’m in the middle of special right triangles. I introduced the isosceles right triangle before Christmas. Penetration was weak; the ones who were able to work the problems correctly still weren’t quite sure about it.

I always teach the pattern–x, x, x√2–and show the students how easy it is just to use the pattern to assign values. Know one, know all. I derive the relationship with the class, I explain that it’s a ratio, and to me it’s just obvious from that point. But it hadn’t been obvious in my class two years ago, and it was clearly taking only a tenuous hold for most of my students. So I mulled it over the holidays, and suddenly wondered why I didn’t just teach it as a proportion. After all, they need to review cross-multiplication. And then it occurred to me that special right triangles are, in fact, similar triangles, even though the curriculum separates them out and rarely makes the connection. So why not introduce proportion now? I checked around, and the other geometry teachers don’t teach it this way, but it was worth a shot. Oh, by the way–this part was planning. I didn’t write any of it down, but it was all in my head.

After reviewing ratios, proportions, and cross-multiplication for a day (my top students didn’t review and did far more complicated problems), I re-introduced the isosceles right triangle as a ratio and had them solve problems using the ratio. Much better penetration, although it’s still early. But even my weakest students were able to set up problems and cross-multiply, even if the last step of solving for x still left them a bit confused.
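A worked example of the proportion setup, with numbers of my own choosing rather than from any particular worksheet: the sides are in the ratio 1 : 1 : √2, so one cross-multiplication finishes the problem.

```latex
% Leg = 5, hypotenuse h unknown:
\[
  \frac{\text{leg}}{\text{hypotenuse}} = \frac{1}{\sqrt{2}}
  \quad\Longrightarrow\quad
  \frac{5}{h} = \frac{1}{\sqrt{2}}
  \quad\Longrightarrow\quad
  h = 5\sqrt{2}.
\]
% Hypotenuse = 8, leg x unknown:
\[
  \frac{x}{8} = \frac{1}{\sqrt{2}}
  \quad\Longrightarrow\quad
  x\sqrt{2} = 8
  \quad\Longrightarrow\quad
  x = \frac{8}{\sqrt{2}} = 4\sqrt{2}.
\]
```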

But through all this planning, in the back of my mind, for three weeks, 30-60-90 triangles were a black box. I’d been doing a lot of lecturing by that point, and I needed to break the routine. 30-60-90 triangles are more complicated than isosceles rights, too. So things needed shaking up. But how?

The idea came to me at 6:30 am. Fortunately, it was a late start day, so after the staff meeting and the coffee run, I still had 10 minutes to think through how to write the instructions on the front board. And two of my students came in early, so they generously copied the instructions on the back two boards while I rounded up rulers and protractors and scratch paper.

Instructions:

  1. Your group will construct (create) 4 equilateral triangles, which are triangles with _____ equal sides and angle measures of _____. (I then asked the students to fill in the blanks). You will need four triangles: 2″, 3″, 4″, and 5″. Each person creates one triangle.
  2. Draw the base side with your assigned length. (In a different color, a line example)
  3. Use the protractor to construct a 60 degree angle. (In a different color, the 60 degree angle mark with an x.)
  4. Use the ruler to draw the second side of the same length from one end of your base line through (or towards) the 60 degree mark. (In a different color, two sides of the triangle are now complete)
  5. Connect the open ends of the two sides with a third side. Confirm (make sure) that the third side is also the same length.

This worked really well. The kids had trouble with the length of the second side in some cases (connecting it with the angle mark rather than using the assigned length), and as always, some of them refused to start until I came around and issued a reassuring personal invitation. But it was a great review of equilateral triangles, let them do a little bit of construction (which I don’t emphasize at all) and reinforced the important idea of SAS congruency and the rigidity of triangles. Many students really “got” that there was only one possible length for the third side, and made that connection to earlier lessons.

What, you may ask, does this have to do with 30-60-90 triangles? Well, almost all equilateral triangle questions end up using 30-60-90 triangles, so it’s helpful to let them see this relationship. The last thing I did was ask the students to visualize what would happen if they folded their triangles in half. What would they have? Everyone saw that it would be a right triangle, created in the middle (we will review the “isosceles altitudes bisect” theorem tomorrow, briefly). They realized that one angle wouldn’t change, and one would be cut in half. “Oh!” said one student, pointing to the board where I’d written the two special rights. “30-60-90!” and the class all made agreement sounds.

And tomorrow, they will have new instructions.

  1. Get out your triangles from yesterday. (Or hang, draw, and quarter the person who was supposed to keep them safe and then ask me politely for extras.)
  2. Create a table for your four triangles, with length columns for Short Leg, Medium Leg, and Hypotenuse (Long leg? Hmm.) and one column where you will find the ratio of the medium leg to the short leg.
  3. Using a ruler, find the side values and add them to the table.
  4. Using a calculator, find the ratio of the medium leg to the short leg by actually dividing the medium leg by the short leg. (I will review the fact that all fraction statements are division problems, something I do frequently).
  5. When you’ve finished, put your values on the board. (where I have tables for all the different triangles).

This will give me a chance to discuss measurement error. Students who built the same size triangle will see that they got the same side values, which they’d expect, and everyone, regardless of triangle size, will see that they got very similar ratios.
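To give a sense of what the board tables should show, here’s the ratio calculation with made-up measurements (hypothetical numbers, and I’m assuming the students are measuring the folded half-triangles from the end of day 1):

```python
# Hypothetical measured half-triangles: (short leg, medium leg, hypotenuse)
# in inches. The hypotenuse is the original equilateral side (2", 3", 4", 5");
# the short leg is half of it; the medium leg is whatever the ruler says.
measurements = [
    (1.0, 1.7, 2.0),
    (1.5, 2.6, 3.0),
    (2.0, 3.5, 4.0),
    (2.5, 4.3, 5.0),
]

for short, medium, hyp in measurements:
    ratio = medium / short          # the column the students fill in
    print(f"short={short}  medium={medium}  hyp={hyp}  medium/short={ratio:.2f}")

# Every ratio lands near 1.73 -- i.e., sqrt(3) -- despite ruler error,
# which is the point of the measurement-error discussion.
```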

And THEN, after that, I will go through the Pythagorean Theorem for this kind of triangle. They will see that the short side and the hypotenuse have a ratio of 1:2, and we can use that to generalize (which we’ve already done for special rights).
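For reference, the generalization the class is headed toward drops straight out of the Pythagorean Theorem once the 1:2 short-side-to-hypotenuse ratio is in hand:

```latex
% Folded equilateral triangle: short leg x, hypotenuse 2x, medium leg m.
\[
  x^2 + m^2 = (2x)^2
  \;\Longrightarrow\;
  m^2 = 3x^2
  \;\Longrightarrow\;
  m = x\sqrt{3},
\]
% so the sides are in the ratio x : x*sqrt(3) : 2x, and the measured
% medium-to-short ratio should cluster around sqrt(3), about 1.732.
```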

I never would have come up with the “bridge” of constructing equilateral triangles and turning the ratio into a discovery class if I hadn’t let it percolate for three weeks. You can’t force these things.


Teacher Quality Pseudofacts, Part I

Lurking right behind the teacher pay debate lies the teacher quality debate–well, maybe not so much lurking behind as jumping right out in front of teacher pay and telling it to back off and get lost.

In the intro of the Room for Debate on teacher pay (linked above), the NY Times says,

In the private sector, people with SAT and GRE scores comparable to those of education majors earn less than teachers do. Does that mean teachers are overpaid? Or that public schools should pay more to attract top applicants who tend to go into higher-paying professions? (Emphasis mine).

I have no opinion about teacher pay….wait, that’s not true. I think teacher pay is about right. I think pensions are high, although teacher pensions aren’t nearly as egregious as those of public safety workers. But I don’t get particularly worked up about it, because after all the yelling and screaming is over, I’m pretty sure that everyone will realize that teacher pay, like Churchill’s democracy, is the worst method of teacher compensation except all the other methods we could try (I’ll get into those some other time).

I am extremely annoyed, however, by the bogus factoids the eduformers fling about in either ignorance or deception and the progressives’ determined refusal to refute this pseudodata due to their own ideological blinkers.

Jason Richwine and Andrew Biggs regurgitate the usual suspects in their teacher compensation study:

Students who indicated that education was their intended major earned a combined math and verbal score of 967, about 0.31 standard deviations below the average of 1,017, meaning the 38th percentile in a standard normal distribution. In contrast, students intending to major in engineering had average combined SAT scores of 1,118….College graduates who take the Graduate Record Examination (GRE) also indicate their intended field of study when they sit for the test. During the past academic year, students who planned to study elementary or secondary education in graduate school scored 0.13 standard deviations below average on the GRE. If all education-related fields are counted—including special education, early childhood education, and curriculum development—the difference was 0.35 standard deviations.

In other words, the charge is (1) undergraduate education majors have very low SAT averages and (2) graduate education students have low GRE averages. An undergraduate education major is the primary entry point for elementary school teachers, PE teachers, and special education teachers. Secondary school teachers in academic subjects are far more likely to get a degree in their major and then get a post-graduate credential or an M.Ed.
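As an aside, the percentile conversion in the quoted passage is just the standard normal CDF; a minimal check (the SD figures are the study’s, everything else is boilerplate):

```python
# Convert "k standard deviations below the mean" to a percentile,
# assuming a roughly normal score distribution (as the study does).
from scipy.stats import norm

for label, sd_below in [
    ("education majors, SAT", 0.31),
    ("elementary/secondary ed, GRE", 0.13),
    ("all education fields, GRE", 0.35),
]:
    pct = norm.cdf(-sd_below) * 100
    print(f"{label}: {sd_below} SD below the mean -> {pct:.0f}th percentile")

# Prints roughly 38, 45, and 36 -- the first matches the study's
# "38th percentile" figure.
```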

I’m going to take point 2 first. Notice that the authors conflate elementary and secondary teachers. Everyone else does, too. Because there’s no difference in the cognitive demands of teaching kindergarten or trigonometry, first grade math or biology, fourth grade science or AP US History. None at all.

In fact, secondary teachers have much higher GRE scores than elementary school teachers. The Educational Testing Service reports the GRE scores of all graduate school candidates by broad area of specialization. Ed school candidates are broken down into secondary, elementary, curriculum, special ed, and other minor categories (the full breakdown is in the report’s data tables).

GRE mean scores for all testers in 2008-2009 were 462 for Verbal, 584 for Math.

Break the GRE scores into two categories, and you get a very different picture. Elementary/middle school teachers are dragging the average down. The elementary school teacher mean verbal score is 437, nearly 30 points below the mean for all testers. 70% of all candidates score lower than 500 on the verbal. The average math score is 520, 64 points below the mean for all testers; however, the scores are distributed close to normally throughout the score range, unlike the verbal scores. (High verbal GRE scores are extremely rare. Anything over 700 is in the top 2%; anything over 600 is, I think, top 10%.)

Secondary school teacher mean verbal score is 485, 20 points above the mean for all testers. Their average math score is 579, 5 points below the mean.

But remember, please, that secondary school teacher scores are all lumped together. English and history teachers don’t really need sterling math scores, and math and science teachers don’t need spectacular verbal scores. Remember, too, that ed schools turn out roughly four or five English and history teachers for every science or math teacher. 20% of all secondary teachers get 700 or higher on the math GRE; another 27% get 600-690.

I suppose it’s possible that all the English and history teachers are the ones knocking out 750s on the GRE math test. Or–and this is just a suggestion–secondary math and science ed school candidates have GRE scores comparable with those of other science and math grad school entrants.

If eduformers were genuinely interested in evaluating teacher quality, they’d see if ETS has any further categorization on secondary teachers. You know, just to make sure that those crack English teacher mathematicians aren’t beating out the wimpy, underqualified math teachers struggling to explain algebra.

This isn’t news. ETS reports regularly on teacher quality. In a report full of useful graphics and stats proving this point comes this informative little tidbit, repeated several times:

Academic profiles continue to be markedly different for secondary school subject matter teachers in contrast with elementary, special education, and physical education teachers. Those with secondary licenses have much stronger academic histories. (page 3)
….
The relative profile across licensing areas has remained steady. Those licensed in secondary subject areas continue to have verbal SAT scores at least as strong as those of national college graduates who took the SAT. Math SAT scores for those licensed in mathematics and science are well above those for other college graduates. Profiles are markedly different for secondary subject teachers in contrast to elementary, special education, and physical education teachers. (page 20)

These data also indicate that cohort gains in SAT scores are likely to be even more substantial than previously described, especially for secondary subject teachers. When the data are examined separately for middle-school and secondary subject test takers, the net improvements are even greater than previously presented. Figure 22 shows that the SAT-Math scores for those who took the secondary subject tests actually increased by 35 points from the earlier cohort. Secondary subject test takers licensed in English had SAT-Verbal scores that were 13 points higher than those of the earlier cohort. (page 26)

Individuals taking the middle-school tests have far less academic preparation in specific content areas than those seeking secondary subject licensure. The profile of test takers for middle-school licensure more closely resembles that of elementary generalists than of secondary subject teachers. (page 27)

…those with secondary subject licenses continue to be an academically strong group whose SAT scores and GPAs have grown stronger over time.

The report, which reviews data on 20 states and DC, stresses that elementary school teacher qualifications have improved tremendously, but I’ll get to that in part 2.

Those screaming for improved teacher qualifications have nothing in their arsenal when it comes to secondary teachers. I suspect they know that, which is why Richwine, Biggs, and everyone else conflates the scores and ignores Praxis data. But maybe they aren’t lying. Maybe they’re just ignorant.

Next up: Elementary and middle school qualifications, what they mean, what’s been done, and what hasn’t happened. Also, my speculations on why progressives don’t point out these obvious rebuttals.


Value Added–None

So the NY Times breathlessly informs us of a new study that links good math and reading teachers in elementary and middle school to all sorts of improved outcomes in students’ lives. Just look at the data:

Teachers who raise test scores in these subjects and grades improve each student’s income by $750/year, make it 1% more likely that each one will attend college, and drop the girls’ pregnancy rate by about 0.6%.

Pause.

Okay. Doesn’t that make it seem as if there isn’t all that much difference between teachers who raise scores and teachers who lower scores?

Color me unimpressed.

I found many things to quibble about. First, the bit about teenage pregnancies:

We first identify all women who claim a dependent when filing their taxes at any point before the end of the sample in tax year 2010. … We refer to this outcome as having a “teenage birth,” but note that this outcome differs from a direct measure of teenage birth in three ways. First, it does not capture teenage births to individuals who never file a tax return before 2010. Second, the mother must herself claim the child as a dependent at some point during the sample years. If the child is claimed as a dependent by the grandmother for all years of our sample, we would never identify the child. (Cite from study, page 19)

In other words, they aren’t counting teenage pregnancies; they’re counting teen mothers who managed to get it together enough to find a job and file tax returns. Yeah, that’s a big group. Because teenage moms are no more or less likely than other students to become productive workers who file tax returns. Surely we shouldn’t wonder whether teen moms are disproportionately found in the 10% of the population that had no tax returns and thus weren’t included in the study.

And then, the researchers seem in an awful hurry to fire teachers.

“The message is to fire people sooner rather than later,” Professor Friedman said.

Professor Chetty acknowledged, “Of course there are going to be mistakes — teachers who get fired who do not deserve to get fired.” But he said that using value-added scores would lead to fewer mistakes, not more.

But in fact, their study says nothing about what will happen if teachers who lower scores are fired. They, of all people, should know that.

This paragraph, from the study, just amuses me no end:

Consider a teacher whose true VA is 1 SD above the median who is contemplating leaving a school. With an annual discount rate of 5%, the parents of a classroom of average size should be willing to pool resources and pay this teacher approximately $130,000 ($4,600 per parent) in order to stay and teach their children during the next school year. In other words, families would earn an average annual rate of return of 5% if they invested $4,600 to give their child a teacher whose VA is 1 SD higher. Our empirical analysis of teacher entry and exit directly shows that retaining such a high-VA teacher would improve students’ achievement and long-term outcomes. Because the impacts of teachers are roughly proportional to income, high income households should be willing to pay more for better teachers. Parents with an annual household income of $100,000 should be willing to pay $8,000 per year for a teacher whose true VA is 1 SD higher.
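Just to unpack the quoted arithmetic (the class size is implied, not stated): the per-classroom figure is the per-parent figure times roughly 28 students.

```latex
\[
  \frac{\$130{,}000}{\$4{,}600\ \text{per parent}} \approx 28\ \text{students per classroom.}
\]
```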

Before I get to the amusing part (okay, it’s not really funny in a haha way), consider that the data comes from a large urban school district. 71% of the students are black or Hispanic, 76% are low income. So even if we assume no other problems with the assumptions and gaps, these findings apply primarily to low income black and Hispanic students and, to a lesser extent, middle and high income students who go to public school in a high-poverty urban district—not, I’m thinking, an extremely representative sample. We have no idea if we can apply these findings to suburban middle class kids, working class kids, or even poor white kids (who routinely outscore middle class blacks on many state tests, the SAT, and most NAEP report cards). How many high income families are living in an urban school district that’s 76% low income? How representative would they be of high income families in the suburbs? I’m not convinced the researchers can conclusively argue their results hold for all populations.

Really, though, this whole line of thinking just makes me laugh. Do the researchers really think that high income families are thinking of their own kids when these studies hit the news? Or are they just using this absurd investment analogy to reinforce how strongly they feel about their data?

In Achievement Gap Mania, the best education article of 2011, Rick Hess writes:

First, achievement-gap mania has signaled to the vast majority of American parents that school reform isn’t about their kids. They are now expected to support efforts to close the achievement gap simply because it’s “the right thing to do,” regardless of the implications for their own children’s education. In fact, given that only about one household in five even contains school-age children — and given that two-thirds of families with children do not live in underserved urban neighborhoods, or do not send their kids to public schools, or otherwise do not stand to benefit from the gap-closing agenda — the result is a tiny potential constituency for achievement-gap reform, made up of perhaps 6% or 7% of American households.

Because middle-class parents and suburbanites have no personal stake in the gap-closing enterprise, reforms are tolerated rather than embraced. The most recent annual Gallup poll on attitudes toward schooling reported that just 20% of respondents said “improving the nation’s lowest-performing schools” was the most important of the nation’s education challenges. Indeed, while just 18% of the public gave American schools overall an A or a B, a sizable majority thought their own elementary and middle schools deserved those high grades. The implication is that most Americans, even those with school-age children, currently see education reform as time and money spent on other people’s children.

Note to researchers: Most people reading your report are only thinking about how taxpayers can get improved results for the vast quantities of money we spend on low income students—or, at least, spend less money for the same results. They are not thinking about how this research affects their own kids because they know full well that it doesn’t.

Incidentally, I am pro-testing. But all the value-added testing research I’ve seen has been utterly pointless and ignores the reality of teaching low ability kids, and testing them with assessments that are far beyond their ability level.

And ultimately, I’m not convinced that the difference between most teachers matters all that much. On this point, at least, the new study confirms my beliefs.


Graduation Test Retakes

Just got the news of how my juniors and seniors did on their graduation test retakes. All our sophomores take the test; it’s how we are evaluated for AYP. Last year, we had an exceptional year, raising our first time pass rate from 74% to 87%. I think, but am not sure, it’s because our algebra sophomores did much better than expected; all algebra teachers had a pass rate between 50 and 60 percent. I couldn’t find any numbers on what a normal rate is, so I’m going to check again this year and see how it goes.

Here’s an interesting stat I found in running the numbers. I broke down all the students based on what math they took last year and this year–that is not as easy as it sounds. I couldn’t find Geometry repeaters easily, and that’s probably 10-15%. But then, most geometry repeaters would almost certainly be juniors and thus not testing. Then I had to find out how many algebra students took pre-algebra last year, since a Basic in pre-algebra is very different from a Basic in Algebra. Then not all students had test scores for both years–I lost about 40%. Still, it was interesting.
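For what it’s worth, the breakdown amounts to a crosstab of last year’s course against last year’s score band, restricted to students with scores in both years. A hedged sketch of the kind of thing involved (the file and column names are hypothetical, not my actual records):

```python
import pandas as pd

# Hypothetical columns: prior_course ("Pre-Algebra", "Algebra", "Geometry"),
# prior_band ("FBB", "BB", "Basic", "Proficient", "Advanced"), passed (0/1).
students = pd.read_csv("scores.csv")   # placeholder file name

# Keep only students with scores in both years (the ~60% that remained).
both_years = students.dropna(subset=["prior_band", "current_score"]).copy()

# Pass rate broken down by last year's course and last year's score band.
breakdown = both_years.pivot_table(
    index="prior_course",
    columns="prior_band",
    values="passed",      # 0/1, so the mean is the pass rate
    aggfunc="mean",
)
print(breakdown.round(2))
```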

Comparing geometry and algebra students with similar algebra state test scores reveals that geometry students have a higher pass/proficient rate, but not dramatically so, given the numbers. This is going to help me out when planning test prep next month. I’m going to push more students to shoot for proficient, while focusing tutoring help and attention on my weakest algebra scorers to give them more support just for passing.

Yeah, those pre-algebra FBB (Far Below Basic) numbers are weird, aren’t they? None of my students were among them. I checked with the teachers and they shrugged–all of them had been good students from day one. But they confirmed that they’d all taken pre-algebra the year before. Half of the pre-algebra Below Basic students are mine; they were all very weak students.

Back in November, our school had a retake for those who hadn’t yet passed it. I gave my students time to prepare, and even devoted a couple class lectures to understanding how the multiple choice tests were constructed and how they are much easier than they look. I stressed using their logic and reasoning rather than calculations and showed them how that was done. I’m pleased to say that 11 of my 18 retakers passed. Three juuuuuust missed passing. One of them has a terrible time taking tests and has skills well above most of the students who did pass. Two of the remaining failers were stopped for goofing around (sigh) and one of them, who has since left the school, doesn’t fall into any of those categories, but I’m not going to say what happened in case it identifies the student in some way.

Five of the algebra students who failed last March took the test again (two are in my class this year). Two of them passed, one just missed.

Good news overall.


How I Teach

I know the buzzwords. “I actively differentiate instruction in my classroom.”

What I do, really, is group my students by ability. Within the first week of school, I’ve used assessment tests and observation to group my kids. My desks are arranged in groups of 4, some in tight clusters and others in “L” shapes because otherwise one student would have his back to me. Top student groups go in the back, weakest in the front, and behavior problems who refuse to work go way off to the sides, ideally by themselves or with well-behaved mid-level students (where the behavior problems will often eventually work, if only out of boredom. Or they sleep.)

Most lessons are two days, occasionally three. I rarely spend just one day on any subject unless it’s a simple development of the previous day’s work, since an hour doesn’t allow enough time for the low ability kids to gain some basic comprehension and the mid-ability kids to get some practice and working memories of the subject. I hit most of the Explicit Direct Instruction components over the 2-3 day period.

On the first day, I have a combined lecture/classroom give and take on the topic and its connections to what the students have learned so far. While I used to think note-taking was largely pointless, I’m noticing that my low ability students will look back through their notes with a little encouragement. Besides, maybe just copying down the material gives them some focus. In any event, I’m spending more time with notebooks than worksheets this year. (Many of my top ability students don’t take notes.) I work a couple of problems, then assign them to work a single problem, and wander around checking for errors. The upfront time usually lasts 20 minutes or so; if it’s going to go much longer, I warn the kids ahead of time and do even more classroom give and take.

Then I let them loose and they work on a group of similar problems for the rest of the first day. On the second day, I do a quick review and go through common errors I saw the day before. If they were working a longer assignment, I send them back to it. If day 2 has some additional depth or complexity, I introduce it. But day 2 usually involves very little upfront time. Then they spend most of day 2 working in depth on the problems. This time allows me to really dig in and work with everyone–or try to. More on that in a minute.

My weakest students will, for the most part, listen and engage in the classroom discussion. After that, they will work in fits and starts, stopping the minute they get confused. I stop by, help them with a problem, start them on the next, and it’s a huge win if they’ve done even one complete problem by the time I stop by again. This process is exactly like rolling rocks uphill. If you stop at any point on the way up—and you have to, to have any chance of checking in with all the students—you can only hope that it hasn’t rolled back down too far before you return. But then, half an hour of focus on a difficult subject is still a lot of work and learning for them. (Work with any low ability student for longer than 30 minutes and they get extremely tired; mental stamina is a real problem.)

The mid-level students are the primary beneficiaries of this approach. My teaching is designed for kids who, in previous generations, might never have taken algebra or geometry in their entire high school career. They would have had far more time working with arithmetic and more practical math lessons, and would have considered algebra a major challenge. The average college-bound student went as far as trig, at most, and this was a generation when most students weren’t expected to be college bound. Only 40% of all high school students completed algebra II in 1982, for example. (Cite). Introducing abstract math to students with average intellect and no particular interest in math is one of the great challenges of teaching the subject. It’s why I’d always feel like I was taking the easy way out if I taught only history or English, two subjects that would give me a decent shot at being a popular teacher.

So what to do with the students who used to be the only ones to take advanced math, who can finish my entire planned two day lesson in half an hour? Many teachers let these students do homework from other classes or simply read, which is why a growing number of policy experts are warning that we are letting down our bright students.

My first year of teaching, I realized I would either have to move more quickly, allow my top students to read, or give them more to do. That was an easy call. I began planning different lessons for my strong students. With the exception of the second semester of last year, when I ran four different lesson plans each class, my advanced student lessons are the side show to the main event. Sometimes, I give the students a different, more difficult work sheet. Other times, I give them additional problems—applications, challenge work—after they finish what everyone else is doing. Still other times, they get a handout or a book section explaining concepts that the other students will never work with. The students work independently, first reviewing the material while I’m running the classroom discussion with the rest of the class, and working the problems if they can—and they usually can, maintaining productivity until I finish up with the class and check in with them. I always have white boards around them, and sometimes I’ll put worked problems on these boards for them as examples.

Assessment is the most difficult part of this approach. How can I be sure they are learning the material? The first year of teaching, I rarely did formal assessments of what they learned. Last year, when running four different groups, I gave four different assessments and the results were very good. This year, I’ve expanded my test difficulty, including some advanced questions to see how the students are getting the material. At some point, though, I’ll start giving students different tests. I can see it coming.

Most teachers think that the multiple lessons are the tough part. That makes sense, since many teachers are highly structured and methodical. They have worked out all the problems in every assignment before they give it out, and having two or three different assignments just makes for that much more work. But I never work out the problems to start with; I just wing it when the kids ask me questions. Every so often I’ll assign the top kids a problem that I don’t instantly understand, and we have a good time chewing it over, which gives me time to work it out.

No, for me, the challenging part is working the room and making sure that I’m checking in with all students. Are they getting it? Did I miss an important element in the explanation? What do I need to cover tomorrow? It’s easy to miss quiet students, who will then complain that I ignored them. My first weeks of class regularly include reminders that go something like this:

“No, I’m not ignoring you. I just didn’t see you. If you need help, and I’m not coming by, tackle me. Holler at me! Beg! Make choking noises! But don’t sit there silently and sulk or worse, talk about something else and then whine that I didn’t help you. That way lies a rant from me.”

And as the semester goes by, I get better at circling, identifying who needs more support, and not getting sucked in by one student. For their part, they learn to yell for help if I accidentally ignore them.


SAT Snobs

I generally enjoy reading Gary Rubenstein, an ex-TFAer who has developed a healthy skepticism toward the eduformer group. His flaws include a purist attitude about math (common among mathematicians, as opposed to people who teach math) and a reverence for Diane Ravitch.

But he raised my ire when, in the midst of a rant about desperate schools paying “expert” consultants for turnarounds that will never happen, he mentions a school’s low SAT scores:

Bronx International High School in New York had a lot of attrition and very low test scores. As far as academic rigor, their average SAT score was 1010. This was a bad score when it was out of 1600, but now that it is out of 2400, this is absolutely horrific. You get 750 for just writing your name on the test!

See, that’s just irritating. First, the low score is 600 for three sections, not 750. (Low score per section is 200).

But the real annoyance is his invocation of the idiotic trope of “getting points for signing your name” on the SAT.

“You get 120 points on the LSAT just for writing your name!” or “You get a 1 on the AP test if you turn in a blank test!”, or “You get a point on the ACT just for writing your name!” You never hear those much, yet they’re all equally true.

Newsflash, people: few if any standardized tests give a zero score. Everyone gets the lowest score just for turning in the test. Mocking the SAT for giving 200 points just for showing up either shows a real ignorance of the other tests or a lamentable desire to follow the herd.

In fact, the 200 score reflects a range of raw scores from about -16 (marking every single question on the reading section incorrectly, a near impossibility) to 0 (leaving the section blank, or getting a 1:4 ratio of right to wrong answers).
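For the curious, the old SAT’s quarter-point wrong-answer penalty is where both of those extremes come from; a quick illustration (the 67-question figure is the reading section’s approximate length, not something in Gary’s post):

```latex
% Section raw score: number right minus a quarter point per wrong answer.
\[
  \text{raw} = R - \tfrac{1}{4}W
\]
% Blank section:              R = 0,  W = 0   ->  raw = 0
% 1:4 right-to-wrong answers: R = 4,  W = 16  ->  raw = 4 - 4 = 0
% Every reading question wrong (about 67 questions):
%                             R = 0,  W = 67  ->  raw = -16.75, the "-16 or so"
% Anything at or below a raw score of 0 scales to 200 on that section.
```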

But that’s just obsessive geekishness, and besides, it’s not the scoring, it’s the criticism that mildly annoys me. First, the SAT and its competitor, the ACT, are the most egalitarian standardized tests in the nation. Really. I’m not saying they’re perfect, but these tests are the fairest, cheapest, most achievable option for low income students who want to improve their chances for college admission and placement. These tests find the diamonds in the rough, but also help the steady plodders. If you don’t understand this, then you’re an ideologue with an agenda or you’ve been snowed by one.

If you wish to criticize a test’s scoring system, pick on the Advanced Placement test, Jay Mathews’s beauty queen and star of the Challenge Index. AP tests have a skimpy five point scoring system, with no way to distinguish between the many, many blank tests turned in by students forced to take the test so their schools will qualify for Jay’s contest and a genuinely weak effort that simply failed. They are all scored as a 1—the most common score received by a good number of schools on Jay’s Index.

The AP is a good test, but its owner, the College Board, is too busy raking in the money from the Challenge Index bonanza to risk turning off the faucet by revealing how many profoundly unprepared students are taking it so their schools can qualify for a place on a meaningless list.

Oh, and Gary’s actual point? He’s right. The turnaround industry is a disgrace.

End rant.


What causes the achievement gap? The Voldemort View

The View That Must Not Be Spoken is getting a bit more purchase these days.

Steven Pinker, on IQ:

Question: Thus, I think IQ tests merely measure a pedestrian or functionary level of intellect. What are your thoughts on its efficacy in measuring real human intelligence? ….

Pinker: I think you’re wrong about IQ tests in general. They’ve been shown to predict (statistically, of course) a vast array of outcomes that one would guess require intelligence, including success at school, choice of intellectually demanding professions, income (in a modern economy), tenure and publications in academia, and other indicators, together with lower crime rates, lower infant mortality, lower rates of divorce, and other measures of well-being. The idea that IQ tests don’t predict anything in the real world is one of the great myths of the intellectuals.

…. It’s an empirical fact – massively and repeatedly demonstrated – that people who do well on tests of verbal intelligence also do well on tests of spatial and quantitative intelligence, and vice-versa. The correlation is nowhere near perfect (some people really are better at math, others with words), but it is undoubtedly a positive correlation. General intelligence in this sense is a real phenomenon.

(emphasis mine)

Average African American IQ is 1 SD below average white IQ; average Hispanic IQ is a little less than 1 SD below. The Asian groups with the highest mean IQ are slightly above the average white IQ. I imagine if we went out and tested IQ scores by income, after controlling for race, we would see that mean IQ rises with income.

The Voldemort View: Mean differences in group IQs are the most likely explanation for the academic achievement gap in racial and SES groups.

That opinion could get a person fired. It could particularly get a teacher fired. Pinker has tenure, legitimacy, and fame. I’m 0 for 3.

Why is it so risky? In an excellent essay, Affirmative Distraction, Shelby Steele once offered his idea of the real motivation for affirmative action:

It is important to remember that the original goal of affirmative action was to achieve two redemptions simultaneously. As society gave a preference to its former victims in employment and education, it hoped to redeem both those victims and itself. When America—the world’s oldest and most unequivocal democracy—finally acknowledged in the 1960s its heartless betrayal of democracy where blacks were concerned, the loss of moral authority was profound. In their monochrome whiteness, the institutions of this society—universities, government agencies, corporations— became emblems of the evil America had just acknowledged. Affirmative action has always been more about the restoration of legitimacy to American institutions than about the uplift of blacks and other minorities.

Steele is not thinking of IQ here (in fact, I think he holds that culture is the cause of the gap), but I believe that the rush to crucify anyone who points out the possible role of IQ in our society is likewise about institutional legitimacy. The elites, broadly defined, can’t accept an intelligence gap–particularly a racial one–so they have to constantly push for equal representation in any job but their own (mild sarcasm, there–but only mild). I think that many elites would argue that America can’t accept that gap, but at this point–speaking of gaps–the chasm between what our business, media, political and intellectual leaders want and the average American wants means that the elites don’t speak for America any more.

My opinion about the achievement gap is founded on the fact of consistently measured mean racial IQ differences. Alas, as Pinker points out, most people are completely ignorant of this fact. Thanks in no small part to the determination to avoid any mention of IQ in public discourse, most people think that the difference in average racial IQs—a well-established fact—is a bogus pseudofactoid straight out of the Big Would-Otherwise-be-Black Book of Racist White Folks. So simply mentioning the IQ difference carries the risk of the Racist Scum label.

I have no idea why the difference exists. I only know that it does exist, and that simplistic explanations (legacy of racism, culture of poverty, low expectations, enrichment activities, lack of Head Start) have largely been eliminated. I suspect, but don’t know, that IQ is a combination of innate characteristics and environment broadly defined (plenty of iodine, not getting dropped on the head, not being subjected to drug use in utero), and I hope, but think it unlikely, that a rich cognitive environment can have some effect. But the cause is largely irrelevant, in my view, and doesn’t make any difference to educational policy.

The Voldemortean nature of this opinion has relaxed slightly in recent years. While no media outlet would ever acknowledge the IQ facts without recasting them as opinions, more and more scientists and opinion makers at the top of the heap are able to mention this–gingerly–without risking public dismemberment. I do mean “recent years”; just four years ago, William Saletan was roundly and publicly slapped for Liberal Creationism, in which he simply stated the facts. The resulting beatdown traumatized Saletan so badly that he now calls for complete elimination of racial categorization of student achievement (Race and Test Scores).

Only slightly better, though. So if someone wanted to make trouble for me, they could simply demand that I be taken to task for “racist statements about IQ differences”, and the crucifixion would begin.

It wouldn’t matter that the racial IQ averages are fact, not opinion. It wouldn’t matter that this fact doesn’t preclude people of all races having the entire gamut of IQs. Most of all, it wouldn’t matter that the IQ differences and the achievement gap are about groups, not individuals.

My top students are white, Hispanic, black, and Asian. My weakest students are white, Hispanic, and Asian. (No, I didn’t forget a group there.) Like all teachers, I don’t care about groups. I teach individuals. And the average IQ of a racial group doesn’t say squat about the cognitive abilities and the thousand other variables that make up each individual.

I dedicate a good deal of my spare time each spring to helping low income under-represented minorities to improve their college admission test scores, and I’m very good at it. Every year, some 8-10 kids escape remedial math and English, saving time and money and dramatically improving their chances of graduation. I teach at a Title I school and am passionately committed to helping every one of my students negotiate the crazy world that educational policy has made of public education and, not incidentally, become more competent at math.

But none of that would matter if someone decided to make an issue of my opinions in this matter. A whole bunch of people who haven’t ever done a thing personally to improve educational outcomes, regardless of gaps, would demand I be fired and stripped of my credentials simply because I think cognitive ability has a lot to do with academic outcomes.

It’s a weird world we live in.


Never trust an education success story

Education success stories are arrant garbage.

Jay Mathews prides himself on using data in his education stories. The only problem is, he usually doesn’t understand the data, or how to evaluate it critically. Not that Jay is any different from any other reporter, and in fact he’s better than most–which, given that he’s the most influential education reporter in the country, is no small thing. (Disclosure: I know Jay slightly, and when he’s reporting a story, as opposed to interpreting data, he is very good at getting the details right. He’s also an amazingly nice guy.) So when Jay, or any other reporter, starts praising someone for closing the achievement gap, start with the premise that it’s complete crap.

Jay’s laudatory blog item on Robert G. Smith, head of Arlington County Schools, “Stunningly reasonable achievement gap approach,” is a case in point.

As is often the case, Jay’s focus is puzzling. He’s not praising Smith for having dramatically closed the achievement gap in his district, even though Jay clearly thinks Smith has, in fact, closed it. (He hasn’t, but more on that in a minute). No, Jay is pleased with Smith for being “reasonable” about his insistence on closing the achievement gap, for only wanting improvement towards a goal instead of 100% proficiency, as the feds do.

Smith closed the achievement gap, but all Jay notices is how reasonable he is?

But of course, Smith didn’t really close the gap, or at least any gap worth caring about.

From 1998 to 2009, the portion of black students passing Virginia Standards of Learning tests in Arlington rose from 37 to 77 percent. For Hispanic students, the jump was from 47 to 84 percent. The gap between non-Hispanic white and black passing rates dropped from 45 percentage points to 19. Between Hispanics and non-Hispanic whites, the gap shrank from 35 points to 12.

Ah, passing rates. How many students passed a particular bar?

So if you give students a one-question test: 2 + 2 = ______, and everyone answers 4, then everyone’s passing rate is 100%. You’ve closed the achievement gap! Huzzah!

Or suppose that one year, on some arbitrary test, whites have a mean score of 85 points with a 95% passing rate, while blacks have a mean score of 63 with a 36% passing rate. The next year, whites have a mean score of 94 points, still with a 95% passing rate, while blacks have a mean score of 62, with a 58% passing rate. The passing rate gap closed. The average score gap got larger. Now, this is still big news if “passing” means, for example, clearing the high school graduation hurdle. But on a state test, you have to look closer.
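The same arithmetic in miniature, with made-up numbers (nothing here comes from Arlington’s data): move a handful of students from just below the cut score to just above it, and the pass rate jumps while the mean doesn’t budge.

```python
# Hypothetical scores for ten students, cut score = 60.
year1 = [45, 50, 55, 57, 58, 59, 62, 70, 75, 80]
year2 = [40, 48, 52, 61, 61, 62, 62, 70, 75, 80]   # three students nudged past the cut

for label, scores in [("year 1", year1), ("year 2", year2)]:
    pass_rate = sum(s >= 60 for s in scores) / len(scores)
    mean = sum(scores) / len(scores)
    print(f"{label}: pass rate {pass_rate:.0%}, mean {mean:.1f}")

# year 1: pass rate 40%, mean 61.1
# year 2: pass rate 70%, mean 61.1 -- a 30-point pass-rate jump, no change in the mean.
```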

So what does “passing” mean, in the Virginia Standards of Learning tests? Arlington County’s Report Card shows that students can be “Advanced”, “Proficient”, or Fail. And no mention of average scores.

A state test that only ranks passing students as Advanced or Proficient? A state test that declares 77% of blacks, 84% of Hispanics and 96% of whites in Arlington County proficient–and they’re only a little bit ahead of the state average? Huzzah for Virginia! Why is everyone fussed about Finland when Virginia stands as such a shining example?

Or, we could just say the cut scores are a tad low. Let’s do that, instead.

I’m not a fan of NAEP’s tests; they place far too much emphasis on writing and, frankly, I don’t trust their method of sampling. How hard is it to test everyone? A lot easier if they weren’t so determined to make the students write to excess. But NAEP is extremely useful as a benchmark, as it ranks the states by an absolute standard. Here are the rankings of state proficiency standards against the NAEP:


Source: Mapping State Proficiency Standards Onto the NAEP Scales: Variation and Change in State Standards for Reading and Mathematics, 2005–2009

Virginia’s “proficient” standard is below basic on NAEP’s scale–and lower than all but a few states. In the cellar.

So Smith’s big achievement is increasing the pass rate on a state test with a very low cut score–and we have no idea whether the actual average score gap was increased or not.

That’s pretty much what to expect in your usual education miracle story. They’re all lies.

Now, I’m not the first, second, or thousandth person to have pointed this out–in fact, most bloggers get far more obsessive about the data than I do, creating Excel spreadsheets and whizbang images. The NAEP benchmark report is old news–and of course, reporters faithfully reported that state proficiency standards were extremely low. And then they all go right back to repeating the latest myth.

That’s the real question. Why are reporters, consultants, and politicians still spewing this swill, happily repeating useless fuzzy data when they know it’s a lie?


Teaching Geometry

I taught two geometry sections my first year at a different school, and while I didn’t do a particularly good job (the classroom management problems were horrible for a different reason, and the curriculum was CPM–ick), I came away with useful insights that have really improved my execution this year.

Geometry Then and Now

Back in the dark ages, we used to say “There are two sorts of people in the world: those who prefer algebra and those who prefer geometry.” This mindset comes from a time when advanced students took algebra in 8th grade, most other college-bound students took it in 9th or 10th grade and then followed it up with geometry, Algebra II, and precalc if there was enough time. Students who were really bad at math took Basic Math or Business Math and maybe took algebra their senior year. If this sounds familiar to you, fine–but it’s not like that anymore.

“Students who were really bad at math” did not reflect our nation’s racial balance, and research unsurprisingly showed that students who went beyond Algebra II in high school had higher college completion rates. Naturally, this meant that everyone should take algebra as early as possible, cognitive ability or readiness be damned. The resulting carnage of this policy did not lead to re-evaluation, but rather to the determination that pre-algebra preparation should start earlier–and, of course, that kids who fail algebra need to take it again.

Consider the effect of this policy on the average, “okay at math” kid today. Starting in sixth grade, it’s All Algebra, All the Time. By the time they get to geometry, “math instruction” and “algebra” are virtually synonymous–and they don’t even realize it. Kids have spent three, four, or even five years with algebra preparation or instruction. Specifically, using processes to solve for an unknown.

And then: geometry. Good god, what fresh hell is this? Facts. Vocabulary. Relationships. And then, in some weird way, you use these facts and vocabulary and relationships to come up with more facts and vocabulary and relationships. There’s no solving. There’s not even an answer. Half the time the book gives you the answer but then expects you to explain it using, god help us all, facts and vocabulary and relationships.

This is a whole galaxy away from “I like algebra better than geometry.” First off, all but 10-15% of my students found algebra completely unmanageable, so they aren’t looking back fondly at an easier subject. They’re trying hard not to curl up in a fetal position at the realization that math gets worse than the horror of the past three years.

Geometry teachers would do well, I think, to acknowledge this confusion. I tell my students some version of what I’ve just explained above and I see the light dawn. They get it. They might not get geometry, yet, but they get why they feel so lost. And that helps them move forward.

Try, try again

Two years ago, I could see that many of my students weren’t getting it. I retaught, thought of other ways to explain things, but I didn’t understand the degree of their lostness until relatively late in the first semester. I adjusted my teaching more, but I still hadn’t figured out why they were so lost.

This year, I was teaching parallel lines and transversals in week 2 or so and I suddenly realized that most of my students didn’t get it. They weren’t complaining, they weren’t acting out, they were just lost. I recognized the look from two years ago, and was now able to distinguish a furrowed brow of mild confusion from a blank look of utter nihilistic despair.

So at the end of day 2, I did a thumb check. “Okay, guys, my sense is a lot of you are feeling lost. Thumbs up if you feel confident, sideways or down if you’re kind of or totally lost.” And sure enough, most of the class was sideways or down.

I told the class I would come up with a different way to explain it. The next day, I used Geoboards, rubber bands, and little wooden geometric shapes to create a visual image of corresponding angles, alternate interior angles, and so on. (I’ll write that up some time). The lesson was very effective in helping students understand the angle relationships. But more important, the students recognized that I had stopped everything, rethought the lesson, and come up with a radically different way of explaining the concepts–and had gone through this effort because I could see they were lost. The feedback from the new lesson was very enthusiastic; kids felt much less lost–but more importantly, they felt like I understood their confusion and was willing to spend time and effort helping them out.

This created a lot of good will, and since then they’ve been very trusting of my oftentimes bizarre way of building visual images to help them grasp geometric concepts.

It’s easier to do this in geometry than algebra, since geometry is new to everyone. Even my top students appreciate the occasional visual exercise, and I always have extra challenges for them. In algebra, some kids are lost right from the beginning, and it’s impossible to reteach everyone. (Which means, now that I think about it, that if I differentiate immediately after my assessment test in algebra, I might have an easier time. Hmmm.)

But fundamentally, it’s important to understand that time spent at the beginning, pacing be damned, will really pay off in student investment. I now realize that many of my geometry students two years ago had checked out because none of it made sense, and I didn’t pick up on that early enough to intervene. I took five days to explain parallel lines and transversals rather than two, but every minute of it was well spent.

De-emphasize what they won’t use

Most college graduates think of proofs, logic, and construction as quintessential geometry subjects. That’s because we never use them again. We don’t spend any time on formal logic, never do formal proofs, and as for construction, forget it.

So I mostly dump them (which is how I make up the five days on transversals). Not completely. My Holt text starts with these five chapters:

  1. Foundations (Undefined Terms, Segments, Angles, area/perimeter formulas, coordinate geometry, transformations)
  2. Geometric Reasoning (logic and proofs)
  3. Parallel and Perpendicular Lines
  4. Triangle Congruence
  5. Triangle Properties

I dumped transformations entirely. I then took coordinate geometry, proofs, and logic and broke them up into tiny digestible chunks (coordinate geometry was review), rather than cover them all at one time, and covered about 20% of the material. So rather than an entire section on proofs, I introduced algebraic proofs at a natural pause point, when I had a day or two between major sections. I just introduced it; my goal was familiarity and recognition but not competence. Then, after introducing congruent triangles, I introduced two column proofs, and the students used congruence shortcuts to create two column proofs. This was much more successful than introducing a whole chapter on proofs when they were still in the WTF stage.

Yes, I know, the purists out there, assuming anyone is reading, are shocked. What? Proofs and logic introduce an invaluable way of thinking logically and methodically! Yep. But ask geometry teachers in heterogeneous classrooms if their kids understand proofs, and they will sigh. There’s just no way to get the lower ability half of the population to understand proofs, and they’ll never use them again. I could spend lots of time trying, but I have better things to do with their time.

Ideally, I’d love to make my top students go through rigorous proofs, but it would take more instruction time than I can manage in differentiation. I hope to figure this out at some point, but I’m not as practiced at teaching geometry as I am at algebra.

So you’re thinking my class is too easy, right? Well, we just had our semester final, and the geometry teachers agreed to start with a common assessment built by a traditional geometry teacher who had covered far more material than I had in the first semester. I considered the test a little too easy and more picayune than one I would build, so I substituted some harder questions. I didn’t dump more than two or three of the questions on material we didn’t cover, because I felt pretty confident the students could figure it out–and, for the most part, they did.

Always remember where they are going

Geometry is just a brief respite. The next year, it’s back to Algebra II, another course that causes a lot of carnage. Half of my class has extremely weak algebra skills, half of the rest are adequate, and the top students were rarely challenged with tough material. They need the practice. My sophomores will be taking the algebra- and pre-algebra-intensive state graduation test, and my juniors are taking the SAT. Algebra is a big part of their testing load this year.

So I teach my geometry course as Applying Algebra with Geometry Facts. My students will never again need to prove that triangle ABC is congruent to triangle XYZ, but they will always need to know how to find the angle measures of a triangle whose angle ratio is 2:3:5. They will never use a compass again, but they will need to know what to do if Angle A and Angle B are supplementary, Angle A = 4x + 13, Angle B = 2x + 17, and they need to solve for x.
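Both of those examples are one line of algebra once the geometry fact is in place; worked out:

```latex
% Angle ratio 2:3:5 -- the angles of a triangle sum to 180 degrees:
\[
  2k + 3k + 5k = 180
  \;\Longrightarrow\; k = 18
  \;\Longrightarrow\; \text{angles } 36^\circ,\ 54^\circ,\ 90^\circ.
\]
% Supplementary angles sum to 180 degrees:
\[
  (4x + 13) + (2x + 17) = 180
  \;\Longrightarrow\; 6x + 30 = 180
  \;\Longrightarrow\; x = 25.
\]
```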

The problem is that the state tests tend to emphasize more traditional geometry. Aggravating, really, given that the state has clearly de-emphasized traditional geometry in its overall curriculum, but so be it.

Results

I had told all my students this year that if they showed up and worked, they’d pass with a D-. In my Algebra II course, several students did not in any way demonstrate understanding of the material we covered that year (not for lack of trying, in most cases), but I kept my promise.

But in my Geometry classes, my D students were genuinely Ds. They struggled, but got Ds or “respectable Fs” (50% or higher) on all the tests and quizzes. On the 100-question final (40 correct is a D-, on a 15-point grade scale instead of 10), 15 students failed. All but two had “respectable” Fs (answered 30 or more questions correctly), and those two were just below 30. The distribution was pretty close to normal, and both the average score and the mode were a C. So far, so good.