
Assessing Math Understanding: Max, Homer, and Wesley

This is only tangentially a “math zombies” post, but I did come up with the idea because of the conversation.

I agree with Garelick and Beals that asking kids to “explain math” is most often a waste of time. Templates and diagrams and “flow maps” aren’t going to cut it, either. Assessing understanding is a complicated process that requires several different solution methods and an interpretive dance. Plus a poster or three. No, not really.

As I mentioned earlier, I don’t usually ask kids to “explain their answer” because too many kids confuse “I wrote some words” with “I explained”. I grade their responses in the spirit given: a few points for effort. “Explain your answer” test questions are sometimes handy for seeing whether top students are just going through the motions, or how much of my efforts have sunk in. But I don’t rely on them much, and apart from top students, I don’t much care whether kids can articulate their thinking.

It’s still important to determine whether kids actually understand the math, because the profiles vary: some kids know only the algorithm. Other kids struggle with the algorithm but understand the concepts. Still others don’t understand the algorithm because they don’t grok the concepts. Finally, many kids get overwhelmed, or can’t be bothered to work out the problem, but will reveal their understanding if they can just read and respond to true/false items.

If you are thinking “Good lord, you fail the kids who can’t be bothered or get overwhelmed by the algorithms!” then you do not understand the vast range of abilities many high school teachers face, and you don’t normally read this blog. These are easily remediable shortcomings. I’m not going to cover that ground again.

So how to ascertain understanding without the deadening “explain your answer” or the often insufficient “show your work”?

My task became much easier once I turned to multiple answer assessments. I can design questions that test algorithm knowledge, including interim steps, while also ascertaining conceptual knowledge.

I captured some student test results to illustrate, choosing two students for direct comparison, and one student for additional range. None of these students are my strongest. One of the comparison students, Max, would be doing much better if he were taught by Mr. Singh, a pure lecture & set teacher; the other, Homer, would be struggling to pass. The third, Wesley, would have quit attending class long ago with most other teachers.

To start: a pure factoring problem. The first is Max, the second Homer.

[Image: Max’s and Homer’s work on the factoring problem]

Both students got full credit for the factoring and for identifying all the correct responses. Max at first appears to be the superior math student; his work is neat, precise, efficient. He doesn’t need any factoring aids, doing it all in his head. Homer’s work is sloppier; he makes full use of my trinomial factoring technique. He factored out the 3 much lower on the page (out of sight), and only after I pointed out he’d have an easier time doing that first.
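The actual problem isn’t legible in my photo, so here’s an illustrative trinomial of the same general type (my numbers, not the test’s), showing why pulling out the 3 first makes the factoring easier:

\[
3x^2 + 9x - 30 = 3(x^2 + 3x - 10) = 3(x + 5)(x - 2)
\]

Factoring the original trinomial directly means hunting for factors of 3(−30) = −90 that sum to 9; after dividing out the 3, you only need factors of −10 that sum to 3.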

Now two questions that test conceptual knowledge:

[Image: Max’s and Homer’s work on the two conceptual questions]

Max guessed entirely on the “product of two lines” question, and has no idea how to convert a quadratic from vertex form to standard or factored form. Yet he could expand the square in his head, which is why he knew that c=-8. He was unable to relate the questions to the needed algorithms.

Homer aced it. In that same big, slightly childish handwriting, he used the (h,k) parameters to determine the vertex. Then he carefully expanded the vertex form to standard form, which he factored. This after he correctly identified the fact that two lines always multiply to form a quadratic, no matter the orientation.
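The question itself isn’t reproduced here, so the numbers below are my reconstruction, chosen only to be consistent with the c=-8 mentioned above; they illustrate the conversion Homer carried out:

\[
y = (x - 1)^2 - 9 \qquad \text{vertex form: vertex } (h, k) = (1, -9)
\]
\[
y = x^2 - 2x + 1 - 9 = x^2 - 2x - 8 \qquad \text{standard form: } c = -8
\]
\[
y = (x - 4)(x + 2) \qquad \text{factored form: roots } x = 4 \text{ and } x = -2
\]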

Here’s more of Homer’s work, although I can’t find (or didn’t take a picture of) Max’s test.

[Image: Homer’s work on the quadratic forms question]

This question tests students’ understanding of the parameters of the three forms of a quadratic: standard, vertex, and factored. I graded it generously. Students got full credit if they correctly identified just one quadratic by parameter, even if they missed or misidentified another. Kids don’t intuitively think of shapes by their parameter attributes, so I wanted to reward any right answers. Full credit for this question was 18 points. A few kids scored 22 points; another ten scored between 15 and 18. About a third of the class got ten or fewer points.

Homer did pretty well. He was clearly guessing at times, but he was logical and consistent in his approach. Max got six points. He got (a) wrong, got (b), (c), and (d) correct, then left the rest blank. It wasn’t a time issue; I pointed out the empty responses during the test and offered some common elements as a hint. He still left them blank.

On the same test, I returned to an earlier topic, linear inequalities. I gave them a graph with several “true” points. Their task: identify the inequalities that would include all of these solutions.

[Image: Homer’s and Max’s work on the linear inequalities question]

(Ack: I just realized I flipped the order when building this image. Homer’s is the first.)

Note the typo that both kids have corrected (my test typos are fewer each year, but they still happen). I just told them to fix it; the kids had to figure out whether the “fix” made the boundary true or false. (This question was designed to test their understanding of linear concepts; that is, I didn’t want them plugging in points so much as visualizing or drawing the boundary lines.)

Both Max and Homer aced the question, applying previous knowledge to an unfamiliar problem. Max converted the standard form equation to slope-intercept form, while Homer just graphed the lines he wasn’t sure of. Homer also went through the effort of testing regions as “true”, as I teach them, while Max just visualized them (and probably would have made a mistake had I been more aggressive on testing regions).
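For readers unfamiliar with the technique: “testing a region as true” just means substituting a candidate point into the inequality. The numbers here are made up, since the actual test isn’t legible, but the move looks like this for a candidate inequality \(2x + y \le 10\):

\[
(1, 2):\ 2(1) + 2 = 4 \le 10 \qquad \text{(true, so the point is inside the region)}
\]
\[
(6, 1):\ 2(6) + 1 = 13 > 10 \qquad \text{(false, so the point is outside)}
\]

An inequality qualifies as an answer only if every given point passes the check.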

Here I threw in something they should have learned in a previous year, but that we hadn’t covered in class:
[Image: test question on material from a previous year]

Most students were confused or uncertain; I told them that when in doubt, given a point… and they all chorused “PLUG IT IN.”

This was all Max needed to work the problem correctly. Homer, who had been trying to solve for y, then started plugging it in, but not as fluently as Max. He has a health problem forcing him to leave slightly early for lunch, so he didn’t finish. For the next four days, I reminded students in class that they could come in after school or during lunch to finish their tests, if they needed time. Homer didn’t bother.

So despite the fact that Homer had a much stronger conceptual understanding of quadratics than Max, and roughly equal fluency in both lines and quadratics, he only got a C+ to Max’s C, because Homer doesn’t really care about his grade so long as he’s passing.

Arrgghhh.

I called in both boys for a brief chat.

For Max, I reiterated my concern that he’s not doing as well as he could be. He constantly stares off into space, not paying attention to class discussions. Then he finishes work, often very early, often not using the method discussed in class. That’s fine; he’s not required to use my method. But the fact that he has another method means he has an outside tutor, and that he’s tuning me out because “he knows this already”. He rips through practice sheets if he’s familiar with the method; otherwise he zones out, trying to fake it when I stop by. I told him he’s absolutely got the ability to get an A in the class, but at this point, he’s at a B and dropping.

Max asked for extra credit. He knew the answer, because he asks me almost weekly. I told him that if he wanted to spend more time improving his grade, he should pay attention in class and ask questions, particularly on tests.

We’ve had this conversation before. He hasn’t changed his behavior. I suspect he’s just going to take his B and hope he gets a different teacher next year who’ll make the tutor worth the trouble. At least he’s not trying to force a failing grade to get to summer school for an easy A.

I was extremely direct with Homer, expressing (snarling) my disappointment that he wouldn’t make the effort to be excellent when he was so clearly capable of more. What was he doing that was so important he couldn’t take 20 minutes or so to finish a test, given the gift of extra time? Homer stood looking a bit abashed. Next test, he came in during lunch to complete his work. And got an A.

Max got a B- on the same test, with no change in behavior.

I haven’t included any of the top students’ work because it’s rather boring; revelations only come with error patterns. But here, in a later test, is an actual “weak student”, whom I shall dub Wesley.

Wesley had been forced into Algebra 2 against his wishes, since it had taken him five attempts to pass Algebra I and geometry. He was furious and determined to fail. I told him all he had to do was work and I’d pass him. Didn’t help. I insisted he work. He’d often demand a referral instead. Finally, his mother emailed about his grade, and I relayed our conversations. I don’t know how, but she convinced him to at least pick up a pencil. And, to Wesley’s astonishment, he actually did start to understand the material. Not all of it, not always.

[Image: Wesley’s work on the systems of equations question]

This systems of equations question (on which many students did poorly) was also previous material. But look at Wesley! He creates a table! Just like I told him to do! It’s almost as if he listened to me!

He originally got the first equation as 20x + 2y = 210 (using table values); when I stopped by and saw his table, I reminded him to use it to find the slope, or to think back to the tacos and burritos problem. That one spurred his memory. You can’t really see the rest of the questions, but he did not get all the selections correct. He circled two correctly but missed two, including one asking about the slope, which he could have found using his table. He also graphed a parabola almost correctly, above (you can see he’s marked the vertex point but then ignored it for the y-intercept).
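The table values aren’t legible in the photo, so here’s a hypothetical pair of rows showing the slope step I reminded him of, using points that happen to lie on the line he wrote (20x + 2y = 210 is the same line as y = −10x + 105):

\[
(2, 85) \text{ and } (5, 55): \qquad m = \frac{55 - 85}{5 - 2} = \frac{-30}{3} = -10
\]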

He got a 69, a stupendous grade and effort, and actually grinned with amazement when I handed it back.

Clearly, I’m much better at motivating underachieving boys than I am at reaching “math zombies”. Unsurprising, since motivating the former is my peculiar expertise, going back to my earliest days in test prep, and I’ve only recently had to contend with the latter. However, I’ve successfully reached out and intervened with similar students using this approach, so it’s not a complete failure. I will continue to work on my approach.

None of the boys have anything approaching a coherent, unified understanding of the math involved. In order to give them all credit for what they know and can do, while still challenging my strongest students, I have to test the subject from every angle. Assessing all students, scoring the range of abilities accurately, is difficult work.

As you can see, the challenges I face have little to do with Asperger’s kids who can’t explain what they think or frustrated parents dealing with number lines or boxes of 10. Nor is it anything solved by lectures or complex instruction. My task is complicated. But hell, it’s fun.


Why Merit Pay and Value Added Assessment Won’t Work, Part I

The year I taught Algebra I, I did a lot of data collection, some of which I discussed in an earlier post. Since I’ve been away from that school for a while, I thought it’d be a good time to finish the discussion.

I’m not a super stats person. I’m not even a mathematician. To the extent I know math, it’s applied math, with the application being “high school math problems”. This is not meant to be a statistically sound analysis, comparing Treatment A to Treatment B. But it does reveal some interesting big picture information.

This data wasn’t just sitting around. A genuine DBA could have probably whipped up the report in a few hours. I know enough SQL to get what I want, but not enough to get it quickly. I had to run reports for both years, figure out how to get the right fields, link tables, blah blah blah. I’m more comfortable with Excel than SQL, so I dumped both years to Excel files and then linked them with student id. Unfortunately, the state data did not include the subject name of each test. So I could get 2010 and 2011 math scores, but it took me a while to figure out how to get the 2010 test taken—and that was a big deal, because some of the kids whose transcripts said algebra had, in fact, taken the pre-algebra (general math) test. Not that I’m bitter, or anything.
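For the technically curious, here’s a minimal sketch of that linkage in Python/pandas rather than Excel. Every file name, column name, and test label below is hypothetical; I’m illustrating the shape of the work, not the state’s actual file layout.

```python
import pandas as pd

# Hypothetical exports of the two state score files, one row per student.
scores_2010 = pd.read_excel("scores_2010.xlsx")  # student_id, test_taken, scale_score
scores_2011 = pd.read_excel("scores_2011.xlsx")  # student_id, teacher, scale_score

# Link the years on student id. An inner join keeps only students with
# scores in both years -- the same population the state growth reports use.
both_years = scores_2010.merge(scores_2011, on="student_id",
                               suffixes=("_2010", "_2011"))

# Flag the kids whose transcripts said algebra but who actually took
# the pre-algebra (general math) test in 2010.
both_years["repeat_algebra"] = both_years["test_taken"] == "Algebra I"

# Year-over-year change, summarized by teacher.
both_years["gain"] = both_years["scale_score_2011"] - both_years["scale_score_2010"]
print(both_years.groupby("teacher")["gain"].agg(["count", "mean", "std"]))
```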

Teachers can’t get this data easily. I haven’t yet figured out how to get the data for my current school, or if it’s even possible. I don’t know what my kids’ incoming scores are, and I still haven’t figured out how my kids did on their graduation tests.

So the data you’re about to see is not something teachers or the general public can normally access.

At my last school, in the 2010-11 school year, four teachers taught algebra to all but 25 of over 400 students. I had the previous year’s test scores for about 75% of the kids; 90% of them had taken algebra the year before, the other 10% or so pre-algebra. This is a slightly modified version of my original graph; I put in translations of the scores and percentages.

[Image: distribution of incoming 2010 math scores, by teacher]

You should definitely read the original post to see all the issues, but the main takeaway is this: Teacher 4 has a noticeably stronger population than the other three teachers, with over 40% of her class having scored Basic or Higher the year before, usually in Algebra. I’m Teacher 3, with by far the lowest average incoming scores.

The graph includes students for whom I had 2010 school year math scores in any subject. Each teacher has from 8 to 12 pre-algebra student scores included in their averages. Some pre-algebra kids are very strong; they just hadn’t been put in algebra as 8th graders, due to an oversight. Most are extremely weak. Teachers are assessed on the growth of kids repeating algebra as well as the kids taking it for the first time. Again, 80% of the kids in our classes had taken algebra once; 10-20% had taken it twice (our sophomores and juniors).

Remember that at the time of these counts, I had 125 students. Two of the other teachers (T1 and T4) had just under 100; the third (T2) had 85 or so. The kids not in the counts didn’t have 2010 test scores. Our state reports student growth for those with previous years’ scores and ignores the rest. The reports imply, however, that the growth is for all students. Thanks, reports! In my case, three or four of my strongest students were missing 2010 scores, but the bulk of my students without scores were below average.

So how’d we do?

I limited the main comparison to the 230 students who took algebra both years, had scores for both years, and had one of the four teachers.

[Image: score improvement by teacher, students who took algebra both years]

Here is the growth for the pre-algebra and algebra intervention students. Pre-algebra is not part of the above scores, but algebra intervention is a sub-group of them. These are tiny groups, but illustrative:

[Image: score improvement for pre-algebra and algebra intervention students]

The individual teacher category gains/slides/pushes are above; here they are in total:
[Image: category change matrix for all 230 students]

(Arrrggh, I just realized I left off the years. Vertical is 2010, horizontal is 2011.)

Of the 230 students who took algebra two years in a row, the point gain/loss categories broke down like this:

Gained more than 50 points: 57 students
Lost more than 20 points: 27 students
In between (less than a 20-point loss, less than a 50-point gain): 146 students
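In code terms, continuing the hypothetical pandas sketch from earlier (the thresholds are the ones above; the frame and column names remain illustrative):

```python
def bucket(gain: float) -> str:
    """Categorize a year-over-year point change as gain, slide, or push."""
    if gain > 50:
        return "gain"    # up more than 50 points (57 students)
    if gain < -20:
        return "slide"   # down more than 20 points (27 students)
    return "push"        # essentially flat (146 students)

# Applied to the hypothetical both_years frame from the earlier sketch:
# both_years["category"] = both_years["gain"].apply(bucket)
# print(both_years["category"].value_counts())
```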

Why the Slice and Dice?

As I wrote in the original post, Teacher 1 and I were positive that Teacher 4 had a much stronger student population than we did—and the data supports that belief. Consequently I suspected that no matter how I sliced the data, Teacher 4 would have the best numbers. But I wanted a much better idea of how I’d done, based on the student population.

Because one unshakeable fact kept niggling at me: our school had a tremendous year in 2010-2011, based largely on our algebra scores. We knew this all throughout the year—benchmark tests, graduation tests—and our end of year tests confirmed it, giving us a huge boost in the metrics that principals and districts cared about. And I’d taught far more algebra students than any other teacher. Yet my numbers based on the district report looked mediocre or worse. I wanted to square that circle.

The district reports the data on the right. We were never given average score increases. A kid who had a big bump in score was irrelevant if he or she didn’t change categories, while a kid who increased 5 points from the top of one category to the bottom of the next was a big win. All that mattered was category bumps. From this perspective, my scores look terrible.

I wanted to know about the data on the left. For example, Teacher 1 had far better “gain” category numbers than I did, but we had the same mean improvement overall, of 5%, with comparable increases in each category. Broken down further, Teacher 4’s spectacular numbers are accompanied by a huge standard deviation—she improved some kids a lot. The other three teachers might not have had as dramatic a percentage increase, but their kids moved up more consistently. In three cases, the average score declined but was accompanied by a big increase in standard deviation, suggesting that many of the kids in that category improved a bit while a few had huge drops. Teacher 2 and I had much tighter achievement numbers—I may have moved my students less far, but I moved a lot of them a little bit. None of this is to argue for one teacher’s superiority over another.

Of course, once I broke the data down by initial ability, group size became relevant, but I don’t have the overall numbers for each teacher and each category to calculate a confidence interval or a proper sample size. I like a minimum of 10; eleven of the 18 categories hit that mark.

How many kids have scores for both years?

The 2011 scores for our school show that just over 400 students took the algebra test. My fall 2010 graph above shows 307 students with 2010 scores (in any subject) who began the year. Kick in another 25 for the teacher I didn’t include and we had about 330 kids with 2010 scores. My results show 230 kids with algebra scores for both years, and the missing teacher had 18, making 248. Another 19 kids had pre-algebra scores for the first year, although the state’s reports wouldn’t have cared about that. So 257 of the kids had scores for both years, or about 63% of the students tested.

Notice that I had the biggest fall-off in student count. I think five of my kids were expelled before the tests; another four or so left for alternative campuses. I remember that two went back to Mexico; one moved to his grandparents’ in Iowa. Three of my intervention students were so disruptive during the tests that they were ejected, so their test results were not scored (the next year, our school had a better method of dealing with disruptive students). Many of the rest finished the year and took the tests, but they left the district over the summer (I’m not sure if they are included in the state reports, but I couldn’t get their data). In actual student counts, I went from 125 to 95 by year-end.

What about the teachers?

Teacher 1: TFA, early-to-mid 20s, Asian, first-year teacher. She had a first-class honors master’s degree in Economics from one of the top ten universities in Europe. She did her two years, then left teaching and is now doing analytics for a fashion firm in a city where “fashion firm” is a big deal. She was the best TFAer I’ve met, and an excellent new teacher.

Teacher 2: About 60. White. A 20-year teacher who started in English, took time off to be a mom, then came back and got a supplemental math credential. She is only qualified to teach algebra. She is the prototype for the Teacher A I described in my last post, an algebra specialist widely regarded as one of the finest teachers in the district, a regard I find completely warranted.

Teacher 3: Me. 48 at the time, white. Second career, second year teacher, English major originally but a 15-year techie. Went to one of the top-rated ed schools in the country.

Teacher 4: Asian, mid-late 30s. Math degree from a solid local university, teaches both advanced math and algebra. She became the department head the next year. The reason her classes are top-loaded with good students: the parents request her. Very much the favorite of administration and district officials.

And so, a Title I school, predominantly Hispanic population (my classes were 80% Hispanic), teachers that run the full gamut of desirability—second career techie from a good ed school, experienced pro math major, experienced pro without demonstrated higher math ability, top-tier recent college grad.

Where was the improvement? Case 1: Educational Policy Objectives

So what is “improvement”? Well, there’s a bunch of different answers. There’s “significant” improvement as researchers would define it. Can’t answer that with this data. But then, that’s not really the point. Our entire educational policy is premised on proficiency. So what improvement does it take to reach “proficiency”, or at least to change categories entirely?

Some context: In our state, fifty points is usually enough to move a student from the bottom of one category to the bottom of another. So a student who was at the tip top of Below Basic could increase 51 points and make it to the bottom of Proficient, which would be a bump of two categories. An increase of 50 points is, roughly, a 17% increase. Getting from the bottom of Far Below Basic to Below Basic requires an increase of 70%, but since the kids were all taking Algebra for the second time, the boost needed to get them from FBB to BB was a more reasonable 15-20%. To get from the top of the Far Below Basic category to Proficient—the goal that we are supposed to aim for—would require a 32% improvement. Improving from top of Basic to bottom of Advanced requires a 23% improvement.
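To make that arithmetic concrete: the “roughly 17%” figure implies a starting score somewhere around 300 (my inference; I’m not reproducing the actual scale values), since

\[
\frac{50}{300} \approx 16.7\% \approx 17\%.
\]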

Given that context, only two of the teachers, in one category each, moved the needle enough to even think about those kinds of gains—and both categories had only 6-8 students. Looking at categories with at least ten students, none of the teachers had average gains that would achieve our educational policy goals. In fact, from that perspective, the teachers are all doing roughly the same.

I looked up our state reports. Our total population scoring Proficient or Advanced increased 1%.

Then there’s this chart again:

[Image: category change matrix, repeated]

32 students moved from “not proficient” to “proficient/advanced”. 9 students moved from “proficient” to “advanced”; I’ll throw them in. That’s 41 of the 230, or 18% of our students, improved to the extent that, officially, 100% are supposed to achieve.

So educational policy-wise, not so good.

Where was the improvement? Case 2: Absolute Improvement

How about at the individual level? The chart helps with that, too:

[Image: category change matrix, repeated]

Only 18 students were “double gainers”, moving up two categories instead of one. Twelve of those students belonged to Teacher 4; four belonged to Teacher 1, while Teacher 2 and I had one each (although I had two more that just missed by under 3 points). Teachers 1, 2, and 3 had one “double slider” each, a student who dropped two categories.

(I interviewed all the teachers about the double gainers; in all cases, the gains were unique to the students. The teachers all shrugged: who knew why this student improved? It wasn’t some brilliant aha moment unique to that teacher’s methods, nor was it due to the teacher’s inspiring belief and/or enthusiasm. Two of the three echoed my own opinion: the students’ cognitive abilities had just developed over the past year. Or maybe for some reason they’d blown off the test the year before. I taught two of the three “double sliders”—one was mine, one I taught the following year in geometry—so I had the opportunity to ask them about their scores. Both said “Oh, yeah, I totally blew off the test.”)

So a quarter of the students had gains sufficient to move from the middle of one category to the middle of another. The largest improvement was 170 points, with about 10 students seeing a >100 point improvement. The largest decline was 169 points, with two students seeing declines of over 100 points. Another oddity: only one of those two students was a “double slider”. The other two “double sliders” had declines of less than 100 points. My double slider had a 60 point decline; my largest point decline was 89 points, but that student only dropped one category.

However, the primary takeaway from our data is that 63% of the students forced to take algebra twice were, score-wise if not category-wise, a “push”. They dropped or gained slightly: perhaps moving from the bottom of one category to the middle of the same one, or from the top of one category to the bottom of the next.

One might argue that we wasted a year of their lives.

State reports say our average algebra score from 2010 to 2011 nudged up half a point.

So it’s hard to find evidence that we made much of a difference to student achievement as a whole.

I know this is a long post, so I’ll remind the reader that all of the students in my study have already taken algebra once. Chew on that for a while, will you?

Where was the improvement? Case 3: Achievement Gap

I had found no answer to my conundrum in my above numbers, although I had found some comfort. Broken down by category, it’s clear I’m in the hunt. But the breakdown doesn’t explain how we had such a stupendous year.

But when I thought of comparing our state scores from year to year, I got a hint. The other way that schools can achieve educational policy objectives is by closing the achievement gap.

All of this data comes from the state reports for our school, and since I don’t want to discuss who I am on this blog, I can’t provide links. You’ll have to take my word for it—but then, this entire post is based on data that no one else has, so I guess the whole post involves taking my word for it.

2010-11 Change
Overall: +0.5
Whites: −7.2
Hispanics: +4
Economically disadvantaged Hispanics: −1
ELL: +7

Wow. Whites dropped by seven points, Hispanics overall increased by 4, and non-native speakers (almost entirely Hispanic and economically disadvantaged) increased by 7 points.

So clearly, when our administrator was talking about our great year, she was talking about our cleverness in depressing white scores whilst boosting Hispanics.

Don’t read too much into the decline. For example, I personally booted 12 students, most of them white, out of my algebra classes because they’d scored advanced or proficient in algebra the previous year. Why on earth would they be taking the subject again? No other teacher did this, but I know that these students told their friends that they could get out of repeating Algebra I simply by demanding to be put in geometry. So it’s quite possible that much of the loss is due to fewer white advanced or proficient students taking algebra in the first place.

So who was teaching the Hispanics and English Language Learners? While I can’t run reports anymore, I did have my original file of 2010 scores. So this data covers incoming students with 2010 scores, not the final 2011 students. Also, in the file I had, the economically disadvantaged (ED) and ELL overlap was 100%, and I didn’t care about white or black EDs for this count. Disadvantaged non-ELL Asians in algebra are a tiny number (hell, even with ELL). So I kept ED out of it.

 

Hispanic and ELL counts by teacher:
Teacher 1: 30 Hispanic, 21 ELL
Teacher 2: 32 Hispanic, 38 ELL
Teacher 3 (me): 48 Hispanic, 37 ELL
Teacher 4: 39 Hispanic, 12 ELL

Well, now. While Teacher 4 has a hefty number of Hispanics, very few of them are poor or ELLs. Teacher 2 seems to have Asian ELLs in addition to Hispanic ELLs. I have a whole bunch of Hispanics, most of them poor and ELL.

So I had the most mediocre numbers, but we had a great year for Hispanic and ELL scores, and I had the most Hispanic and ELL students. So maybe I was inadvertently responsible for depressing white scores by booting all those kids to geometry, but I had to have something to do with raising scores.

Or did I? Matthew DiCarlo is always warning against confusing year-to-year score comparisons, which are cross-sections of data at a point in time, with comparisons of the same students’ progress at two different points in time. In fact, he would probably say that I don’t have a conundrum, that it’s quite possible for me to have been a crappy teacher who had minimal impact on student achievement compared point to point, while the school’s “cross-section” data, which doesn’t compare students directly, could have some other reason for the dramatic changes.

Fair enough. In that case, we didn’t have a great year, right? It was just random happenstance.

This essay is long enough. So I’ll leave anyone interested to explain why this data shows that merit pay and value added scores are pointless. I’m not sure when I’ll get back to it, as I’ve got grades to do.


Midterms and Ability Indicators

Again with the big monster post (Escaping Poverty has eclipsed all but the top post of my old top 6, and my total blog view count as of today is 45,134), and again with the procrastination of success. I did not watch reruns while in hiding; it was All Election All the Time, and now I’m depressed. In the VDH typology, I’m a Near Fatalist, but an optimistic one. Like Megan McArdle, I think that the demographic changes will be offset by the inability of the Dems to manage a coalition with lots of demands but little else—and yes, I think, after a while, that the producers will wander over to the Republican side. If not, I will achieve total Fatalism.

Anyway. I got unnerved because I have many new followers, and I write about many things that may bore them because I don’t just write about policy. I have two posts almost completely done (okay, I didn’t just watch elections), but was actually intimidated to post them because they are about teaching math. Am I the only writer/blogger scared of the audience?

But I just graded midterms, and I thought I would mention something that may be illustrative for the people who are unhappy with my relatively frank discussion of race. As I wrote when I originally invoked the Voldemort View (a notion proposed by an anonymous teacher):

My top students are white, Hispanic, black, and Asian. My weakest students are white, Hispanic, and Asian. (No, I didn’t forget a group there.) Like all teachers, I don’t care about groups. I teach individuals. And the average IQ of a racial group doesn’t say squat about the cognitive abilities and the thousand other variables that make up each individual.

I wrote this at my previous school. So here’s what’s happening at my new school, in midterm results:

Freshman Geometry

Tied top scores: African American boy, Hispanic girl, white girl. Following right behind with one fewer right answer: two white boys. These five have consistently been the top achievers. The remaining top students are a mix of white and Hispanic (there are no other African Americans).

Low scores: two white girls, a South Asian boy. Three Hispanic kids are next in line, but two of them took their time and did outstandingly well for their skill level, pulling off a D+.

The Asians in my class are all South Asian, and all but one are in the bottom half of the class, although one of those is clearly underachieving.

Keep in mind, however, that the school’s other Asian kids are all in the Honors Geometry class.

Intermediate Algebra

Top score: a white senior boy; right behind him, a white sophomore boy; right behind him, a Chinese junior girl and a Chinese sophomore boy, in that order. My top students, taking the tougher of the two courses I teach in one class, are a mix of whites, Asians (East Asian, Middle Eastern, and South Asian), and Hispanics (two girls, both in the top half of the top group). In the second half of the class, the top students are Chinese (but remember, this puts them in the middle of overall ability) and Hispanic.

Again, this is intermediate algebra; many of the top kids are taking Alg II/Trig.

But talking about race and cognitive ability can instantly annihilate a teacher’s career, because of a flawed premise: a teacher who accepts that cognitive ability is real and explains much of the achievement gap must, the thinking goes, be a racist, a sexist, or both. Racists can’t properly teach because their assumptions will color their outcomes. They’ll treat the black kids like they’re stupid, favor white kids, and assume all Asian kids are awesome math machines who can’t write. The sexists will sigh impatiently at the girls who want more context and less competition, and praise the eager beaver boys who want the facts and figures and that most horrible of all horribles, The Right Answer.

So, for what it’s worth, I offer my results to dispute that premise, and to restate: I don’t teach races, I don’t teach groups. I teach individual students.