Monthly Archives: January 2012

The Gap in the GRE

(For those who have better things to do than ponder GRE scores, this post will make more sense if you know that around four percent of all GRE testers achieve the highest score of 800 on the quant (math) section, while just 2-3% of all testers get over 700 on the verbal section.)

Razib Khan, building on his previous work, correlates GRE verbal and math scores by intended major into a stunningly cool graphic. Many commenters, both at Khan’s and Steve Sailer’s sites, observed the sizable gap between quant and verbal averages and repeated the amateur’s conventional wisdom that foreign testers, particularly Asians, are the cause.

This may be a small point, but could everyone please take note so they don’t irritate me with gormless speculation: verbal scores on standardized tests have been lower than math scores for forty years or more. High verbal scores are extremely rare; high math scores are, in comparison, common.

First up, Sex, Race, Ethnicity, and Performance on the GRE General Test provides GRE scores broken down by nationality for the year 2001-2002 (first year of the last change):

About 3% of US-born testers get anywhere close to 700 on the verbal section. And just to forestall the next objection, no, it’s not URM scores that are dragging the average down, either:


(This is US citizens only).

In this year, at least, white women are 50% of the tested population; white men another 27%. This makes sense; most of the high-volume, low-academic-barrier grad school specialties are white-women jobs (teaching, social work, nursing). That also explains the rather sizable gap between the genders; men who take the GRE are at least as likely to be testing into a hard-sciences specialty as into teaching. But again, it’s clear that about 10-12% of white or US testers are getting over 600 on the verbal.

This isn’t a recent development, as the score history from 1965 on shows:

(Source: NCES, and you can see all the scores through 2007 there).

From 1965-69, verbal and math scores had roughly the same average, although math had the greater standard deviation, which should mean that there were more 800s in math than in verbal. In the lax 70s, both verbal and math scores declined, although verbal scores dropped far more. In the “A Nation at Risk” 80s, math scores rebounded and exceeded the good old days (some of that growth, no doubt, attributable to the increased Asian presence). Verbal scores never did.
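The reasoning in that last point is worth spelling out: with equal means, the distribution with the larger standard deviation puts more testers past any fixed ceiling. A quick back-of-the-envelope check makes the point; the mean, SDs, and cutoff below are invented for illustration, not actual GRE parameters:

    from statistics import NormalDist

    # Two normally distributed score pools with the same mean but
    # different spreads, and one fixed ceiling-level cutoff.
    mean, cutoff = 500, 750
    for label, sd in [("narrower (verbal-like)", 100), ("wider (math-like)", 130)]:
        share = 1 - NormalDist(mean, sd).cdf(cutoff)
        print(f"{label}: {share:.2%} of testers above {cutoff}")
    # The wider pool lands several times as many testers past the
    # cutoff, even though the averages are identical.

With those made-up numbers, the wider distribution puts roughly four times as many testers over the cutoff.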

The GRE was originally a knockoff of the SAT, and the same decline in verbal scores can be seen through these years. Math scores didn’t take much of a dive during this time, interestingly enough.

None of this decline is news; Murray and Herrnstein’s Bell Curve first made the data available, I think. But it shows that long before Asians became overrepresented in college tests, verbal GRE scores have been low, and high verbal scores have always been rarer than high math scores. You can see the same gap between verbal and math in the GMAT, and MCAT science scores are far higher than verbal scores. The LSAT doesn’t test math, and I couldn’t find a breakdown of section scores. I coached the LSAT, though, and distinctly remember reading in the company manual (which I’ve since tossed) that, while most testers thought the logic games section was the most difficult, the reading section had the lowest average score. Can’t find any data confirming or denying that memory, unfortunately.

Why do we appear to have fewer high verbal achievers than math achievers? I think Murray and Herrnstein were correct when they wrote that “a politically compromised curriculum is less likely to sharpen the verbal skills of students than one that hews to standards of intellectual rigor and quality” and that “when parents demanded higher standards, their schools introduced higher standards in the math curriculum that really were higher, and higher standards in the humanities and social sciences that really were not”. (Bell Curve, pages 432-433) Without question, we have lost a couple generations of cognitively able students who weren’t given the opportunity to really achieve to their fullest capability, and we stand to lose a few more.

But I also wonder if verbal intelligence is less understood and consequently less valued. If one is “good at math”, there’s a logical progression of courses to take, problems to solve (or spend a lifetime trying to), and increasingly difficult subjects to tackle–and plenty of careers that want such people. But high verbal intelligence without good spatial aptitude (which seems to be necessary for higher math) is often described as being “good at reading”, a woefully inaccurate characterization—and then what? Apart from law, there aren’t nearly as many clearly defined career paths with a wide range of opportunities for all temperaments and interests. Most of the ones I can think of involve luck and driving ambition just to get started (journalism, tenured academia, political consulting).

For a good twenty years or so, people with high verbal skills who were indifferent at high-level math went into technology. It’s hard to remember now in the age of Google and after the heyday of corporate computing, but IBM and mainframe shops were filled with bright people who had degrees in history and English and humanities who just “didn’t like math” but were excellent programmers. I routinely worked in shops where all the expert techies making six figures came from non-STEM majors. But that time appears to be over.

Of course, doing anything about this lack of clearly defined career paths for smart folks with less spatial aptitude would involve acknowledging it’s a problem, and I might be the only one who thinks it’s a problem.


The Great Shift

A few years back, Charles Murray wrote Real Education, which he marketed as having four simple ideas:

  1. Ability varies
  2. Half of the children are below average
  3. Too many people are going to college
  4. America’s future depends on how we educate the academically gifted

Meanwhile, Mr. Teachbad describes The Great Shift:

It is my responsibility to always be engaging the child, rather than the child’s responsibility to learn how to shut….up, think, and do some thing he or she doesn’t love once in a while. This HUGE shift in responsibility away from students and families and onto teachers is a topic unto itself. It represents an enormous social capitulation and places an utterly unfair burden on teachers.

They’re both largely correct, although I quibble with them on the details. But they’re not just right, they’re correlated.

The American educational system refuses to acknowledge the basic truth behind Murray’s four ideas. I suspect that it would easily accept them if the import of Ideas #1 and #2 weren’t disproportionately allocated by race. Check out exclusively white or Asian high schools and you will find high schools that track ruthlessly, since they have no unsettling patterns in their bonehead classes. Schools whose bonehead classes have an over-representation of underrepresented minorities get lawsuits and multi-generational court orders.

And even if the educational system wanted to acknowledge reality, it couldn’t, because our legal system gets very cranky and starts talking about disparate impact. Our elites get even more upset because, hey, if we can’t move everyone up the ladder equally in our multi-racial, multi-cultural society, then there might be something wrong with the society, and racism is always their favorite culprit.

But regardless of the reason, here we are. If the system can’t accept that abilities vary, and that academic results are strongly linked to cognitive ability, then the system needs someone to blame. The kids can’t be blamed–and here, unlike Mr. Teachbad, I don’t think they should be. They’re not signing up to take trigonometry and poetry analysis and demanding excellent grades for no work. Not that it matters, though, since the system isn’t giving the kids a pass out of kindness but rather necessity. Blaming the kids leads to the obvious solution—take the kids out of the class and, if necessary, out of the school. Back to the disparate impact penalty box and the elites prating about racism and institutional legitimacy.

Government is supposed to protect kids from bad parents, so even if the parents are bad (again, not a major culprit), the public can’t be expected to pony up billions to run schools if the schools are going to shrug and say “wuddyagunna do? It’s the parents.”

That leaves teachers. Mr. Teachbad is correct. It’s extremely unfair. But we can’t resolve it without facing up to the core truths in Murray’s four ideas.


Elementary School Teachers: What do they know, and what does it matter?

My post on teacher qualifications got some play, which was fun. However, a wounded comment from a third-grade teacher suggests that some teachers feel I’m dissing their smarts. Likewise, many bloggers bewailed the apparently low intellect revealed by the unimpressive SAT scores for elementary school, special ed, and P. E. teachers.

But the point is the absurd use of misleading data, not teacher smarts. Eduformers want to convince the public that teachers are a great lump of uneducated, overpaid louts. The bigger the gap between the teacher’s task and the SAT scores, the better to convince. The public might not care that fourth grade teachers average 1500 on the SAT, whereas the same score for a calculus teacher would be unsettling. My sole objective was to disaggregate the scores by teacher role and demonstrate the degree to which eduformers distort the data.

I don’t hold non-secondary school teachers’ intellect in contempt. To the extent I have an opinion on their intelligence, I’d say that these teachers are, as a group, far less likely to prep for the SAT, since they aren’t likely to go to competitive schools. Thus their scores probably aren’t at the top of their ability range. SAT scores in the 510-530 range (per section) are perfectly adequate for the broad middle of cognitive tasks, and that’s where elementary school curriculum knowledge sits.

But here’s the real question: how smart do we want elementary teachers to be?

Remember, the evidence linking teacher content knowledge to student achievement is….well, it’s there, but the relationship is neither as unambiguous nor as strong as eduformers like to think.

Remember, too, that SAT scores aside, elementary school teachers have to pass state licensure tests—and in the past ten years, the states have made these tests much tougher. The ETS report on teacher quality I linked to earlier explicitly links the increasing Praxis failure rates to the more rigorous testing burden, and says:

During the last decade, policies have been put in place to improve the quality of the teaching force. This study examines changes in SAT scores and college grades for two cohorts of Praxis test takers to determine whether the quality of the teacher pool has improved over an eight-year period. While these are relatively simple and generic measures, each has been associated with teacher quality. The results support the view that the policies are working and have contributed to a stronger cohort of individuals seeking teacher certification.

States have already substantially raised the bar for elementary teachers. Check out, for example, California’s shift from the CBEST and an ed school course sequence to the CSET Multiple Subjects Test.

So, naturally, we saw a huge uptick in effectiveness from these new teachers, who were much more effective than teachers from the bad old days of the easy Praxis tests. We’ve seen a dramatic increase in test scores from these new teachers’ students. Smarter teachers have already helped thousands of students, and all we have to do is wait for the teachers from those bad old days to retire.

Right?

Or, then again, maybe we didn’t.

So tell you what, eduformers. Go do that research. Here are some ideas:

  1. Teachers who passed the newer, tougher licensure tests are more effective than teachers who didn’t.
  2. Teachers aren’t really passing the newer, tougher, licensure tests and are getting in through loopholes.
  3. The new licensure tests aren’t really tougher.

Go away until you can explain why the tougher standards haven’t made better teachers. Or prove that they did.

Then we can talk about how smart elementary school teachers need to be.

Until then, chew on this from a Joanne Jacobs commenter:

This data supports the observations I’ve always made that academically-oriented teachers prefer to teach the upper grades while the social-emotionally oriented teachers prefer the lower grades.

Because the life of an elementary school teacher isn’t one that academically-oriented people (smart or not) are likely to embrace. Dealing with kids who throw up, cry, occasionally pee their pants, pick their noses, tell potty jokes, and change best friends daily isn’t a picnic. Dealing with these problems effectively might take more empathy than brains.

I am not dismissing the need for competence: math, in particular, seems to be a problem area. But let’s be really sure we haven’t done enough already before we start demanding more.


Getting Engagement

I once got in a bit of trouble with someone who, after an observation, expressed deep concern that my students had been “off task”. Had I any sense, I would have nodded sagely and agreed, asking for suggestions and methods to improve my students’ engagement level.

Alas, I lay claim to a fair amount of brains but no sense at all, and so, fatally, I looked askance at the comment. It was a Thursday afternoon, sixth period, and the kids had been on task. No, not every second, not every student, but I’d set them a difficult and challenging assignment and they’d known I was being observed. They’d thrown themselves into a high-octane class discussion, jumping in with questions and answers, asking for clarifications, bursting with enthusiasm on cue. A fantastic performance (in the acting sense). Then they worked studiously on the handout. When they got stuck, naturally (sigh) they chitchatted until I came by and answered their questions, and then went back to work. I’d wanted the class to finish at least a quarter of the handout that day and wrap it up the next. Three students finished the entire handout, over half the class finished half of it, and everyone finished the quarter I’d planned. I’d spent the last ten minutes of class, as always, going round one last time to ensure everyone was at a good stopping point for the next day. Then I went back up front, praised them for a good day’s work, reminded them of the key ideas, and told them to be ready to finish up the next day.

And so, instead of asking for suggestions and methods, I demurred—and here’s the really stupid part—told the observer truthfully that a certain amount of off-task is normal, particularly with kids who struggle with math. The trick was, I said nonchalantly, to keep them moving and minimize the off-task behavior by ensuring the students feel capable of doing the work. Like I said: no sense. No worries, though: It all ended well, if not without bloodshed.

One might think—truth be told, people who know me often do think—that I am incapable of accepting criticism. Unfair. My objections were well-founded. In fact, other teachers hearing of that conversation are filled with horror that any observer would expect 100% engagement or anything close to it with a heterogeneous class of mostly math strugglers.

But more to the point, I take all suggestions in, even as I am giving out with my objections and rationales, and they sit in the back of my mind, waiting to pounce.

Improving engagement levels is one of my top two or three concerns as a teacher, and something I think about constantly. Those who don’t worry about it teach in tracked classes, the lucky dogs. My objection to that observer was precisely on this point: the observer didn’t understand how high the engagement level was, given the audience, or that engagement was the primary consideration behind the activity I’d chosen.

Much later, when I was reviewing midpoint and distance with my Algebra II kids on another Thursday, I noticed they were moving too slowly. I’d given them a double worksheet with lots of problems because I wanted them to gain fluency. Some of them were working hard, but too many had lollygagged through five or six on Day One, which meant my original plan for Day Two, finishing the worksheet—on a Friday no less—would be an exercise in herding cats; I’d feel relieved if over half the class did a few problems lackadaisically. It wasn’t so much the number of problems as the time spent focused on the work. What I needed was some sort of activity that would keep them on task the whole time.

And just then, the observer’s suggestion pounced. Not the literal suggestion made that day, of a competition during the last 20 minutes of class. Totally unworkable in my classrooms. But I’d still filed the idea away. Somewhere in that notion was a nugget I instinctively knew was useful—no, not the competition, not the last 20 minutes, god knew (just organizing it would take 10 minutes)…and so I came up with Switch and Stay.

Flashing neon sign goes here: Despite my immediate pushback and valid objection, I did not slough off the suggestion but rather morphed it into something I could use to be more successful at engagement. See? I do listen to criticism.

But on to larger issues:

Progressives think the key to engagement is “relevant curriculum”. They’re mostly wrong, but not entirely. Eduformers think a lack of engagement is the teacher’s fault. They’re mostly wrong, too.

What most people fail to understand is that engagement, in and of itself, doesn’t lead to learning. It seems like it ought to–that’s what happens in all the feel-good teacher movies. Get the kids to care through rap and poetry slams, and suddenly they’re spouting Shakespeare and writing award-winning blogs.

Larry Cuban spells out the engagement assumptions:

  1. Motivated students will engage in academic work that teachers assign.
  2. Engaged students are attentive, participate in classroom activities, and complete assigned work.
  3. Because students pay attention, participate, and complete work, they acquire academic knowledge and skills from teachers and peers that result in classroom and school rewards further strengthening engagement.
  4. Expanded school-based knowledge and skills produce academic improvement as measured by teacher grades and standardized tests.

Cuban is focusing on engagement via high tech in the classroom, but his larger point is worth remembering: there is little evidence that supports this chain of reasoning.

I am reasonably certain that engagement leads to increased individual achievement on the margin. I am equally near certain that engagement has little to do with low test scores. I know many hardworking, engaged kids who don’t do as well as kids who tune out and show up periodically but learn a lot more in half the time with half their attention.

I just need engagement because otherwise, the kids don’t do the work. If they don’t do the work, they won’t have a chance at achieving fluency or creating memories they can access later. But all engagement gives them is a chance at that, not the guarantee of increased understanding and achievement.


Switchers and Stayers

I often read suggestions about competitions and other games to get students working “to speed”. But direct competition doesn’t work unless you’ve got truly homogeneous abilities–the same students always win and the rest won’t try. Likewise, games like trashketball suffer from the problems this blogger describes: either the high-ability students dominate or the low-ability students just sit there and let them do all the work.

But I’ve noticed that any time I give my students a lot of problems to practice automaticity, many of them will just do a few, slowly–the same way they work new material. And that’s not the point. So, thanks to a comment by an observer, I’ve spent quite some time mulling how to run some form of high-octane activity. This one was successful enough that I’m going to try it again.

I put the desks all about the room in pairs, instead of the usual quartets. As the Algebra II students came in the room, I told them to get out a notebook and pencil, put their backpacks on the counter, out of the way (something I really should do more often), and then stand at the back of the room. I told them that I’d seen they hadn’t been productive the day before and I wanted more focused work; this activity would ensure they completed a number of problems. I took half the class and put them one to each pair of seats. They were the Stayers. The rest of the class, at the back of the room, were the Switchers.

The activity: each Switcher sits next to a Stayer. I put up a problem and each pair works it–separately or together, but both of them must have the correct answer in the allotted time in order for each to get a chip. Then the Switchers move to a different Stayer, and it starts again. The more chips, the better the classwork grade. I made them practice Switching, which they thought insane (and said so), and made sure the Stayers knew they were to stay put. Switchers must switch–no sticking to the same person. I also made sure that certain pairs of students were either both Stayers or both Switchers, so they’d never be together. Such a clever teacher am I.
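For anyone who wants the mechanics spelled out, here’s a minimal sketch of one way to run the rotation, with made-up names and a simple rotate-by-one step standing in for the Switch. It’s an illustration of a workable scheme, not a claim that I paired students by algorithm:

    import random

    def both_correct(stayer, switcher, problem):
        # Stand-in for the real check: in class, both students must
        # show the correct answer within the time limit to earn a chip.
        return True

    def switch_and_stay(students, problems):
        # Split the class in half: Stayers sit, Switchers rotate.
        # (In practice, seed the split so feuding students land in
        # the same group and never get paired.)
        random.shuffle(students)
        half = len(students) // 2
        stayers, switchers = students[:half], students[half:2 * half]
        chips = {name: 0 for name in stayers + switchers}
        for problem in problems:
            for stayer, switcher in zip(stayers, switchers):
                if both_correct(stayer, switcher, problem):
                    chips[stayer] += 1
                    chips[switcher] += 1
            # The Switch: rotating by one seat means no Switcher
            # repeats a Stayer until a full cycle is complete.
            switchers = switchers[1:] + switchers[:1]
        return chips

    print(switch_and_stay(["Ana", "Ben", "Cal", "Dee", "Eli", "Fay"],
                          ["p1", "p2", "p3"]))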

Do not imagine that I gave these instructions to a happily compliant group of productive teenagers, eagerly awaiting a new activity from a beloved teacher. Surly, sarcastic, grousing, rude, and close to rebellious, the students made the opening very difficult, and their constant interruptions added a good five minutes onto my instructions–in each class! Sixth period, as always, was the best.

But then, my god. They worked like dogs to get those chips. They set up their own internal competitions to see who could finish first, and the pair that finished first started asking if they could pick the color of chip they got. I gave them five minutes for the first problem in each category, then four, then three. If I noticed a student struggling with a concept, I’d ensure he or she was paired with a high-ability student in the next round and signal that I’d like the stronger student to give a thorough explanation–and they always complied. The students worked 9-10 problems through the period, with very little downtime. Very few students missed any problems, although I made sure a few didn’t get chips the first time round, just to show it could happen.

In short, it worked beautifully after the Great Battle of the Beginning.

But it’s very unsettling, their instant hostility when I do something different. Third and fourth period Algebra II act like I’ve whipped them daily and just stopped yesterday; hackles ready to rise if I raise a hand again. Sixth period is much more relaxed. And then other classes are completely on my side. My geometry kids, particularly second period, would walk off a cliff if I asked them. It was the same thing last year. Second period algebra seemed to hate my guts, while my intervention and sixth period algebra students were likewise ready to try anything on my word. You just never know.


Why higher standards are impossible

Rigorous academic standards are impossible. Full stop. Sorry, Checker (barriers #3 and #4).

Oklahoma’s recent fold is instructive. In 2005, the legislature voted in Achieving Classroom Excellence, a three-part implementation of tougher high school standards. High school graduates, beginning in 2012, would have to pass end-of-course tests in algebra, geometry, English, history, and science.

The math tests don’t seem like cakewalks (Algebra, Geometry), although the English test seems rudimentary.

But then, the state provided exemptions, which are an entirely different story. According to the exemption requirements, students could instead score an 18 on the ACT Math subtest (460 or thereabouts on the SAT) and a 15 and 17 on the English and Reading tests (430 ditto) in order to graduate. Any student who couldn’t pass the state tests faced a far friendlier standard–and a much lower one.

And yet, even with that low bye, Oklahoma is looking to end the requirement, because at least 6,000 students a year are at risk of not graduating.

Given that thousands of Oklahoma ACT testers can’t meet the exemption standard, which is above the mean for African Americans, and just at the mean for Hispanics and Native Americans, that’s not much of a shock.

I can never tell which side does more damage. Progressive educators set standards embarrassingly low while pretending to teach a challenging “idea-rich” curriculum. They think it’s demeaning to teach low ability kids what they need to know, so instead they “scaffold” advanced concepts and lead the kids through a mock version of the real thing. So the kids “read” Hamlet, but in fact, all they do is watch a movie and talk about how they felt when their moms let them down. They are given difficult math problems to solve, in no particular sequence of instruction, but they don’t really have to solve them. It’s not the answer that’s important, it’s the process of thinking about the problem, didn’t you know?

And as frustrating and fraudulent as this behavior is, eduformers top progressives with their purely delusional insistence that all students can learn the same advanced curriculum.

Simple question: what is the algebra mastery rate for students with sub-100 IQs? What’s that? You don’t know? Well, it doesn’t have to be IQ. Pick the cognitive metric of your choice and take the bottom half. How are they doing in algebra?

You still don’t know?

Then kindly shut up about higher standards for all.


Not in front of the children?

I use the phrase Voldemort View (borrowed from an anonymous teacher) to describe the troubles that come along with suggesting that cognitive issues may be the source of the achievement gap. (To repeat myself: the average IQ of a racial group doesn’t say squat about the cognitive abilities of any one individual.)

But Ted Horrell, the new principal of a Memphis, Tennessee high school, didn’t discuss the cause of the achievement gap. In fact, he didn’t mention it at all. He was going through test scores by race and SES, using state reports, to explain why the school was starting a new advisory period. Naturally, a student goes home and complains about the race-based graphics, totally misrepresenting Horrell’s presentation; the media jumps all over the story and ensures the misrepresentation gets played all over the country. (Day One and Day Two of the coverage.) Horrell apologizes, but at least makes it clear that it was the students’ imagination, not his presentation, that started the problem.

Apparently, Horrell should have had race-based assemblies to discuss the results.

You could dismiss this as just another example of the niggardly issue: if a certified member of an identity group takes offense, reality takes a back seat.

But education in America begins and ends with the achievement gap. Horrell took the slides from the state’s website. The media–the same media now playing up the insanity–routinely reports state scores broken down by race and income. I don’t recall them being rated R. No warning to leave the room, or an alert that some viewers might find the information offensive.

Now, apparently all someone has to do is call the paper or the TV station and complain.

So when Congress tries to renew No Child Left Behind, with its mandated reporting to close the achievement gap… hey. Not in front of the kiddies.


Grading Tests

It kills me to say this, but any honest description of my grading would have to include the word “holistic”.

This tendency is getting worse. My normal method for a quiz: I assign points beforehand, weighting the important problems heavily, and then grade the tests. I do not curve, but if I discover all students really tanked an important problem, I go back and re-weight, with a growl and a sigh.

Today, I was grading the data modeling quizzes I described in an earlier post, and just didn’t feel like assigning points.

Here’s the quiz:

Yes, yes, some of you will say “But this is algebra I material! Pre-algebra, in fact!” Newsflash: many, many students still don’t understand this. So get over it. The students had to create a table of values, a graph, and a linear equation for each of four word models, and then a table of values and a graph for each of four given equations. I included one more difficult equation (a difference equaling a constant).

I am usually pretty good at timing tests–on maybe 1 test out of every 10, I am genuinely surprised when my students don’t finish. In this case, I was certain that some students wouldn’t finish, but I was interested in fluency. How many students would be able to finish the whole thing? But even so, I would have been better off with three questions in each section.

Anyway–I wasn’t really interested in finely tuned grades here. So I created four categories before looking at the student tests:

A–finished 6 of 8 problems accurately or with minor errors, and either identified the equations and came up with reasonable word models for most OR completed and graphed the difference equation.

B–did all of one side correctly and clearly didn’t finish the second half (I’d given them the option to come in at lunch or after school to finish), or did parts of both sides correctly.

C–did 2-3 problems correctly in full (table of values, graph) OR did one part of several questions correctly (e.g., table of values done for most problems, no graph).

D/F–very little completed, ranging from 1-2 problems done somewhat correctly down to clearly had no clue.

So then I reviewed the tests and put them into those categories without any markings. I got a nice heap of Bs and Cs, more Ds/Fs than I’d like, but still within reason (only 2-3 absolutely no clue), and about 10 As. Tonight I’ll go through them and point out errors.

Points, schmoints.


Modeling Linear Equations

“I have a certain number of nickels and dimes that add up to $2.10.”

“Sam bought a number of tacos and burritos. The tacos were $2 and the burritos were $3. Sam spent $24.”

“Janice joined a gym with a sign-up fee of $40 and a monthly rate of $25.”

I put these three statements on the board and told my Algebra II students to generate a table of values and graph the values.

Some students were instantly able to use their “real-life” math knowledge to start working. Others needed a push and were then able to start. Some needed calculators to work out the values, but all 90 students were, with minimal prompting, able to use that part of their brain that had nothing to do with school to create a list of possibilities. Only a few questioned the lack of an “answer”, and none needed the explanation more than once.

We did this for three days. Over half my students were able to determine the slope of the line and link it back to the word problem without my expressly teaching it. Some of them improved as time went on; few of them are completely incapable of linking the word model to an equation. With discussion, most students began to realize that models with two changing numbers adding up to a constant (nickels and dimes, tacos and burritos) had a negative slope, because an increase in one led to a decrease in the other.
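To make that concrete with the nickels-and-dimes model, here is the algebra the kids were circling around, written out for the record rather than anything I put on the board:

\[
0.05n + 0.10d = 2.10 \quad\Longrightarrow\quad d = 21 - 0.5n
\]

Every additional nickel costs half a dime, so the slope is \(-0.5\): the constant-sum structure is what forces the negative slope.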

Then, for two days, I gave them problems in equation form:

“y = 3x+2”

“2x + 5y = 45”

“3x – 2y = 6”

“y = -.5x + 50”

They had to generate a table of values and graph the line. When some of them had difficulties, I pointed out the links to word models (what if you were buying burritos for $2 and 6-packs for $5–how much money did you have to spend?) and they got it right away. The subtraction models were the most difficult (which I expected, and didn’t emphasize).
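The burrito-and-six-pack link works out like this (my arithmetic, sketching the prompt against the 2x + 5y = 45 equation above):

\[
2x + 5y = 45 \quad\Longrightarrow\quad y = \frac{45 - 2x}{5}
\]

so \(x = 0, 5, 10\) burritos leave money for \(y = 9, 7, 5\) six-packs: a ready-made table of values.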

And then, for a couple days, I mixed it up–gave them word models and equations.

I began writing the answer to the most common question in huge letters on the white board: YES, YOU CAN JUST PICK ANY NUMBER.

We’ve just spent two days doing the same thing with word models that provide two points but no explicit rate.

“Janice joined a gym that had a monthly rate and a signup fee. After three months, she’d paid $145. After 6 months, she paid $190.”

“Brad buys grain for his livestock on a regular cycle and re-orders it when he runs out. After 3 days, he had 72 pounds left. After 7 days, he had 24 pounds left.”

So they have to create the table, find the slope from the table, and graph it. On Day 2, they had to do that while also answering questions (“What was the signup fee? What was the monthly rate? How much grain did Brad use daily? When would he have to reorder?”).
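For the record (my arithmetic, not a method I modeled for them), the answers fall straight out of the two points:

\[
\text{monthly rate} = \frac{190 - 145}{6 - 3} = 15, \qquad \text{signup fee} = 145 - 3(15) = 100
\]

Brad’s grain runs the same way: \((24 - 72)/(7 - 3) = -12\) pounds a day, so 108 pounds to start and an empty bin on day 9.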

At no point during these two weeks did I work problems algebraically. Some students did, but in all cases I encouraged them to think about other ways to work the data.

They improved dramatically. I gave them a quiz, and all but a few students were able to do the problems with a minimum of questions, although they needed lots of reassurance. “Okay, I think I know what’s going on, so there must be something wrong.” A few kids gave me the “I have no idea how to do this” and I was pretty brutal in my lack of sympathy, because they were the kids who don’t pay attention.

I spent the entire first semester teaching them linear and quadratic equations–graphing, systems, solving, factoring, the works. Algebra II is a course designed for students who don’t want or aren’t ready to move to Algebra II/Trigonometry–or who failed that and need a third year. So the first semester is a rehash of Algebra I. I covered it, they all learned a lot–and yet, the semester final was dismal. Some of it was Christmas crazy, and then I wasn’t happy with the test. But nonetheless, they should have done better.

So I mulled this over Christmas break. All but a few of my students are juniors or seniors. Some will be taking a proficiency exam in a few months. Others will be taking a proxy for the exam in their state tests. I’ve always been more focused on their college tests than their knowledge of second year algebra. I want my students to test out of remedial math, or spend as little time as possible in it.

That’s why I’ve decided to spend a month helping them use their math knowledge–the knowledge they see as entirely separate from algebra and geometry, their “real-life” knowledge–to model data.

If this works properly, the strongest students will have a much deeper understanding of the equations and how they relate to the data. The weaker students will be able to work problems using their inherent math ability, rather than struggling to turn the problem into an abstract representation they don’t recognize.

I’m going to finish up linear equations with maps, to give them a better understanding of midpoint, distance, parallel and perpendicular.

Then it’s on to quadratics.

For more samples and boardwork, see Modeling Linear Equations, part 3.


Teacher Quality Pseudofacts, Part II

In Part I, I looked at the Richwine/Biggs criteria for judging school teachers’ cognitive ability based on GRE scores, which primarily involves secondary school teachers.

On to undergraduate ed majors and those terrible, terrible SAT scores:

Students who indicated that education was their intended major earned a combined math and verbal score of 967, about 0.31 standard deviations below the average of 1,017, meaning the 38th percentile in a standard normal distribution.
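That percentile figure checks out, for what it’s worth; assuming a standard normal distribution, the conversion from z-score to percentile is one line:

    from statistics import NormalDist

    # A score 0.31 standard deviations below the mean of a normal
    # distribution sits at about the 38th percentile.
    print(f"{NormalDist().cdf(-0.31):.1%}")  # prints 37.8%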

Just last year, the National Council on Teacher Quality buried the lede in its research on student teaching:

Fewer than half of all education majors (or even intended education majors) become teachers. Can someone tell me why eduformers are always squawking about ed majors’ SAT scores?

Yes, elementary school teachers are less than stellar, academically speaking. But why not use data that directly links SAT scores to teachers? The Educational Testing Service released a report on teacher quality that is directly on point–so, naturally, eduformers ignore it.

In the 2002-2005 cohort, elementary school teachers’ combined SAT score was over 1000, nearly 40 points higher than the overall mean that Richwine and Biggs use. Secondary school teacher scores in academic subjects are much higher–math and science teachers are above the national average on both sections, and English/history teachers are above in verbal and slightly below in math.

Now, these reports cover only 20 states and DC (California, for example, doesn’t use Praxis tests and so wouldn’t be included). But they’re far more accurate than SAT scores for ed majors.

But Biggs and Richwine use education major SAT scores, even though a Google search reveals actual teacher SAT scores for a huge number of states. And then, as before, they conflate elementary and secondary school teacher scores (to say nothing of PE and special ed instructors).

I really don’t mind an argument about teacher salary. But the data used on teacher quality is simply crap. Next time out, I’ll talk about why eduformers mislead about teacher quality (apart from the obvious goal of saving on salaries), and why progressives let them.